<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_02_23_2317259</id>
	<title>Scaling Algorithm Bug In Gimp, Photoshop, Others</title>
	<author>kdawson</author>
	<datestamp>1266934560000</datestamp>
	<htmltext>Wescotte writes <i>"There is an <a href="http://www.4p8.com/eric.brasseur/gamma.html">important error in most photography scaling algorithms</a>. All software tested has the problem: The Gimp, Adobe Photoshop, CinePaint, Nip2, ImageMagick, GQview, Eye of Gnome, Paint, and Krita. The problem exists across three different operating systems: Linux, Mac OS X, and Windows. (These exceptions have subsequently been reported &mdash; this software does not suffer from the problem: the Netpbm toolkit for graphic manipulations, the developing GEGL toolkit, 32-bit encoded images in Photoshop CS3, the latest version of Image Analyzer, the image exporters in Aperture 1.5.6, the latest version of Rendera, Adobe Lightroom 1.4.1, Pixelmator for Mac OS X, Paint Shop Pro X2, and the Preview app in Mac OS X starting from version 10.6.) Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma. The degradation is often faint, but probably most pictures contain at least an array where the degradation is clearly visible. I believe this has happened since the first versions of these programs, maybe 20 years ago."</i></htmltext>
<tokentext>Wescotte writes " There is an important error in most photography scaling algorithms .
All software tested has the problem : The Gimp , Adobe Photoshop , CinePaint , Nip2 , ImageMagick , GQview , Eye of Gnome , Paint , and Krita .
The problem exists across three different operating systems : Linux , Mac OS X , and Windows .
( These exceptions have subsequently been reported — this software does not suffer from the problem : the Netpbm toolkit for graphic manipulations , the developing GEGL toolkit , 32-bit encoded images in Photoshop CS3 , the latest version of Image Analyzer , the image exporters in Aperture 1.5.6 , the latest version of Rendera , Adobe Lightroom 1.4.1 , Pixelmator for Mac OS X , Paint Shop Pro X2 , and the Preview app in Mac OS X starting from version 10.6 .
) Photographs scaled with the affected software are degraded , because of incorrect algorithmic accounting for monitor gamma .
The degradation is often faint , but probably most pictures contain at least an array where the degradation is clearly visible .
I believe this has happened since the first versions of these programs , maybe 20 years ago .
"</tokentext>
<sentencetext>Wescotte writes "There is an important error in most photography scaling algorithms.
All software tested has the problem: The Gimp, Adobe Photoshop, CinePaint, Nip2, ImageMagick, GQview, Eye of Gnome, Paint, and Krita.
The problem exists across three different operating systems: Linux, Mac OS X, and Windows.
(These exceptions have subsequently been reported — this software does not suffer from the problem: the Netpbm toolkit for graphic manipulations, the developing GEGL toolkit, 32-bit encoded images in Photoshop CS3, the latest version of Image Analyzer, the image exporters in Aperture 1.5.6, the latest version of Rendera, Adobe Lightroom 1.4.1, Pixelmator for Mac OS X, Paint Shop Pro X2, and the Preview app in Mac OS X starting from version 10.6.
) Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma.
The degradation is often faint, but probably most pictures contain at least an array where the degradation is clearly visible.
I believe this has happened since the first versions of these programs, maybe 20 years ago.
"</sentencetext>
</article>
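The error the summary describes is easy to reproduce in a few lines: the affected programs average gamma-encoded pixel values as if they were linear light, which darkens high-contrast detail. A minimal sketch (assuming a plain gamma-2.2 power curve as a stand-in for the exact sRGB transfer function; the function names are illustrative, not from any of the listed programs):

```python
GAMMA = 2.2

def to_linear(v):
    """Gamma-encoded sample [0, 255] -> linear-light intensity [0.0, 1.0]."""
    return (v / 255.0) ** GAMMA

def to_gamma(v):
    """Linear-light intensity [0.0, 1.0] -> gamma-encoded sample [0, 255]."""
    return round((v ** (1.0 / GAMMA)) * 255.0)

def average_naive(a, b):
    """What the affected software does: average the encoded values directly."""
    return round((a + b) / 2.0)

def average_linear(a, b):
    """Gamma-aware averaging: decode, average in linear light, re-encode."""
    return to_gamma((to_linear(a) + to_linear(b)) / 2.0)

# Adjacent black and white pixels, as in the article's striped test images:
print(average_naive(0, 255))   # 128 -- too dark on a gamma-2.2 display
print(average_linear(0, 255))  # 186 -- the perceptually correct mid-gray
```

The gap between 128 and 186 is the brightness loss the article's side-by-side examples make visible.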
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255746</id>
	<title>By Accident</title>
	<author>not\_hylas( )</author>
	<datestamp>1266948780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>By accident, I happen to have the aforementioned (TFA) Lightroom 1.4.1.<br>That wasn't skill, that was luck.</p><p>I am a Pro Photographer and I've just purchased (well, the Mrs. got it for me for Christmas) a *Nikon Super Coolscan 9000 WITH matching photon torpedo sleds, complete with very lame software and support.</p><p>I'm about to begin transferring a considerable amount of slides to digital and this is news to me. I still shoot film and plan to continue until pried from my cold dead hands etc., etc.<br>The article illustrates to me what I've always seen as a result of a combination of 2.2 gamma and photo manipulation - degradation of image - but I'm not expert in that - this guy is.<br>We shoot digital for all the commercial clients and reasons (Hasselblad w/ Sinar back).</p><p>I still use 1.8 gamma, because it sounds cool, actually, because we use a lot of Macintoshes, but never-the-less this guy makes me happy with his findings.<br>I can imagine if I&rsquo;d just been a little bit more on the ball, so to speak, that I&rsquo;d be finished by now only to be cursing him along with the rest of you, but for a different reason.<br>So - this fellow is a God send to me.</p><p>So, I imagine this is where I&rsquo;m supposed to put in my Flickered link to show you how AWESOME I shoot.<br>But I don&rsquo;t roll that way.<br>You&rsquo;d be hard pressed to find but, maybe five or so released to swarming masses that is  - us, basically and they&rsquo;re heavily watermarked Stegoed, with an occasional Bear Trap.</p><p>But I do have Goodies.</p><p>Get from a friend [or find] a Fuji Finepix Pro S2 - S5, and this ICC profile , install, you'll be amazed.</p><p>FinePix.icc</p><p>These are very nice, also.</p><p>Ekta Space PS 5, J. 
Holmes.icc:</p><p><a href="http://www.josephholmes.com/propages/AvailableProducts.html" title="josephholmes.com">http://www.josephholmes.com/propages/AvailableProducts.html</a> [josephholmes.com]</p><p>Beta RGB:</p><p><a href="http://brucelindbloom.com/index.html?BetaRGB.html" title="brucelindbloom.com">http://brucelindbloom.com/index.html?BetaRGB.html</a> [brucelindbloom.com]</p><p>*The Nikon Super Coolscan 9000EN software has<nobr> <wbr></nobr>.icc profile problems, namely they&rsquo;re out of spec with OS X 10.4.x (?) up and will have to be substituted with clones to make it work properly.<br>See examples below.</p><p>Nikon Scan 4.0.2 [M]<nobr> <wbr></nobr>/Library/Application Support/Nikon/Profiles/\%\_NKWide\_CPS.icm<br>
&nbsp; &nbsp; &nbsp; Header profile class is not correct.<br>
&nbsp; &nbsp; &nbsp; Tag 'desc': Tag size is not correct.<nobr> <wbr></nobr>/Library/Application Support/Nikon/Profiles/NKAdobe.icm<br>
&nbsp; &nbsp; &nbsp; Tag 'desc': Tag size is not correct.<nobr> <wbr></nobr>/Library/Application Support/Nikon/Profiles/NKApple.icm<br>
&nbsp; &nbsp; &nbsp; Tag 'desc': Tag size is not correct.<nobr> <wbr></nobr>/Library/Application Support/Nikon/Profiles/NKApple\_CPS.icm<br>
&nbsp; &nbsp; &nbsp; Tag 'desc': Tag size is not correct.<nobr> <wbr></nobr>/Library/Application Support/Nikon/Profiles/NKLch.icm<br>
&nbsp; &nbsp; &nbsp; Header connection space is not correct.<br>
&nbsp; &nbsp; &nbsp; Header data space is not correct.<br>
&nbsp; &nbsp; &nbsp; Header profile class is not correct.<br>
&nbsp; &nbsp; &nbsp; Tag 'desc': Tag size is not correct.<br>
&nbsp; &nbsp; &nbsp; Tag 'A2B0': Number of input channels is not correct.<br>
&nbsp; &nbsp; &nbsp; Tag 'A2B0': Number of output channels is not correct.<br>
&nbsp; &nbsp; &nbsp; Tag 'B2A0': Number of input channels is not correct.<br>
&nbsp; &nbsp; &nbsp; Tag 'B2A0': Number of output channels is not correct.<nobr> <wbr></nobr>... ad nauseam</p><p>If you know of someone who&rsquo;s made the changes please post.</p></htmltext>
<tokentext>By accident , I happen to have the aforementioned ( TFA ) Lightroom 1.4.1.That was n't skill , that was luck.I am a Pro Photographer and I 've just purchased ( well , the Mrs. got it for me for Christmas ) a * Nikon Super Coolscan 9000 WITH matching photon torpedo sleds , complete with very lame software and support.I 'm about to begin transferring a considerable amount of slides to digital and this is news to me .
I still shoot film and plan to continue until pried from my cold dead hands etc. , etc.The article illustrates to me what I 've always seen as a result of a combination of 2.2 gamma and photo manipulation - degradation of image - but I 'm not expert in that - this guy is.We shoot digital for all the commercial clients and reasons ( Hasselblad w/ Sinar back ) .I still use 1.8 gamma , because it sounds cool , actually , because we use a lot of Macintoshes , but never-the-less this guy makes me happy with his findings.I can imagine if I 'd just been a little bit more on the ball , so to speak , that I 'd be finished by now only to be cursing him along with the rest of you , but for a different reason.So - this fellow is a God send to me.So , I imagine this is where I 'm supposed to put in my Flickered link to show you how AWESOME I shoot.But I do n't roll that way.You 'd be hard pressed to find but , maybe five or so released to swarming masses that is - us , basically and they 're heavily watermarked Stegoed , with an occasional Bear Trap.But I do have Goodies.Get from a friend [ or find ] a Fuji Finepix Pro S2 - S5 , and this ICC profile , install , you 'll be amazed.FinePix.iccThese are very nice , also.Ekta Space PS 5 , J. Holmes.icc : http : //www.josephholmes.com/propages/AvailableProducts.html [ josephholmes.com ] Beta RGB : http : //brucelindbloom.com/index.html ? BetaRGB.html [ brucelindbloom.com ] * The Nikon Super Coolscan 9000EN software has .icc profile problems , namely they 're out of spec with OS X 10.4.x ( ?
) up and will have to be substituted with clones to make it work properly.See examples below.Nikon Scan 4.0.2 [ M ] /Library/Application Support/Nikon/Profiles/ \ % \ _NKWide \ _CPS.icm       Header profile class is not correct .
      Tag 'desc ' : Tag size is not correct .
/Library/Application Support/Nikon/Profiles/NKAdobe.icm       Tag 'desc ' : Tag size is not correct .
/Library/Application Support/Nikon/Profiles/NKApple.icm       Tag 'desc ' : Tag size is not correct .
/Library/Application Support/Nikon/Profiles/NKApple \ _CPS.icm       Tag 'desc ' : Tag size is not correct .
/Library/Application Support/Nikon/Profiles/NKLch.icm       Header connection space is not correct .
      Header data space is not correct .
      Header profile class is not correct .
      Tag 'desc ' : Tag size is not correct .
      Tag 'A2B0 ' : Number of input channels is not correct .
      Tag 'A2B0 ' : Number of output channels is not correct .
      Tag 'B2A0 ' : Number of input channels is not correct .
      Tag 'B2A0 ' : Number of output channels is not correct .
... ad nauseamIf you know of someone who 's made the changes please post .</tokentext>
<sentencetext>By accident, I happen to have the aforementioned (TFA) Lightroom 1.4.1.That wasn't skill, that was luck.I am a Pro Photographer and I've just purchased (well, the Mrs. got it for me for Christmas) a *Nikon Super Coolscan 9000 WITH matching photon torpedo sleds, complete with very lame software and support.I'm about to begin transferring a considerable amount of slides to digital and this is news to me.
I still shoot film and plan to continue until pried from my cold dead hands etc., etc.The article illustrates to me what I've always seen as a result of a combination of 2.2 gamma and photo manipulation - degradation of image - but I'm not expert in that - this guy is.We shoot digital for all the commercial clients and reasons (Hasselblad w/ Sinar back).I still use 1.8 gamma, because it sounds cool, actually, because we use a lot of Macintoshes, but never-the-less this guy makes me happy with his findings.I can imagine if I’d just been a little bit more on the ball, so to speak, that I’d be finished by now only to be cursing him along with the rest of you, but for a different reason.So - this fellow is a God send to me.So, I imagine this is where I’m supposed to put in my Flickered link to show you how AWESOME I shoot.But I don’t roll that way.You’d be hard pressed to find but, maybe five or so released to swarming masses that is  - us, basically and they’re heavily watermarked Stegoed, with an occasional Bear Trap.But I do have Goodies.Get from a friend [or find] a Fuji Finepix Pro S2 - S5, and this ICC profile , install, you'll be amazed.FinePix.iccThese are very nice, also.Ekta Space PS 5, J. Holmes.icc:http://www.josephholmes.com/propages/AvailableProducts.html [josephholmes.com]Beta RGB:http://brucelindbloom.com/index.html?BetaRGB.html [brucelindbloom.com]*The Nikon Super Coolscan 9000EN software has .icc profile problems, namely they’re out of spec with OS X 10.4.x (?
) up and will have to be substituted with clones to make it work properly.See examples below.Nikon Scan 4.0.2 [M] /Library/Application Support/Nikon/Profiles/\%\_NKWide\_CPS.icm
      Header profile class is not correct.
      Tag 'desc': Tag size is not correct.
/Library/Application Support/Nikon/Profiles/NKAdobe.icm
      Tag 'desc': Tag size is not correct.
/Library/Application Support/Nikon/Profiles/NKApple.icm
      Tag 'desc': Tag size is not correct.
/Library/Application Support/Nikon/Profiles/NKApple\_CPS.icm
      Tag 'desc': Tag size is not correct.
/Library/Application Support/Nikon/Profiles/NKLch.icm
      Header connection space is not correct.
      Header data space is not correct.
      Header profile class is not correct.
      Tag 'desc': Tag size is not correct.
      Tag 'A2B0': Number of input channels is not correct.
      Tag 'A2B0': Number of output channels is not correct.
      Tag 'B2A0': Number of input channels is not correct.
      Tag 'B2A0': Number of output channels is not correct.
... ad nauseamIf you know of someone who’s made the changes please post.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258224</id>
	<title>Re:Editing in RGB is wrong too</title>
	<author>Anonymous</author>
	<datestamp>1265119800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It's not wrong if you edit in 16-bit RGB, which is what ImageMagick/GraphicsMagick does (or can do).  Start with an 8-bit JPEG out of the camera, read it into a 16-bit linear RGB, do your editing, scaling, and whatnot, then write out an 8-bit JPEG.</p></htmltext>
<tokentext>It 's not wrong if you edit in 16-bit RGB , which is what ImageMagick/GraphicsMagick does ( or can do ) .
Start with an 8-bit JPEG out of the camera , read it into a 16-bit linear RGB , do your editing , scaling , and whatnot , then write out an 8-bit JPEG .</tokentext>
<sentencetext>It's not wrong if you edit in 16-bit RGB, which is what ImageMagick/GraphicsMagick does (or can do).
Start with an 8-bit JPEG out of the camera, read it into a 16-bit linear RGB, do your editing, scaling, and whatnot, then write out an 8-bit JPEG.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648</parent>
</comment>
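The workflow this comment describes can be sketched in plain Python (assuming a simple gamma-2.2 curve as a stand-in for a camera JPEG's actual sRGB encoding; real tools do this with ICC-managed conversions, and the helper names here are illustrative): expand the 8-bit gamma-encoded samples into 16-bit linear values, do all scaling there, and re-encode to 8 bits only at the end.

```python
GAMMA = 2.2

def decode_to_linear16(v8):
    """8-bit gamma-encoded sample -> 16-bit linear-light value."""
    return round(((v8 / 255.0) ** GAMMA) * 65535.0)

def encode_to_gamma8(v16):
    """16-bit linear-light value -> 8-bit gamma-encoded sample."""
    return round(((v16 / 65535.0) ** (1.0 / GAMMA)) * 255.0)

def downscale_2to1(samples8):
    """Halve a row of 8-bit samples by averaging pairs in linear light."""
    linear = [decode_to_linear16(v) for v in samples8]
    halved = [(a + b) // 2 for a, b in zip(linear[0::2], linear[1::2])]
    return [encode_to_gamma8(v) for v in halved]

# A black/white pair blends to perceptual mid-gray; a flat gray pair survives:
print(downscale_2to1([0, 255, 128, 128]))  # -> [186, 128]
```

The extra bit depth matters because linear-light values crowd into the low end of the range; 16 bits keep the round trip from posterizing the shadows.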
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254702</id>
	<title>Some look worse.</title>
	<author>Urza9814</author>
	<datestamp>1266940200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is it just me or do some of the examples look \_better\_ with the "incorrect" scaling?</p><p>For example, this one of the NASA image:<br><a href="http://www.4p8.com/eric.brasseur/gamma\_21.html" title="4p8.com">http://www.4p8.com/eric.brasseur/gamma\_21.html</a> [4p8.com]</p><p>Look right at the center of that picture on the incorrect scaling one, and without moving your eyes switch it to correct. When I do it at least, it feels like my eyes instantly lose focus. With the incorrect scaling, everything looks perfectly crisp and clear. With the corrected one it takes a significant amount of effort to focus anywhere near the center of the image, and it takes significant effort to maintain that focus. The corrected one feels like I'm trying to look at a picture when I haven't put in my contacts yet...</p></htmltext>
<tokentext>Is it just me or do some of the examples look \ _better \ _ with the " incorrect " scaling ? For example , this one of the NASA image : http : //www.4p8.com/eric.brasseur/gamma \ _21.html [ 4p8.com ] Look right at the center of that picture on the incorrect scaling one , and without moving your eyes switch it to correct .
When I do it at least , it feels like my eyes instantly lose focus .
With the incorrect scaling , everything looks perfectly crisp and clear .
With the corrected one it takes a significant amount of effort to focus anywhere near the center of the image , and it takes significant effort to maintain that focus .
The corrected one feels like I 'm trying to look at a picture when I have n't put in my contacts yet.. .</tokentext>
<sentencetext>Is it just me or do some of the examples look \_better\_ with the "incorrect" scaling?For example, this one of the NASA image:http://www.4p8.com/eric.brasseur/gamma\_21.html [4p8.com]Look right at the center of that picture on the incorrect scaling one, and without moving your eyes switch it to correct.
When I do it at least, it feels like my eyes instantly lose focus.
With the incorrect scaling, everything looks perfectly crisp and clear.
With the corrected one it takes a significant amount of effort to focus anywhere near the center of the image, and it takes significant effort to maintain that focus.
The corrected one feels like I'm trying to look at a picture when I haven't put in my contacts yet...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31266402</id>
	<title>So what....who cares!?</title>
	<author>Anonymous</author>
	<datestamp>1265113500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma. The degradation is often faint, but probably most pictures contain at least an array where the degradation is clearly visible. I believe this has happened since the first versions of these programs, maybe 20 years ago."</p><p>And yet, for the last 20 years, the porn I look at is crystal clear!</p></htmltext>
<tokentext>" Photographs scaled with the affected software are degraded , because of incorrect algorithmic accounting for monitor gamma .
The degradation is often faint , but probably most pictures contain at least an array where the degradation is clearly visible .
I believe this has happened since the first versions of these programs , maybe 20 years ago .
" And yet , for the last 20 years , the porn I look at is crystal clear !</tokentext>
<sentencetext>"Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma.
The degradation is often faint, but probably most pictures contain at least an array where the degradation is clearly visible.
I believe this has happened since the first versions of these programs, maybe 20 years ago.
"And yet, for the last 20 years, the porn I look at is crystal clear!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257592</id>
	<title>Re:Wrong</title>
	<author>Anonymous</author>
	<datestamp>1265112720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In response to you and the many dozen other similar comments along this vein strewn all over, carelessly poo-pooing the work.</p><p>This is significant.  I, for one, appreciate the amount of detailed work that went into characterizing and explaining the problem.</p><p>I can also well believe that for most photos, and for most kinds of processing that's being done, the "erroneous" results are close enough, and/or it doesn't matter so much.</p><p>But that doesn't lessen the importance of getting this out there and known.  It is how progress is made.  I know I'll be thinking about these issues the next time I have a image-related coding task at hand.  Thank you to everyone involved in bringing these problems to our collective attention.  I appreciate it.</p></htmltext>
<tokentext>In response to you and the many dozen other similar comments along this vein strewn all over , carelessly poo-pooing the work.This is significant .
I , for one , appreciate the amount of detailed work that went into characterizing and explaining the problem.I can also well believe that for most photos , and for most kinds of processing that 's being done , the " erroneous " results are close enough , and/or it does n't matter so much.But that does n't lessen the importance of getting this out there and known .
It is how progress is made .
I know I 'll be thinking about these issues the next time I have an image-related coding task at hand .
Thank you to everyone involved in bringing these problems to our collective attention .
I appreciate it .</tokentext>
<sentencetext>In response to you and the many dozen other similar comments along this vein strewn all over, carelessly poo-pooing the work.This is significant.
I, for one, appreciate the amount of detailed work that went into characterizing and explaining the problem.I can also well believe that for most photos, and for most kinds of processing that's being done, the "erroneous" results are close enough, and/or it doesn't matter so much.But that doesn't lessen the importance of getting this out there and known.
It is how progress is made.
I know I'll be thinking about these issues the next time I have an image-related coding task at hand.
Thank you to everyone involved in bringing these problems to our collective attention.
I appreciate it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255250</id>
	<title>FILM is not dead it just smells funny</title>
	<author>Anonymous</author>
	<datestamp>1266944280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I shoot film so meh</p></htmltext>
<tokentext>I shoot film so meh</tokentext>
<sentencetext>I shoot film so meh</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255288</id>
	<title>what's the scope of this?</title>
	<author>ILuvRamen</author>
	<datestamp>1266944460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>So is this like RGB color schemes only in certain scaling modes only?  Like if I do a CMYK photoshop project and scale an image down with a non-standard scaling type optimized for reduction or gradient preservation or enlargement or whatever then would it be affected by the glitch?</htmltext>
<tokentext>So is this like RGB color schemes only in certain scaling modes only ?
Like if I do a CMYK photoshop project and scale an image down with a non-standard scaling type optimized for reduction or gradient preservation or enlargement or whatever then would it be affected by the glitch ?</tokentext>
<sentencetext>So is this like RGB color schemes only in certain scaling modes only?
Like if I do a CMYK photoshop project and scale an image down with a non-standard scaling type optimized for reduction or gradient preservation or enlargement or whatever then would it be affected by the glitch?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632</id>
	<title>Not so common image</title>
	<author>Anonymous</author>
	<datestamp>1266939840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>1) this image was created to reproduce the bug. except for low definition display (TV/NTSC) its not common to have this "scanlines".
2) instead of resising, just blur the image before and then resise.
3) Interesting bug, thou.</htmltext>
<tokentext>1 ) this image was created to reproduce the bug .
Except for low-definition displays ( TV/NTSC ) it 's not common to have these " scanlines " .
2 ) instead of just resizing , blur the image first and then resize .
3 ) Interesting bug , though .</tokentext>
<sentencetext>1) this image was created to reproduce the bug.
Except for low-definition displays (TV/NTSC) it's not common to have these "scanlines".
2) instead of just resizing, blur the image first and then resize.
3) Interesting bug, though.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258094</id>
	<title>Re:Wrong</title>
	<author>b4dc0d3r</author>
	<datestamp>1265118360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You polled professional users about this question, and none of them cared?  Let's see your raw data.</p><p>Professionals have been using lower-gamma monitors for a while because it looks more faithful to the original.  They thought it was because the monitor was better quality, or somehow represented the image data better.  Turns out it could have been just an artifact of scaling - resizing images in linear space results in a better looking image when your monitor gamma is closer to 1 than 2.2.  I'd say that indicates that some people noticed, although they didn't understand what the underlying cause was.</p><p>If you read half the comments here (you might need to adjust your threshold, this is not one of the more moderated topics right now) you'll see other examples.  It is important to some people, they have had problems with this.</p><p>When I scrolled through the examples, it was obvious to me that any image that has ever been resized should be re-processed with a fixed algorithm.  Images of famous paintings in art books would be a great start, since you're supposed to be seeing what the artist painted, not a rough approximation.  It's easy for subtle effects to get lost in the conversion.  The line drawing example was all I needed to see to be convinced.</p><p>I've scaled down images to save space before, and noticed that line drawing type images looked off somehow, but I figured if you make the lines thinner (by making everything smaller) you lose density.  Kinda like re-scaling a white box on a black background - less white box means the overall brightness decreases.  Turns out it just shouldn't be as obvious.</p><p>It bothered me, but since I'm not a professional I suppose you won't count me in your survey.</p></htmltext>
<tokentext>You polled professional users about this question , and none of them cared ?
Let 's see your raw data.Professionals have been using lower-gamma monitors for a while because it looks more faithful to the original .
They thought it was because the monitor was better quality , or somehow represented the image data better .
Turns out it could have been just an artifact of scaling - resizing images in linear space results in a better looking image when your monitor gamma is closer to 1 than 2.2 .
I 'd say that indicates that some people noticed , although they did n't understand what the underlying cause was.If you read half the comments here ( you might need to adjust your threshold , this is not one of the more moderated topics right now ) you 'll see other examples .
It is important to some people , they have had problems with this.When I scrolled through the examples , it was obvious to me that any image that has ever been resized should be re-processed with a fixed algorithm .
Images of famous paintings in art books would be a great start , since you 're supposed to be seeing what the artist painted , not a rough approximation .
It 's easy for subtle effects to get lost in the conversion .
The line drawing example was all I needed to see to be convinced.I 've scaled down images to save space before , and noticed that line drawing type images looked off somehow , but I figured if you make the lines thinner ( by making everything smaller ) you lose density .
Kinda like re-scaling a white box on a black background - less white box means the overall brightness decreases .
Turns out it just should n't be as obvious.It bothered me , but since I 'm not a professional I suppose you wo n't count me in your survey .</tokentext>
<sentencetext>You polled professional users about this question, and none of them cared?
Let's see your raw data.Professionals have been using lower-gamma monitors for a while because it looks more faithful to the original.
They thought it was because the monitor was better quality, or somehow represented the image data better.
Turns out it could have been just an artifact of scaling - resizing images in linear space results in a better looking image when your monitor gamma is closer to 1 than 2.2.
I'd say that indicates that some people noticed, although they didn't understand what the underlying cause was.If you read half the comments here (you might need to adjust your threshold, this is not one of the more moderated topics right now) you'll see other examples.
It is important to some people, they have had problems with this.When I scrolled through the examples, it was obvious to me that any image that has ever been resized should be re-processed with a fixed algorithm.
Images of famous paintings in art books would be a great start, since you're supposed to be seeing what the artist painted, not a rough approximation.
It's easy for subtle effects to get lost in the conversion.
The line drawing example was all I needed to see to be convinced.I've scaled down images to save space before, and noticed that line drawing type images looked off somehow, but I figured if you make the lines thinner (by making everything smaller) you lose density.
Kinda like re-scaling a white box on a black background - less white box means the overall brightness decreases.
Turns out it just shouldn't be as obvious.It bothered me, but since I'm not a professional I suppose you won't count me in your survey.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258628</id>
	<title>Re:Monitor gamma?</title>
	<author>jonadab</author>
	<datestamp>1265122740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&gt; meanwhile, I see a grey rectangle in firefox,<br>&gt; and I still don't get what that signifies.<br><br>It mostly signifies that the image in question was carefully designed to be pathologically horrible.<br><br>In fact, it's not really one image, but two very different images, interleaved every other line.  Look closely (or, if your eyes are bad, zoom to 400% or so).  One image is tinted heavily toward green, the other heavily toward magenta, and the brightness and contrast of each are heavily distorted, in a way that causes the average across two lines to always be the same shade of gray.  The green image is brighter in the bright areas and darker in the dark areas; the magenta image is not as bright in the bright areas and not as dark in the dark areas.  Additionally, the green image uses extra red in the dark areas to compensate for the darkness of the magenta image.  And that's just the obvious stuff.<br><br>Frankly, the fact that you see the Dalai Lama (when it's not scaled down) if you back up and view it from a distance is *arguably* an optical illusion, or at the very least a testament to the amazing design of your visual cortex, that it's able to make any kind of sense at all out of the distorted mess your eyes are giving it.  Designing software to do the same thing when scaling the image down is probably beyond the reach of the current state of the art in computer science, or at the very least it would have to draw heavily on AI vision research.  Straightforward arithmetic isn't going to produce anything that looks like the Dalai Lama if it takes the whole image into account.<br><br>What you *can* do, to work around the interleaved design of the image, is use the most naive scaling algorithm of all, wherein the software just takes every other pixel and ignores the ones in between.  That will give you either the green or the magenta version of the image, depending on whether your software takes the first pixel or the second pixel of every pair.  The fact that this gives better results than VASTLY superior algorithms is a testament to the pathologically extreme design of the image.</htmltext>
<tokenext>&gt; meanwhile , I see a grey rectangle in firefox , &gt; and I still do n't get what that signifies.It mostly signifies that the image in question was carefully designed to be pathologically horrible.In fact , it 's not really one image , but two very different images , interleaved every-other-line .
Look closely ( or , if your eyes are bad , zoom to 400 \ % or so ) .
The one image is tinted heavily toward green , and the other heavily toward magenta , and the brightness and contrast of each of them are heavily distorted , in a way that causes the average across two lines to always be the same shade of gray .
The green image is brighter in the bright areas and darker in the dark areas ; the magenta image is not as bright in the bright areas and not as dark in the dark areas .
Additionally , the green image uses extra red in the dark areas to compensate for the darkness of the magenta image .
And that 's just the obvious stuff.Frankly , the fact that you see the Dalai Llama ( when it 's not scaled down ) if you back up and view it from a distance is * arguably * an optical illusion , or at the very least a testament to the amazing design of your visual cortex , that it 's able to make any kind of sense at all out of the distorted mess your eyes are giving it .
Designing software to do the same thing when scaling the image down is probably beyond the reach of the current state of the art in computer science , or certainly it would have to draw heavily on AI vision research .
Straightforward arithmetic is n't going to produce anything that looks like the Dalai Llama if it takes the whole image into account.What you * can * do , to work around the interleaved design of the image , is use the most naive scaling algorithm of all , wherein the software just takes every other pixel and ignores the ones in between .
That will give you either the green or the magenta version of the image , depending on whether your software takes the first pixel or the second pixel of every pair .
The fact that this gives better results than VASTLY superior algorithms is a testament to the pathologically extreme design of the image .</tokentext>
<sentencetext>&gt; meanwhile, I see a grey rectangle in firefox,&gt; and I still don't get what that signifies.It mostly signifies that the image in question was carefully designed to be pathologically horrible.In fact, it's not really one image, but two very different images, interleaved every-other-line.
Look closely (or, if your eyes are bad, zoom to 400\% or so).
The one image is tinted heavily toward green, and the other heavily toward magenta, and the brightness and contrast of each of them are heavily distorted, in a way that causes the average across two lines to always be the same shade of gray.
The green image is brighter in the bright areas and darker in the dark areas; the magenta image is not as bright in the bright areas and not as dark in the dark areas.
Additionally, the green image uses extra red in the dark areas to compensate for the darkness of the magenta image.
And that's just the obvious stuff.Frankly, the fact that you see the Dalai Llama (when it's not scaled down) if you back up and view it from a distance is *arguably* an optical illusion, or at the very least a testament to the amazing design of your visual cortex, that it's able to make any kind of sense at all out of the distorted mess your eyes are giving it.
Designing software to do the same thing when scaling the image down is probably beyond the reach of the current state of the art in computer science, or certainly it would have to draw heavily on AI vision research.
Straightforward arithmetic isn't going to produce anything that looks like the Dalai Llama if it takes the whole image into account.What you *can* do, to work around the interleaved design of the image, is use the most naive scaling algorithm of all, wherein the software just takes every other pixel and ignores the ones in between.
That will give you either the green or the magenta version of the image, depending on whether your software takes the first pixel or the second pixel of every pair.
The fact that this gives better results than VASTLY superior algorithms is a testament to the pathologically extreme design of the image.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560</parent>
</comment>
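The interleaving trick described in the comment above can be sketched with a toy image (an assumption for illustration, not the article's actual test image): two alternating rows whose naive average is always the same gray, so an averaging scaler flattens the picture while every-other-row sampling recovers one of the hidden sub-images.

```python
# Toy sketch of an "interleaved" image: a bright row and a dark row
# alternate, and their plain average is always the same mid-gray.

def make_interleaved(width, height, gray=128):
    """Alternate a bright and a dark row whose naive average is `gray`."""
    rows = []
    for y in range(height):
        if y % 2 == 0:
            rows.append([gray + 60] * width)  # stand-in for the "green" sub-image
        else:
            rows.append([gray - 60] * width)  # stand-in for the "magenta" sub-image
    return rows

def scale_half_naive_average(img):
    """Average each pair of rows (a crude 2x vertical downscale)."""
    return [[(a + b) // 2 for a, b in zip(r0, r1)]
            for r0, r1 in zip(img[0::2], img[1::2])]

def scale_half_drop_rows(img):
    """'Take every other pixel' scaling: keep even rows, ignore odd ones."""
    return img[0::2]

img = make_interleaved(4, 4)
print(scale_half_naive_average(img))  # every row collapses to flat gray (128)
print(scale_half_drop_rows(img))      # only the bright sub-image survives (188)
```

Averaging destroys the pattern by construction, while the naive row-dropping scaler returns one sub-image intact, which is the behavior the comment describes.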
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256752</id>
	<title>you can't avoid it</title>
	<author>r00t</author>
	<datestamp>1265102580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You WILL assume a gamma value.</p><p>Dumb code implicitly assumes a 1.0 gamma. Most files are approximately the sRGB gamma, which is 2.2 if you ignore some oddities. Very rarely you may encounter an ancient Mac file with a 1.8 gamma.</p><p>So, if you do nothing, you're at 1.0 and sucking hard on most files.</p><p>Applying a square to decode and a square root to re-encode (i.e., assuming gamma 2.0) is really cheap and easy, and it gets you a lot closer. I think you can even vectorize it in one direction.</p></htmltext>
<tokenext>You WILL assume a gamma value.Dumb code implicitly assumes a 1.0 gamma .
Most files are approximately the sRGB gamma , which is 2.2 if you ignore some oddities .
Very rarely you may encounter an ancient Mac file with a 1.8 gamma.So , if you do nothing , you 're at 1.0 and sucking hard on most files.Applying square and square root ( gamma 2.0 ) is really cheap and easy , and it gets you a lot closer .
I think you can even vectorize it in one direction .</tokentext>
<sentencetext>You WILL assume a gamma value.Dumb code implicitly assumes a 1.0 gamma.
Most files are approximately the sRGB gamma, which is 2.2 if you ignore some oddities.
Very rarely you may encounter an ancient Mac file with a 1.8 gamma.So, if you do nothing, you're at 1.0 and sucking hard on most files.Applying square and square root (gamma 2.0) is really cheap and easy, and it gets you a lot closer.
I think you can even vectorize it in one direction.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
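The parent's cheap approximation can be sketched in a few lines; gamma 2.0 here is the stated shortcut, not the real sRGB curve:

```python
import math

# Sketch of the square / square-root trick: decode gamma-encoded bytes
# with a square, average in (approximately) linear light, then re-encode
# with a square root. Gamma 2.0 is an approximation standing in for sRGB.

def average_gamma2(a, b):
    """Average two gamma-encoded values (0..255) in linear light, gamma = 2.0."""
    lin = ((a / 255.0) ** 2 + (b / 255.0) ** 2) / 2.0  # square = decode
    return round(math.sqrt(lin) * 255.0)               # square root = re-encode

def average_naive(a, b):
    """What 'dumb code' (implicit gamma 1.0) does."""
    return round((a + b) / 2.0)

# Mixing black and white: naive math gives mid-gray 128, but the
# linear-light mix re-encodes to a noticeably brighter value.
print(average_naive(0, 255))   # 128
print(average_gamma2(0, 255))  # 180
```

The square and square root are just integer multiplies and a cheap root, which is why the parent calls it nearly free compared with evaluating a true 2.2 power curve per pixel.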
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254892</id>
	<title>Re:Author expands scaling defination</title>
	<author>Anonymous</author>
	<datestamp>1266941460000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>You're absolutely correct, AC. The reported issue isn't about a linear/nonlinear gamma bug at all - it's an averaging side effect.</p><p>The sample Dalai Lama image on TFA's page is intentionally constructed of interlaced lines of red and green data to thwart the averaging of source data used in common scaling algorithms. If you use the Gimp with the "None" scaling method, which will just pick up every other row and column when scaling by 50% (instead of trying to average 2x2 grids), you get a mostly-green image instead of the grey image advertised.</p></htmltext>
<tokenext>You 're absolutely correct , AC .
The reported issue is n't about a linear/nonlinear gamma bug at all - it 's an averaging side effect.The sample Dalai Lama image on TFA 's page is intentionally constructed of interlaced lines of red and green data to thwart the averaging of source data used in common scaling algorithms .
If you use the Gimp with the " None " scaling method , which will just pick-up every other row and column when scaling by 50 \ % , ( instead of trying to average 2x2 grids ) you get a mostly-green image instead of the grey image advertised .</tokentext>
<sentencetext>You're absolutely correct, AC.
The reported issue isn't about a linear/nonlinear gamma bug at all - it's an averaging side effect.The sample Dalai Lama image on TFA's page is intentionally constructed of interlaced lines of red and green data to thwart the averaging of source data used in common scaling algorithms.
If you use the Gimp with the "None" scaling method, which will just pick-up every other row and column when scaling by 50\%, (instead of trying to average 2x2 grids) you get a mostly-green image instead of the grey image advertised.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263888</id>
	<title>Re:Monitor gamma?</title>
	<author>amorsen</author>
	<datestamp>1265102340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Designing software to do the same thing when scaling the image down is probably beyond the reach of the current state of the art in computer science, or certainly it would have to draw heavily on AI vision research.</p></div><p>Fail. The author of the article designed software to do the scaling correctly. He didn't use AI or anything fancy; he just changed the gamma to 1, scaled, and converted back to 2.2.</p></htmltext>
<tokenext>Designing software to do the same thing when scaling the image down is probably beyond the reach of the current state of the art in computer science , or certainly it would have to draw heavily on AI vision research.Fail .
The author of the article designed software to do the scaling correctly .
He did n't use AI or anything fancy , he just changed the gamma to 1 , scaled , and converted back to 2.2 .</tokentext>
<sentencetext>Designing software to do the same thing when scaling the image down is probably beyond the reach of the current state of the art in computer science, or certainly it would have to draw heavily on AI vision research.Fail.
The author of the article designed software to do the scaling correctly.
He didn't use AI or anything fancy, he just changed the gamma to 1, scaled, and converted back to 2.2.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258628</parent>
</comment>
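The decode-scale-re-encode approach the comment above describes can be sketched directly; a plain 2.2 power curve is assumed here (real sRGB adds a linear segment near black):

```python
# Sketch of gamma-correct downscaling: convert gamma 2.2 to linear light,
# do the averaging there, then convert back to gamma 2.2. The plain power
# curve is an assumption; it is not exactly the sRGB transfer function.

def decode(v, gamma=2.2):
    """Gamma-encoded byte (0..255) -> linear light (0..1)."""
    return (v / 255.0) ** gamma

def encode(lin, gamma=2.2):
    """Linear light (0..1) -> gamma-encoded byte."""
    return round((lin ** (1.0 / gamma)) * 255.0)

def downscale_2x1_linear(row, gamma=2.2):
    """Average horizontal pixel pairs in linear light."""
    return [encode((decode(a, gamma) + decode(b, gamma)) / 2.0, gamma)
            for a, b in zip(row[0::2], row[1::2])]

def downscale_2x1_naive(row):
    """Average pixel pairs directly on the encoded values (the bug)."""
    return [round((a + b) / 2.0) for a, b in zip(row[0::2], row[1::2])]

row = [0, 255, 0, 255]           # alternating black/white pixels
print(downscale_2x1_naive(row))  # [128, 128] -- too dark
print(downscale_2x1_linear(row)) # [186, 186] -- the linear-light result
```

The gap between 128 and 186 on a black/white checker is exactly the kind of darkening the article's test image is built to expose.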
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257020</id>
	<title>RTFA</title>
	<author>Joce640k</author>
	<datestamp>1265105460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The article has seven lines of text followed by a section on how to find out if your software has this problem.</p><p>I know this is slashdot, but surely you could make it seven lines into an article...</p></htmltext>
<tokenext>The article has seven lines of text followed a section on how to find out if your software has this problem.I know this is slashdot but surely you could make it seven lines into an article.. .</tokentext>
<sentencetext>The article has seven lines of text followed a section on how to find out if your software has this problem.I know this is slashdot but surely you could make it seven lines into an article...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257398</id>
	<title>Re:Old news</title>
	<author>nrgy</author>
	<datestamp>1265110260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Bill, where is my rss slashdot feed node for Nuke at? :)</htmltext>
<tokenext>Bill where is my rss slashdot feed node for Nuke at ?
: )</tokentext>
<sentencetext>Bill where is my rss slashdot feed node for Nuke at?
:)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255306</id>
	<title>This is what I got when I scaled the image down.</title>
	<author>Anonymous</author>
	<datestamp>1266944640000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p><a href="http://img714.imageshack.us/i/gammadalailamagrayscale.jpg" title="imageshack.us" rel="nofollow">SCALED IMAGE</a> [imageshack.us]</p></htmltext>
<tokenext>SCALED IMAGE [ imageshack.us ]</tokentext>
<sentencetext>SCALED IMAGE [imageshack.us]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254498</id>
	<title>Re:What about Irfanview and Picasa?</title>
	<author>Holmwood</author>
	<datestamp>1266938760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you read the fine article, you'll see they give a sample image you can test against your applications. I tested Irfanview (4.25) and it appears to suffer from the problem. Haven't tried Picasa yet; don't have it installed.</p></htmltext>
<tokenext>If you read the fine article , you 'll see they give a sample image you can test against your applications .
I tested Irfanview ( 4.25 ) and it appears to suffer from the problem .
Have n't tried Picasa yet ; do n't have it installed .</tokentext>
<sentencetext>If you read the fine article, you'll see they give a sample image you can test against your applications.
I tested Irfanview (4.25) and it appears to suffer from the problem.
Haven't tried Picasa yet; don't have it installed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259666</id>
	<title>My God is this OLD NEWS</title>
	<author>Theovon</author>
	<datestamp>1265127960000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>Sensationalism.  This is the kind of sensationalism I used to get into, actually.  OMG!  GIMP antialiased lines look like ropes because they don't account for gamma properly!  (I noticed that 10 years ago.)</p><p>Yeah, so what happens is that these apps scale the image with colors in luma space.  In luma space, the color ramp of pixel values looks linear to the human eye.  The thing is, the human eye is not linear, so it's technically incorrect to do linear math to combine pixel values in luma space.</p><p>Instead, we should be working in luminance space.  Luminance is linear in terms of physical light intensity, and you can do linear math directly on luminance values and have them make physical sense.</p><p>The reason we use luma is that it's more compact to represent what you can SEE in a static image.  What takes 8 bits in luma space requires 14 bits in luminance space to get the fine just-noticeable differences at the right end of the spectrum.</p><p>Most people who have some reasonably advanced education in graphics know ALL about this.  And they also realize that it's just not worth worrying about most of the time.  I've messed around with gamma-aware scaling, antialiasing, and dithering.  Except for very rare circumstances, if you have fine-enough steps in your luma space, it's very difficult to tell the difference, if there even is any.  Even dithering to a 6x6x6 color cube looks almost as good in luma as it does in luminance.  The only situation where it's vital is if your color space is really small.  For instance, if you wanted to dither a color image to a 2x2x2 color cube (8 colors), then ignoring gamma makes it look completely wrong.</p><p>I'll make a physics analogy.  This guy is complaining that Newtonian mechanics is inaccurate compared to Einsteinian, except that we're dealing with speeds of 100s of miles per hour.  Not going to make a noticeable difference.</p></htmltext>
<tokenext>Sensationalism .
This is the kind of sensationalism I used to get into , actually .
OMG ! GIMP antialiased lines look like ropes because they do n't account for gamma properly !
( I noticed that 10 years ago .
) Yeah , so what happens is that these apps scale the image with colors in luma space .
In luma space , the color ramp of pixel values looks linear to the human eye .
The thing is , the human eye is not linear , so it 's technically incorrect to do linear math to combine pixel values in luma space.Instead , we should be working in luminance space .
Luminance is linear in terms of physical light intensity , and you can do linear math directly on luminance values and have them make physical sense.The reason we use luma is because it 's more compact to represent what you can SEE in a static image .
What takes 8 bits in luma space requires 14 bits in luminance space to get the fine just noticable differences at the right end of the spectrum.Most people who have some reasonably advanced education in graphics know ALL about this .
And they also realize that it 's just not worth worrying about most of the time .
I 've messed around with gamma-aware scaling , antialiasing , and dithering .
Except for very rare circumstances , if you have fine-enough steps in your luma space , it 's very difficult to tell the difference , if there even is any .
Even dithering to a 6x6x6 color cube looks almost as good in luma as it does in luminance .
The only situation where it 's vital is if your color space is really small .
For instance , if you wanted to dither a color image to a 2x2x2 color cube ( 8 colors ) , then ignoring gamma makes it look completely wrong.I 'll make a physics analogy .
This guy is complaining that Newtonian mechanics is inaccurate compared to Einsteinian , except that we 're dealing with speeds of 100s of miles per hour .
Not going to make a noticable difference .</tokentext>
<sentencetext>Sensationalism.
This is the kind of sensationalism I used to get into, actually.
OMG!  GIMP antialiased lines look like ropes because they don't account for gamma properly!
(I noticed that 10 years ago.
)Yeah, so what happens is that these apps scale the image with colors in luma space.
In luma space, the color ramp of pixel values looks linear to the human eye.
The thing is, the human eye is not linear, so it's technically incorrect to do linear math to combine pixel values in luma space.Instead, we should be working in luminance space.
Luminance is linear in terms of physical light intensity, and you can do linear math directly on luminance values and have them make physical sense.The reason we use luma is because it's more compact to represent what you can SEE in a static image.
What takes 8 bits in luma space requires 14 bits in luminance space to get the fine just noticable differences at the right end of the spectrum.Most people who have some reasonably advanced education in graphics know ALL about this.
And they also realize that it's just not worth worrying about most of the time.
I've messed around with gamma-aware scaling, antialiasing, and dithering.
Except for very rare circumstances, if you have fine-enough steps in your luma space, it's very difficult to tell the difference, if there even is any.
Even dithering to a 6x6x6 color cube looks almost as good in luma as it does in luminance.
The only situation where it's vital is if your color space is really small.
For instance, if you wanted to dither a color image to a 2x2x2 color cube (8 colors), then ignoring gamma makes it look completely wrong.I'll make a physics analogy.
This guy is complaining that Newtonian mechanics is inaccurate compared to Einsteinian, except that we're dealing with speeds of 100s of miles per hour.
Not going to make a noticable difference.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256140</id>
	<title>Re:Monitor gamma?</title>
	<author>eelke\_klein</author>
	<datestamp>1266952920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually it is good to use a particular gamma for storing an image, because the human eye is not linear. If a gamma of 1 were used, the steps at the black end would be too large and at the white end too small (when using 8 bits per channel).</p><p>We are not using a gamma of 2.2 because the old CRTs did; the CRTs had a gamma of 2.2 because it was determined that would work well.</p></htmltext>
<tokenext>Actually it is good to use a particular gamma for storing an image because the human eye is not linear .
If a gamma of 1 was used to steps at the black end would be to large and at the white end to small ( when using 8-bits per channel ) .We are not using a gamma of 2.2 because the old CRT 's did the CRT 's had a gamma of 2.2 because it was determined that would work well .</tokentext>
<sentencetext>Actually it is good to use a particular gamma for storing an image because the human eye is not linear.
If a gamma of 1 was used to steps at the black end would be to large and at the white end to small (when using 8-bits per channel).We are not using a gamma of 2.2 because the old CRT's did the CRT's had a gamma of 2.2 because it was determined that would work well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255076</parent>
</comment>
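The step-size point in the comment above can be checked numerically; the plain gamma-2.2 power curve below is an assumption (real sRGB differs slightly near black):

```python
# Numeric sketch: with gamma-2.2 encoding, one 8-bit code-value step
# spans a far smaller slice of linear light near black than near white.
# Linear (gamma 1.0) storage at 8 bits would therefore spend its codes
# where the eye is least sensitive and starve the shadows.

def to_linear(v, gamma=2.2):
    """Decode an 8-bit gamma-encoded value to linear light in 0..1."""
    return (v / 255.0) ** gamma

dark_step = to_linear(2) - to_linear(1)        # linear width of one step near black
bright_step = to_linear(255) - to_linear(254)  # linear width of one step near white
print(bright_step / dark_step)  # hundreds of times wider near white
```

The huge ratio between the two step widths is the reason gamma encoding packs more distinguishable shades into 8 bits than linear encoding would.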
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255850</id>
	<title>How will this affect emulators?</title>
	<author>Schraegstrichpunkt</author>
	<datestamp>1266949800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I wonder how this will affect emulators.  Did old computers (say, the Amiga) use a gamma of 1.0?</htmltext>
<tokenext>I wonder how this will affect emulators .
Did old computers ( say , the Amiga ) use a gamma of 1.0 ?</tokentext>
<sentencetext>I wonder how this will affect emulators.
Did old computers (say, the Amiga) use a gamma of 1.0?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31269184</id>
	<title>Consequences....</title>
	<author>rickshaf</author>
	<datestamp>1265139360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Have any images exploded due to this glitch?  When will the recall be announced?</htmltext>
<tokenext>Have any images exploded due to this glitch ?
When will the recall be announced ?</tokentext>
<sentencetext>Have any images exploded due to this glitch?
When will the recall be announced?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</id>
	<title>HA!</title>
	<author>TheDarkener</author>
	<datestamp>1266938820000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.</p></htmltext>
<tokenext>Well , I am SURE glad I 'm using Linux ^ H ^ H ^ H ^ H ^ HWindows ^ H ^ H ^ H ^ H ^ H ^ H ^ HMac ^ H ^ H ^ Hshit .</tokentext>
<sentencetext>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31296590</id>
	<title>Re:Oh dear. Linear color space again, 11 years lat</title>
	<author>Anonymous</author>
	<datestamp>1267285560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>From the gamma FAQ:<br>On the other hand, if your computation involves human perception, a nonlinear representation may be required. For example, if you perform a discrete cosine transform on image data as the first step in image compression, as in JPEG, then you ought to use nonlinear coding that exhibits perceptual uniformity, because you wish to minimize the perceptibility of the errors that will be introduced during quantization.</p></htmltext>
<tokenext>From the gamma FAQ : On the other hand , if your computation involves human perception , a nonlinear representation may be required .
For example , if you perform a discrete cosine transform on image data as the first step in image compression , as in JPEG , then you ought to use nonlinear coding that exhibits perceptual uniformity , because you wish to minimize the perceptibility of the errors that will be introduced during quantization .</tokentext>
<sentencetext>From the gamma FAQ:On the other hand, if your computation involves human perception, a nonlinear representation may be required.
For example, if you perform a discrete cosine transform on image data as the first step in image compression, as in JPEG, then you ought to use nonlinear coding that exhibits perceptual uniformity, because you wish to minimize the perceptibility of the errors that will be introduced during quantization.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254956</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256818</id>
	<title>Re:Monitor gamma?</title>
	<author>Anonymous</author>
	<datestamp>1265103420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Just turn off the color management thingy.</p></htmltext>
<tokenext>Just turn off the color management thingy .</tokentext>
<sentencetext>Just turn off the color management thingy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254956</id>
	<title>Re:Oh dear. Linear color space again, 11 years lat</title>
	<author>Skapare</author>
	<datestamp>1266942060000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>It's basically an implementation issue.  The algorithms may be fine as intended... in linear space.  The programmers that implemented them didn't understand linear vs. gamma, or didn't care, or had a fire-breathing PHB on their back.  Hence we get junk software.</p><p>At least all MY image processing code always works in linear space.  But merely converting 8-bit gamma to 8-bit linear is no good, because that introduces some serious quantizing artifacts (major banding effects happen).  So I convert the 8-bit gammas to at least 30- or 31-bit integer if I need processing speed, or all the way to double precision floating point if I need as close to correct as possible.  After processing, I then convert back to 8-bit gammas.  Even then, you can't totally eliminate some banding effects that result from being in 8-bit.  If you can get more bits from the raw images from your camera, that's the best to use.  Apparently many JPEG compressors are also doing their DCT calculations in the non-unit gamma space instead of the linear space (which reduces the effectiveness of the compression somewhat, and may add more compression artifacts).</p></htmltext>
<tokenext>It 's basically an implementation issue .
The algorithms may be fine as intended ... in linear space .
The programmers that implemented them did n't understand linear vs. gamma , or did n't care , or had a fire breathing PHB on their back .
Hence we get junk software.At least all MY image processing code always works in linear space .
Bu merely converting 8-bit gamma to 8-bit linear is no good because that now introduces some serious quantizing artifacts ( major banding effects happen ) .
So I convert the 8-bit gammas to at least 30 or 31 bit integer if I need processing speed , or all the way to double precision floating point if I need as close to correct as possible .
After processing , then I convert back to 8-bit gammas .
Even then , you ca n't totally eliminate some banding effects that result from being in 8-bit .
If you can get more bits from the raw images from your camera , that 's the best to use .
Apparently many JPEG compressors are also doing their DCT calculations in the non-unit gamma space instead of the linear space , too ( which reduces the effectiveness of the compression somewhat , and may add more compression artifacts ) .</tokentext>
<sentencetext>It's basically an implementation issue.
The algorithms may be fine as intended ... in linear space.
The programmers that implemented them didn't understand linear vs. gamma, or didn't care, or had a fire breathing PHB on their back.
Hence we get junk software.At least all MY image processing code always works in linear space.
Bu merely converting 8-bit gamma to 8-bit linear is no good because that now introduces some serious quantizing artifacts (major banding effects happen).
So I convert the 8-bit gammas to at least 30 or 31 bit integer if I need processing speed, or all the way to double precision floating point if I need as close to correct as possible.
After processing, then I convert back to 8-bit gammas.
Even then, you can't totally eliminate some banding effects that result from being in 8-bit.
If you can get more bits from the raw images from your camera, that's the best to use.
Apparently many JPEG compressors are also doing their DCT calculations in the non-unit gamma space instead of the linear space, too (which reduces the effectiveness of the compression somewhat, and may add more compression artifacts).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664</parent>
</comment>
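The precision point in the comment above (8-bit linear intermediates band badly, wide intermediates round-trip cleanly) can be demonstrated with a small sketch; the plain 2.2 power curve is an assumption:

```python
# Sketch of why an 8-bit *linear* intermediate is harmful: quantizing the
# linear value to 0..255 collapses many dark shades onto each other, while
# a float intermediate preserves every input code value.

def to_linear_f(v, gamma=2.2):
    """8-bit gamma-encoded value -> linear light as a float in 0..1."""
    return (v / 255.0) ** gamma

def to_gamma_byte(lin, gamma=2.2):
    """Linear light in 0..1 -> 8-bit gamma-encoded value."""
    return round((lin ** (1.0 / gamma)) * 255.0)

def roundtrip_8bit_linear(v):
    """Decode, quantize the linear value to 8 bits too, re-encode."""
    lin8 = round(to_linear_f(v) * 255.0)
    return to_gamma_byte(lin8 / 255.0)

def roundtrip_float_linear(v):
    """Decode to float, re-encode, with no quantization in the middle."""
    return to_gamma_byte(to_linear_f(v))

# How many of the 64 darkest shades survive each round trip?
survivors_8bit = {roundtrip_8bit_linear(v) for v in range(64)}
survivors_float = {roundtrip_float_linear(v) for v in range(64)}
print(len(survivors_8bit), len(survivors_float))
```

With the 8-bit linear intermediate, the 64 darkest input shades map onto only a handful of distinct outputs (the banding the commenter describes); the float intermediate keeps all 64 distinct.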
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256804</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1265103240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p> --- and I'm not saying this as platform evangelism -- for one, you'd be hard pressed to disagree that Mac OS X's font-rendering, kerning, and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.</p></div><p>Ever looked at the font-rendering, kerning, and anti-aliasing in a well-configured Linux system? It makes Mac OS X look like shit. It takes some twiddling to get it right on both systems, and Mac OS X, delivered with standardised hardware(*), usually has a better default configuration. <b>But</b> Linux <i>could</i> be superior out of the box, <i>if</i> it were bundled with computers that had decent hardware and was well configured; and it still looks good enough on hardware where OS X would look like a disaster.</p><p>(*) As proved by running Apple software on non-Apple hardware.</p></htmltext>
<tokenext>--- and I 'm not saying this as platform evangelism -- for one , you 'd be hard pressed to disagree that Mac OS X 's font-rendering , kerning , and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples .
Ever looked at the font-rendering , kerning , and anti-aliasing in a well configured Linux-system .
It make Mac OS X look like shit .
It takes some twiddling to get it right in both systems and Mac OS X , delivered with a standardised hardware ( * ) , usually has a better default configuration .
But Linux could be superior out of the box , if if it were bundled with computers that had decent hardware and was well configured and it still looks good enough on hardware where OS X would look like a disaster .
( * ) As proved by running Apple software on non-Apple hardware .</tokentext>
<sentencetext> --- and I'm not saying this as platform evangelism -- for one, you'd be hard pressed to disagree that Mac OS X's font-rendering, kerning, and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.
Ever looked at the font-rendering, kerning, and anti-aliasing in a well configured Linux-system.
It make Mac OS X look like shit.
It takes some twiddling to get it right in both systems and Mac OS X, delivered with a standardised hardware(*), usually has a better default configuration.
But Linux could be superior out of the box, if if it were bundled with computers that had decent hardware and was well configured and it still looks good enough on hardware where OS X would look like a disaster.
(*) As proved by running Apple software on non-Apple hardware.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255416</id>
	<title>Re:Nitpicking</title>
	<author>Anonymous</author>
	<datestamp>1266945600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>But at least now I know there was some actual reason why I had to sharpen pics when making my thumbnail images; it wasn't just because I thought it looked off. Contrast really is lost with the standard scaling/resizing routine. Having an alternate method that evaluates the image histogram and correctly adjusts some factors before/during scaling based on that would be nice to have. (Apparently this is what the author is demonstrating.) Now the question is how soon until we can get the plugins to download for our image-manipulating software(s)?</p></htmltext>
<tokenext>But at least now I know there was some actual reason for why I had to sharpen pics when making my thumbnail images , it was n't just because I thought it looked off .
Contrast really is lost with the standard scaling/resizing routine .
Having an alternate method that evaluates the image histogram and correctly adjusts some factors before/during scaling based on that would be nice to have .
( Apparently this is what the author is demonstrating .
) Now the question is how soon until we can get the plugins to download for our image manipulating software ( s ) ?</tokentext>
<sentencetext>But at least now I know there was some actual reason for why I had to sharpen pics when making my thumbnail images, it wasn't just because I thought it looked off.
Contrast really is lost with the standard scaling/resizing routine.
Having an alternate method that evaluates the image histogram and correctly adjusts some factors before/during scaling based on that would be nice to have.
(Apparently this is what the author is demonstrating.
) Now the question is how soon until we can get the plugins to download for our image manipulating software(s)?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31264734</id>
	<title>Re:Oh calm down..</title>
	<author>Anonymous</author>
	<datestamp>1265106000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Funny thing is, I noticed this when scaling an image in Photoshop CS4 yesterday for the first time -- and it was quite noticeable.  Scattered pixels in an RGB image scaled to an incredibly wrong shade.  This compounds with each repeated scaling.  I think I've noticed this in the past as well, but never put my finger on what was wrong then.</p></htmltext>
<tokenext>Funny thing is , I noticed this when scaling an image in Photoshop CS4 yesterday for the first time -- and it was quite noticeable .
Scattered pixels in an RGB image scaled to an incredibly wrong shade .
This compounds with each repeated scaling .
I think I 've noticed this in the past as well , but never put my finger on what was wrong then .</tokentext>
<sentencetext>Funny thing is, I noticed this when scaling an image in Photoshop CS4 yesterday for the first time -- and it was quite noticeable.
Scattered pixels in an RGB image scaled to an incredibly wrong shade.
This compounds with each repeated scaling.
I think I've noticed this in the past as well, but never put my finger on what was wrong then.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258372</id>
	<title>Re:Gamma</title>
	<author>uglyduckling</author>
	<datestamp>1265120880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Well, I probably can't explain <i>exactly</i> what it is.  My understanding is that the human eye doesn't perceive brightness linearly, so gamma correction is used to nonlinearly redistribute the values so that we get more detail in the low-brightness regions.  The upshot is that e.g. in 8-bit colour, a value of 128 in a given colour isn't perceived as half as bright as a value of 255, but as much darker.  So an algorithm that just takes the mean of a given number of pixels in order to scale will generally produce much darker pictures than expected.  A good compromise would be to take the root of the mean of the squares (i.e. a gamma of 2.0), because the commonest gamma is 2.2, and on older Macs it is 1.8.</htmltext>
<tokenext>Well , I probably ca n't explain \ _exactly \ _ what it is .
My understanding is that the human eye does n't perceive brightness linearly , so gamma correction is used to logarithmically scale the values so that we can get more detail in the low-brightness regions .
The upshot is that e.g .
in 8-bit colour , a value of 128 in a given colour is n't perceived as half as bright as a value of 256 , but as much darker .
So an algorithm that just takes the mean of a given number of pixels in order to scale will generally produce much darker pictures than expected .
A good compromise would be to take the root of the mean of the squares ( i.e .
a gamma of 2.0 ) because the commonest gamma is 2.2 , and on older Macs it is 1.8 .</tokentext>
<sentencetext>Well, I probably can't explain \_exactly\_ what it is.
My understanding is that the human eye doesn't perceive brightness linearly, so gamma correction is used to logarithmically scale the values so that we can get more detail in the low-brightness regions.
The upshot is that e.g.
in 8-bit colour, a value of 128 in a given colour isn't perceived as half as bright as a value of 256, but as much darker.
So an algorithm that just takes the mean of a given number of pixels in order to scale will generally produce much darker pictures than expected.
A good compromise would be to take the root of the mean of the squares (i.e.
a gamma of 2.0) because the commonest gamma is 2.2, and on older Macs it is 1.8.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254740</parent>
</comment>
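The averaging error described in the comment above is easy to demonstrate in a few lines. The sketch below is an editorial illustration, not code from any of the tools named in the story, and it assumes a pure power-law gamma of 2.2 in place of the true piecewise sRGB curve:

```python
# Sketch: averaging two 8-bit pixel values in gamma space vs. linear light.
# Assumes a pure power-law gamma of 2.2 (the real sRGB curve has a small
# linear toe near black, omitted here for brevity).

GAMMA = 2.2

def to_linear(v8):
    """Decode an 8-bit gamma-encoded value to linear light in [0, 1]."""
    return (v8 / 255.0) ** GAMMA

def to_gamma(lin):
    """Encode linear light back to an 8-bit gamma code value."""
    return round(255.0 * lin ** (1.0 / GAMMA))

def naive_average(a, b):
    """What a naive scaler does: average the encoded bytes directly."""
    return round((a + b) / 2)

def linear_average(a, b):
    """Gamma-aware: decode, average in linear light, re-encode."""
    return to_gamma((to_linear(a) + to_linear(b)) / 2.0)

# Averaging pure black and pure white:
print(naive_average(0, 255))   # 128
print(linear_average(0, 255))  # 186
```

The naive result, code 128, is displayed by a 2.2-gamma monitor at only about 22% of full brightness instead of the expected 50%; averaging in linear light yields code 186, the perceptual midpoint.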
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056</id>
	<title>Old news</title>
	<author>Anonymous</author>
	<datestamp>1266942780000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>My software has been calculating in linear space for over a decade now (this is the Nuke compositor currently produced by The Foundry, but at the time it was used by Digital Domain for Titanic). You can see some pages I wrote on the effect here: <a href="http://mysite.verizon.net/~spitzak/conversion/composite.html" title="verizon.net">http://mysite.verizon.net/~spitzak/conversion/composite.html</a> [verizon.net]. See here for the overall paper: <a href="http://mysite.verizon.net/~spitzak/conversion/index.html" title="verizon.net">http://mysite.verizon.net/~spitzak/conversion/index.html</a> [verizon.net] and a Siggraph sketch on the conversion of such images here: <a href="http://mysite.verizon.net/~spitzak/conversion/sketches_0265.pdf" title="verizon.net">http://mysite.verizon.net/~spitzak/conversion/sketches_0265.pdf</a> [verizon.net]. In fact, a lot more work went into figuring out how to get such linear images to show on the screen on hardware of that era than into the obvious need to do the math in linear. Initial work on this was done for Apollo 13, as the problems with gamma were quite obvious when scaling images of small bright objects against the black of space.</p><p>For typical photographs the effect is not very visible in scaling, as the gamma curve is very close to a straight line between two nearby points, and thus the result is not very different. Only widely separated values (i.e. very high-contrast images with sharp edges) will show a visible difference. This probably means you are trying to scale line art; there are screenshots in the html pages showing the results of this. Far worse errors can be found in lighting calculations and in filtering operations such as blur. At the time even the most expensive professional 3D renderers were doing lighting completely wrong, but things have gotten better now that they can use floating-point intermediate images.</p><p>One big annoyance is that you had better do the math in floating point.
Even 16 bits is insufficient for linear light levels, as the black points will be too far apart and visibly so (the space is wasted on many, many more white levels than you would ever need). A logarithmic system is needed, and on modern hardware you might as well use IEEE floating point, or the ILM "half" standard for 16-bit floating point.</p></htmltext>
<tokenext>My software has been calculating in linear space for over a decade now ( this is the Nuke Compositor currenlty produced by The Foundry but at the time it was used by Digital Domain for Titanic ) .
You can see some pages I wrote on the effect here : http : //mysite.verizon.net/ ~ spitzak/conversion/composite.html [ verizon.net ] .
See here for the overall paper : http : //mysite.verizon.net/ ~ spitzak/conversion/index.html [ verizon.net ] and a Siggraph paper on the conversion of such images here : http : //mysite.verizon.net/ ~ spitzak/conversion/sketches \ _0265.pdf [ verizon.net ] , in fact a lot more work went into figuring out how to get such linear images to show on the screen on hardware of that era than on the obvious need to do the math in linear .
Initial work on this was done for Apollo 13 as the problems with gamma were quite obvious when scaling images of small bright objects against the black of space.For typical photographs the effect is not very visible in scaling , as the gamma curve is very close to a straight line for two close points and thus the result is not very much different .
Only widely separated points ( ie very high contrast images with sharp edges ) will show a visible difference .
This probably means you are trying to scale line art , there are screenshots in the html pages showing the results of this .
Far worse errors can be found in lighting calculations and in filtering operations such as blur .
At the time even the most expensive professional 3D renderers were doing lighting completely wrong , but things have gotten better now that they can use floating point intermediate images.One big annoyance is that you better do the math in floating point .
Even 16 bits is insufficient for linear light levels as the black points will be too far apart and visible ( the space is wasted on many many more white levels than you ever would need ) .
A logarithmic system is needed , and on modern hardware you might as well use IEEE floating point , or the ILM " half " standard for 16-bit floating point .</tokentext>
<sentencetext>My software has been calculating in linear space for over a decade now (this is the Nuke Compositor currenlty produced by The Foundry but at the time it was used by Digital Domain for Titanic).
You can see some pages I wrote on the effect here: http://mysite.verizon.net/~spitzak/conversion/composite.html [verizon.net].
See here for the overall paper: http://mysite.verizon.net/~spitzak/conversion/index.html [verizon.net] and a Siggraph paper on the conversion of such images here: http://mysite.verizon.net/~spitzak/conversion/sketches\_0265.pdf [verizon.net], in fact a lot more work went into figuring out how to get such linear images to show on the screen on hardware of that era than on the obvious need to do the math in linear.
Initial work on this was done for Apollo 13 as the problems with gamma were quite obvious when scaling images of small bright objects against the black of space.For typical photographs the effect is not very visible in scaling, as the gamma curve is very close to a straight line for two close points and thus the result is not very much different.
Only widely separated points (ie very high contrast images with sharp edges) will show a visible difference.
This probably means you are trying to scale line art, there are screenshots in the html pages showing the results of this.
Far worse errors can be found in lighting calculations and in filtering operations such as blur.
At the time even the most expensive professional 3D renderers were doing lighting completely wrong, but things have gotten better now that they can use floating point intermediate images.One big annoyance is that you better do the math in floating point.
Even 16 bits is insufficient for linear light levels as the black points will be too far apart and visible (the space is wasted on many many more white levels than you ever would need).
A logarithmic system is needed, and on modern hardware you might as well use IEEE floating point, or the ILM "half" standard for 16-bit floating point.</sentencetext>
</comment>
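The comment's claim that 16 integer bits are insufficient for linear light can be checked with a little arithmetic. This is an editorial sketch (a pure power-law 2.2 gamma is assumed in place of any particular film or sRGB curve):

```python
# Sketch: why a 16-bit *integer* linear encoding under-resolves the darks.
# Assumes a pure power-law gamma of 2.2.

GAMMA = 2.2

def srgb8_to_linear(v8):
    """Decode an 8-bit gamma-encoded code value to linear light in [0, 1]."""
    return (v8 / 255.0) ** GAMMA

# Linear light corresponding to the darkest non-black 8-bit codes:
lin1 = srgb8_to_linear(1)   # ~5e-6
lin2 = srgb8_to_linear(2)

step16 = 1 / 65535  # smallest step of a 16-bit integer linear encoding

# Code 1's linear value is smaller than one 16-bit step, so an integer
# 16-bit linear image cannot even distinguish it from black.
print(lin1 < step16)  # True
```

Since the darkest non-black 8-bit code decodes to less than one 16-bit integer step, an integer linear encoding crushes the shadows while wasting codes on the highlights -- which is the comment's argument for floating point or the ILM 16-bit "half" format.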
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256112</id>
	<title>Re:Not so common image</title>
	<author>im_thatoneguy</author>
	<datestamp>1266952620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No but fine detail is common.  The 'bug' causes an increase in moire effects.</p></htmltext>
<tokenext>No but fine detail is common .
The 'bug ' causes an increase in moire effects .</tokentext>
<sentencetext>No but fine detail is common.
The 'bug' causes an increase in moire effects.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259786</id>
	<title>Re:Is there a way to use this for steganography?</title>
	<author>omnichad</author>
	<datestamp>1265128320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No.</p></htmltext>
<tokenext>No .</tokentext>
<sentencetext>No.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255714</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255936</id>
	<title>This is nothing new in 3D rendering world</title>
	<author>Technomancer</author>
	<datestamp>1266950640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Even the R300, which was released in 2002, has bits to turn gamma and degamma on for textures and AA resolve. See http://www.x.org/docs/AMD/R3xx_3D_Registers.pdf and search for gamma.</p></htmltext>
<tokenext>Even R300 which was released in 2002 has bits to turn gamma and degamma on on textures and AA resolve .
See here http : //www.x.org/docs/AMD/R3xx \ _3D \ _Registers.pdf and search for gamma .</tokentext>
<sentencetext>Even R300 which was released in 2002 has bits to turn gamma and degamma on on textures and AA resolve.
See here http://www.x.org/docs/AMD/R3xx\_3D\_Registers.pdf and search for gamma.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255582</id>
	<title>Re:Oh calm down..</title>
	<author>SEWilco</author>
	<datestamp>1266947340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I have an explanation, but it will not fit in this margin.</htmltext>
<tokenext>I have an explanation , but it will not fit in this margin .</tokentext>
<sentencetext>I have an explanation, but it will not fit in this margin.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256352</id>
	<title>Re:Monitor gamma?</title>
	<author>socsoc</author>
	<datestamp>1265140860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Is this new?  The meta tag for the date attribute on that page says 20080203.</htmltext>
<tokenext>Is this new ?
The meta tag for the date attribute on that page says 20080203</tokentext>
<sentencetext>Is this new?
The meta tag for the date attribute on that page says 20080203</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257112</id>
	<title>Re:Monitor gamma?</title>
	<author>OrangeCatholic</author>
	<datestamp>1265106840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&gt;To display the pictures, it makes sense to use the monitor gamma. But to actually modify the data using that information, which is probably flawed in 99.9999999% of cases? That's just wrong.
<br> <br>
You should read his explanation.  It's pretty lucid.  It has to do with the fact that the gamma is <i>expected</i> and <i>built into the system</i>.  It's also largely standardized at 2.2.  So ignoring the gamma is a fundamental mistake.
<br> <br>
The purpose of the gamma is to allow fine gradations in the very dark and very light ranges.  Without it, 8-bit color is basically useless: you would have black, lots of grey, and then white.  No off-whites or decent shadows.</htmltext>
<tokenext>&gt; To display the pictures , it makes sense to use the monitor gamma .
But to actually modify the data using that information which is probably flawed in 99.9999999 \ % of cases ?
That 's just wrong .
You should read his explanation .
It 's pretty lucid .
It has to do with the fact that the gamma is expected and built into the system .
It 's also largely standardized at 2.2 .
So to ignore the gamma is a fundamental mistake .
The purpose of the gamma is to allow fine gradients of very dark and very white .
Without it , 8-bit color is basically useless....you would have black , lots of grey , and then white .
No off-whites or decent shadows .</tokentext>
<sentencetext>&gt;To display the pictures, it makes sense to use the monitor gamma.
But to actually modify the data using that information which is probably flawed in 99.9999999\% of cases?
That's just wrong.
You should read his explanation.
It's pretty lucid.
It has to do with the fact that the gamma is expected and built into the system.
It's also largely standardized at 2.2.
So to ignore the gamma is a fundamental mistake.
The purpose of the gamma is to allow fine gradients of very dark and very white.
Without it, 8-bit color is basically useless....you would have black, lots of grey, and then white.
No off-whites or decent shadows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
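The point above, that mid-code is far from mid-brightness, can be put in numbers. A minimal editorial sketch, assuming the standard pure power-law 2.2 (real sRGB adds a small linear toe):

```python
# Sketch: with a 2.2 display gamma, code 128 emits only about 22% of the
# light of code 255, even though it is numerically half of the range --
# which is exactly why the encoding spends its precision on the dark end.

GAMMA = 2.2

def code_to_light(v8):
    """Relative linear light a display emits for an 8-bit code value."""
    return (v8 / 255.0) ** GAMMA

print(code_to_light(128))  # roughly 0.22, not 0.5
```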
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254462</id>
	<title>Re:What about Irfanview and Picasa?</title>
	<author>tenton</author>
	<datestamp>1266938640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I know this is /. and to say "RTFA" is kind of pointless, but, please, RTFA. There's a tweaked sample there for you to try. It will be obvious if you try their sample with whatever graphics program you want to use.</p></htmltext>
<tokenext>I know this is / .
and to say " RTFA " is kind of pointless , but , please , RTFA .
There 's a tweaked sample there for you to try .
It will be obvious if you try their sample with whatever graphics program you want to use .</tokentext>
<sentencetext>I know this is /.
and to say "RTFA" is kind of pointless, but, please, RTFA.
There's a tweaked sample there for you to try.
It will be obvious if you try their sample with whatever graphics program you want to use.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254486</id>
	<title>Re:Monitor gamma?</title>
	<author>Anonymous</author>
	<datestamp>1266938700000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Excellent point.  Just to be safe though, I'm going to take another look through my porn crypt to see if that's true.</p><p>BRB.</p></htmltext>
<tokenext>Excellent point .
Just to be safe though , I 'm going to take another look through my porn crypt to see if that 's true.BRB .</tokentext>
<sentencetext>Excellent point.
Just to be safe though, I'm going to take another look through my porn crypt to see if that's true.BRB.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257360</id>
	<title>Re:Gamma and sRGB: Hardware to the rescue?</title>
	<author>Terje Mathisen</author>
	<datestamp>1265109840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>AFAIR all modern graphics cards (DX10-11) must have the ability to handle gamma for all sampling operations, i.e. internally they must convert each contributing sample into a linear value before averaging them. BTW, the conversions are not quite log/antilog; a small part of the input range is linear!</p><p>DX10 requires these conversions to be very accurate, with at least 10-bit precision in both directions, so as long as you can live with any of the supported sampling algorithms you can just let the graphics driver do all the hard work.</p><p>If you cannot depend on the hardware, maybe because you'd like to use sinc sampling, then it isn't too hard or expensive to do it in software either:</p><p>I have written SSE-based software to do these operations, which means that the conversions had to use polynomial approximations to make it possible to do several of them in parallel. (For a single sample at a time, a lookup table is the obvious choice: a 256-entry table of 16-bit values uses just 512 bytes, while a reverse table going from 11-bit averages to 8-bit gamma samples needs 1KB.)</p><p>Using a very simple polynomial makes it possible to convert 4 individual samples simultaneously.</p><p>Terje</p></htmltext>
<tokenext>Afair all modern graphics cards ( DX10-11 ) must have the ability to handle gamma for all sampling operations , i.e .
internally they must convert each contributing sample into a linear value before averaging them .
BTW , the conversions are not quite log/antilog , a small part of the input range is linear ! DX10 requires these conversions to be very accurate , with at least 10-bit precision in both directions , so as long as you can live with any of the supported sampling algorithms you can just let the graphics driver do all the hard work.If you can not depend on the hardware , maybe because you 'd like to use Sinc sampling , then it is n't too hard or expensive to do it in sw either : I have written SSE-based sw to do these operations , which means that the conversions had to use polynomial approximations to make it possible to do multiple of them in parallel .
( For a single sample at the time , a lookup table is the obvious choice : A 256-entry table of 16-bit values uses just 512 bytes , while a reverse table going from 11-bit averages to 8-bit gamma samples needs 1KB .
) Using a very simple polynomial makes it possible to convert 4 individual samples simultaneously.Terje</tokentext>
<sentencetext>Afair all modern graphics cards (DX10-11) must have the ability to handle gamma for all sampling operations, i.e.
internally they must convert each contributing sample into a linear value before averaging them.
BTW, the conversions are not quite log/antilog, a small part of the input range is linear!DX10 requires these conversions to be very accurate, with at least 10-bit precision in both directions, so as long as you can live with any of the supported sampling algorithms you can just let the graphics driver do all the hard work.If you cannot depend on the hardware, maybe because you'd like to use Sinc sampling, then it isn't too hard or expensive to do it in sw either:I have written SSE-based sw to do these operations, which means that the conversions had to use polynomial approximations to make it possible to do multiple of them in parallel.
(For a single sample at the time, a lookup table is the obvious choice: A 256-entry table of 16-bit values uses just 512 bytes, while a reverse table going from 11-bit averages to 8-bit gamma samples needs 1KB.
)Using a very simple polynomial makes it possible to convert 4 individual samples simultaneously.Terje</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802</parent>
</comment>
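The 512-byte forward table and 1 KB reverse table the comment describes can be sketched as follows. This is an editorial reconstruction of the idea, not the commenter's SSE code, and it assumes a pure power-law 2.2 curve:

```python
# Sketch of the lookup-table approach: a 256-entry forward table
# (8-bit gamma code -> 16-bit linear) and a reverse table
# (11-bit linear average -> 8-bit gamma code). Table sizes match the
# comment: 256 x 2 bytes = 512 bytes, 2048 x 1 byte = 1 KB.

GAMMA = 2.2

# Forward: 8-bit gamma code -> 16-bit linear value.
to_linear16 = [round(65535 * (v / 255.0) ** GAMMA) for v in range(256)]

# Reverse: 11-bit linear value -> 8-bit gamma code. Eleven bits suffice
# because the averages only need to land on one of 256 output codes.
to_gamma8 = [round(255 * (v / 2047.0) ** (1.0 / GAMMA)) for v in range(2048)]

def average_linear(a, b):
    """Average two 8-bit pixels in linear light using the tables."""
    lin = (to_linear16[a] + to_linear16[b]) // 2  # 16-bit linear average
    return to_gamma8[lin >> 5]                    # drop to 11 bits, re-encode

print(average_linear(0, 255))  # 186, the perceptual midpoint
```

In production code the table lookups replace two `pow()` calls per sample, which is why the comment contrasts them with the polynomial approximations needed for SIMD.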
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257782</id>
	<title>Re:Editing in RGB is wrong too</title>
	<author>Twinbee</author>
	<datestamp>1265114940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>[quote]Sound makes a good analogy. When you play music through any given combination of source, amp and speakers, it sounds different.[/quote]</p><p>Yes, but the point is, the best speakers should be able to *emulate* the other types. In other words, they can do anything, and it's up to the sample data itself to provide the differences.</p><p>If some kind of distortion sounds 'interesting', or even subjectively better, I want the wave data to be doing that (or maybe some plugin sound alterer), not the hardware. The speaker should only play what it's being fed - no more, no less.</p>
	</htmltext>
<tokenext>[ quote ] Sound makes a good analogy .
When you play music through any given combination of source , amp and speakers , it sounds different .
[ /quote ] Yes , but the point is , the best speakers should be able to * emulate * the other types .
In other words , they can do the anything , and it 's up to the sample data itself to provide the differences.If some kind of distortion sounds 'interesting ' , or even subjectively better , I want the wave data to be doing that ( or maybe some plugin sound alterer ) , not the hardware .
The speaker should only play what it 's being fed - no more , no less .</tokentext>
<sentencetext>[quote]Sound makes a good analogy.
When you play music through any given combination of source, amp and speakers, it sounds different.
[/quote]Yes, but the point is, the best speakers should be able to *emulate* the other types.
In other words, they can do the anything, and it's up to the sample data itself to provide the differences.If some kind of distortion sounds 'interesting', or even subjectively better, I want the wave data to be doing that (or maybe some plugin sound alterer), not the hardware.
The speaker should only play what it's being fed - no more, no less.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258784</id>
	<title>Re:Monitor gamma?</title>
	<author>jonadab</author>
	<datestamp>1265123460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>IMO, what it actually means is that the so-called image is deliberately designed to be as catastrophically horrible as possible when scaled down.  (Actually, it's two images, interleaved line-by-line, both of them horrible in a wide variety of ways.)  A flat gray rectangle is arguably the *correct* result.  If you get a magenta or green version of the image, your browser is using a very naive scaling algorithm that ignores half of the information in the image.  I do not see any reasonable way to construct an algorithm that would reproduce at a smaller scale the same optical illusion contained in the original, since the illusion relies heavily on the precise pixel-by-pixel construction of the image.</htmltext>
<tokenext>IMO , what it actually means is that the so-called image is deliberately designed to be as catastrophically horrible as possible when scaled down .
( Actually , it 's two images , interleaved line-by-line , both of them horrible in a wide variety of ways .
) A flat gray rectangle is arguably the * correct * result .
If you get a magenta or green version of the image , your browser is using a very naive scaling algorithm that ignores half of the information in the image .
I do not see any reasonable way to construct an algorithm that would reproduce at a smaller scale the same optical illusion contained in the original , since the illusion relies heavily on the precise pixel-by-pixel construction of the image .</tokentext>
<sentencetext>IMO, what it actually means is that the so-called image is deliberately designed to be as catastrophically horrible as possible when scaled down.
(Actually, it's two images, interleaved line-by-line, both of them horrible in a wide variety of ways.
)  A flat gray rectangle is arguably the *correct* result.
If you get a magenta or green version of the image, your browser is using a very naive scaling algorithm that ignores half of the information in the image.
I do not see any reasonable way to construct an algorithm that would reproduce at a smaller scale the same optical illusion contained in the original, since the illusion relies heavily on the precise pixel-by-pixel construction of the image.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255144</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255408</id>
	<title>Easy fix without changing software</title>
	<author>istartedi</author>
	<datestamp>1266945540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have a shareware app that I purchased something like 15 years ago.
It exhibits the bug, but it also has a gamma correction function.
I corrected the gamma to 0.46 (approximately the inverse of 2.2), then scaled
the test image of the Dalai Lama, then corrected back to 2.2.
It looks fine.  YMMV, I suppose;
but if your app supports gamma correction, then by all means try this
trick before doing anything more drastic.  That's assuming, of course,
that it's really critical for you, which as others have pointed out
it probably isn't.  Still, though, it's nice to see this pointed out.
</p></htmltext>
<tokenext>I have a Shareware app that I purchased something like 15 years ago .
It exhibits the bug ; but it also has a gamma correction function .
I corrected the gamm to 0.46 ( approx inverse of 2.2 ) then scaled the test image of the Dalai Lama , then corrected back to 2.2 .
It looks fine .
YMMV I suppose ; but if your app supports gamma correction then by all means try this trick before doing anything more drastic .
That 's assuming of course that it 's really critical for you ; which as others have pointed out it probably is n't .
Still though , it 's nice to see this pointed out .</tokentext>
<sentencetext>I have a Shareware app that I purchased something like 15 years ago.
It exhibits the bug; but it also has a gamma correction function.
I corrected the gamma to 0.46 (approx inverse of 2.2) then scaled
the test image of the Dalai Lama, then corrected back to 2.2.
It looks fine.
YMMV I suppose;
but if your app supports gamma correction then by all means try this
trick before doing anything more drastic.
That's assuming of course
that it's really critical for you; which as others have pointed out
it probably isn't.
Still though, it's nice to see this pointed out.
</sentencetext>
</comment>
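The "correct gamma, scale, correct back" trick described in the comment above can be sketched in a few lines. This is an illustrative Python sketch (not any particular app's implementation), assuming a pure gamma-2.2 curve and a simple pairwise (2:1) box downscale:

```python
# Sketch of the gamma round-trip trick: a "gamma 0.46" correction linearizes
# the encoded values, the scale's averaging then happens in linear light, and
# "gamma 2.2" re-encodes the result. Assumes a pure 2.2 power curve.

GAMMA = 2.2

def decode(v):
    """Gamma-encoded 0..255 value -> linear light 0..1 (the 'gamma 0.46' step)."""
    return (v / 255.0) ** GAMMA

def encode(lin):
    """Linear light 0..1 -> gamma-encoded 0..255 value (the 'back to 2.2' step)."""
    return round(255.0 * lin ** (1.0 / GAMMA))

def downscale_naive(pixels):
    """Average adjacent pixel pairs directly on encoded values (the buggy way)."""
    return [round((a + b) / 2) for a, b in zip(pixels[::2], pixels[1::2])]

def downscale_gamma_aware(pixels):
    """Linearize, average, re-encode (the trick from the comment above)."""
    return [encode((decode(a) + decode(b)) / 2)
            for a, b in zip(pixels[::2], pixels[1::2])]

row = [0, 255, 0, 255]            # alternating black/white pixels
print(downscale_naive(row))       # [128, 128] -- too dark on screen
print(downscale_gamma_aware(row)) # [186, 186] -- matches the averaged light
```

On a flat gray area both paths agree; the difference only shows up where neighboring pixels with different values get averaged, which is why the degradation concentrates in high-contrast detail.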
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255114</id>
	<title>That *should* read:</title>
	<author>GrahamCox</author>
	<datestamp>1266943140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>That <i>should</i> read: "There is an <b>UN</b>important error in most photography scaling algorithms".</htmltext>
<tokenext>That should read : " There is an UNimportant error in most photography scaling algorithms " .</tokentext>
<sentencetext>That should read: "There is an UNimportant error in most photography scaling algorithms".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255300</id>
	<title>omg you're right! it's not news.</title>
	<author>decora</author>
	<datestamp>1266944520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>i demand immediate recall!</htmltext>
<tokenext>i demand immediate recall !</tokentext>
<sentencetext>i demand immediate recall!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648</id>
	<title>Editing in RGB is wrong too</title>
	<author>buzzn</author>
	<datestamp>1266947940000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>Several people have spoken about "linear" RGB. That's nice, and it gets rid of some small level of distortion introduced by the non-linearity. However, it only starts there. For example, the eye sees R, G, and B differently. It is more sensitive to green than to red, and to red more than to blue, but it's not even that simple, as the equations in your eye's processor are much more complicated. Many algorithms that treat the three "equally" are going to change the perceptual mixture. One can use other color spaces, such as HSV, YUV, xyY, etc., with different advantages and disadvantages.<br> <br>

Sound makes a good analogy. When you play music through any given combination of source, amp and speakers, it sounds different. Sometimes we actually like a particular type of sonic "distortion". It's never exactly like the "original" live music, though.<br> <br>

Likewise, any graphics manipulation is "distorting" the original. In fact, when I take a digital image and run it through Lightroom, do a range expansion/equalization, and do a bunch of tweaks to make the image look good, I'm making much larger changes than those little scaling problems listed in the article. The point is, do you think the result looks good?<br> <br>

There are other important variables, such as what colors are next to other colors in the image, how long you look at the image, what else is around you, how tired you are, etc. There's no such thing as color fidelity; there are only approximations to it. Color is hard, and I mean really hard. See Hunt, "The Reproduction of Colour", or any number of other fine texts to learn more.</htmltext>
<tokenext>Several people have spoken about " linear " RGB .
That 's nice and gets rid of some small level of distortion introduced by the non-linearity .
However , it only starts there .
For example , the eye sees R , G , and B differently .
It is more sensitive to green than red , and to red more than blue , but it 's not even that simple as the equations in your eye 's processor are much more complicated .
Many algorithms that treat the three " equally " are going to change the perceptual mixture .
One can use other color spaces , such as HSV , Yuv , xyY , etc .
with different advantages and disadvantages Sound makes a good analogy .
When you play music through any given combination of source , amp and speakers , it sounds different .
Sometimes we actually like a particular type of sonic " distortion " .
It 's never exactly like the " original " live music , though .
Likewise , any graphics manipulation is " distorting " the original .
In fact , when I take a digital image and run it through Lightroom , do a range expansion/equalization , and do a bunch of tweaks to make the image look good , I 'm making much larger changes than those little scaling problems listed in the article .
The point is , do you think the result looks good ?
There 's other important variables , such as what colors are next to other colors in the image , how long you look at the image , what else is around you , how tired you are , etc .
There 's no such thing as color fidelity , there 's only approximations to it .
Color is hard , and I mean , really hard .
See Hunt , " The Reproduction of Colour " , or any number of other fine texts to learn more .</tokentext>
<sentencetext>Several people have spoken about "linear" RGB.
That's nice and gets rid of some small level of distortion introduced by the non-linearity.
However, it only starts there.
For example, the eye sees R, G, and B differently.
It is more sensitive to green than red, and to red more than blue, but it's not even that simple as the equations in your eye's processor are much more complicated.
Many algorithms that treat the three "equally" are going to change the perceptual mixture.
One can use other color spaces, such as HSV, Yuv, xyY, etc.
with different advantages and disadvantages 

Sound makes a good analogy.
When you play music through any given combination of source, amp and speakers, it sounds different.
Sometimes we actually like a particular type of sonic "distortion".
It's never exactly like the "original" live music, though.
Likewise, any graphics manipulation is "distorting" the original.
In fact, when I take a digital image and run it through Lightroom, do a range expansion/equalization, and do a bunch of tweaks to make the image look good, I'm making much larger changes than those little scaling problems listed in the article.
The point is, do you think the result looks good?
There's other important variables, such as what colors are next to other colors in the image, how long you look at the image, what else is around you, how tired you are, etc.
There's no such thing as color fidelity, there's only approximations to it.
Color is hard, and I mean, really hard.
See Hunt, "The Reproduction of Colour", or any number of other fine texts to learn more.</sentencetext>
</comment>
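The comment above notes that the eye weighs green more than red, and red more than blue. One standard quantification of that ordering is the Rec. 709 relative-luminance weights; this sketch just applies them to a linear RGB triple (a deliberate simplification, since, as the comment says, real perception is far more complicated):

```python
# Rec. 709 relative-luminance coefficients: a standard, simplified model of
# how much each linear RGB channel contributes to perceived brightness.

R_W, G_W, B_W = 0.2126, 0.7152, 0.0722

def luminance(r, g, b):
    """Relative luminance of a linear RGB triple, each channel in 0..1."""
    return R_W * r + G_W * g + B_W * b

# Pure green contributes far more perceived brightness than pure red or blue:
print(luminance(1, 0, 0))  # 0.2126
print(luminance(0, 1, 0))  # 0.7152
print(luminance(0, 0, 1))  # 0.0722
```

Any algorithm that averages the three channels with equal weight is implicitly using 1/3 for each, which shifts the perceptual mixture exactly as the comment describes.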
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255906</id>
	<title>Re:short version</title>
	<author>Anonymous</author>
	<datestamp>1266950340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Reading the comments I finally start to understand what tfs is trying to say.</p><p>All and all I would not call this a bug. Also not a feature. It's an artifact at best. Bugs in the common use of the word are either small animals, or programming errors. This is neither. It's an algorithm that has certain artifacts, and some software (the long list in tfs) apparently uses a different algorithm.</p><p>So how is this news? What does it have to do on<nobr> <wbr></nobr>/. really? This is more something to discuss for graphics people, not for computer people like we are.</p></div><p>News for <b>NERDS</b>. Just sayin'. Also, who do you think <i>wrote</i> the graphics software?</p>
	</htmltext>
<tokenext>Reading the comments I finally start to understand what tfs is trying to say.All and all I would not call this a bug .
Also not a feature .
It 's an artifact at best .
Bugs in the common use of the word are either small animals , or programming errors .
This is neither .
It 's an algorithm that has certain artifacts , and some software ( the long list in tfs ) apparently uses a different algorithm.So how is this news ?
What does it have to do on / .
really ? This is more something to discuss for graphics people , not for computer people like we are.News for NERDS .
Just sayin' .
Also , who do you think wrote the graphics software ?</tokentext>
<sentencetext>Reading the comments I finally start to understand what tfs is trying to say.All and all I would not call this a bug.
Also not a feature.
It's an artifact at best.
Bugs in the common use of the word are either small animals, or programming errors.
This is neither.
It's an algorithm that has certain artifacts, and some software (the long list in tfs) apparently uses a different algorithm.So how is this news?
What does it have to do on /.
really? This is more something to discuss for graphics people, not for computer people like we are.News for NERDS.
Just sayin'.
Also, who do you think wrote the graphics software?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256242</id>
	<title>Ultra Summary</title>
	<author>Tablizer</author>
	<datestamp>1266954120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The overall error result of the "wrong" scaling is that <b>dark areas "bleed" into adjacent light areas</b>. Light objects shrink slightly relative to everything else in the image and dark objects expand slightly. If there are a lot of high-contrast patterns next to each other, the result of this bleed is an overall darkening of that area.<br>
&nbsp; &nbsp;</p></htmltext>
<tokenext>The overall error result of the " wrong " scaling is that dark areas " bleed " into adjacent light areas .
Light objects shrink slightly relative to everything else in the image and dark objects expand slightly .
If there are a lot of high-contrast patterns next to each other , the result of this bleed is an overall darkening of that area .
   </tokentext>
<sentencetext>The overall error result of the "wrong" scaling is that dark areas "bleed" into adjacent light areas.
Light objects shrink slightly relative to everything else in the image and dark objects expand slightly.
If there are a lot of high-contrast patterns next to each other, the result of this bleed is an overall darkening of that area.
   </sentencetext>
</comment>
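The darkening described above can be made concrete with one worked number: naively averaging a black (0) and white (255) pixel gives 128, but on a gamma-2.2 display that value emits only about 22% of white's light, not the 50% the original area actually averaged. (Gamma 2.2 is an assumption here; real displays vary.)

```python
# Why naive averaging darkens high-contrast areas: the display raises the
# normalized encoded value to the power 2.2 before emitting light, so the
# midpoint code value is nowhere near the midpoint in emitted light.

GAMMA = 2.2

def displayed_light(v):
    """Fraction of full white a gamma-2.2 display emits for encoded value v."""
    return (v / 255.0) ** GAMMA

naive_avg = (0 + 255) / 2                    # 127.5 -- naive pixel average
print(round(displayed_light(naive_avg), 3))  # 0.218: much darker than 0.5
```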
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256622</id>
	<title>This is not really a bug...</title>
	<author>Anonymous</author>
	<datestamp>1265144100000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Eric Brasseur, in his otherwise excellent web page on the subject, writes:</p><p>
   Technically speaking, the problem is that "the computations are performed as if the scale<br>
    of brightnesses was linear while in fact it is an exponential scale." In mathematical terms:<br>
   "a gamma of 1.0 is assumed while it is 2.2." Lots of filters, plug-ins and scripts probably<br>
   make the same error.</p><p>That is sort of right in a sense, but the assumptions that it is a problem with the scaling algorithm and, worse, that it should be fixed in the algorithm are both wrong and a dangerous way to start thinking about this situation.  In fact, this isn't really a bug any more than the misuse of any tool is a bug.  What is really needed is more knowledge and expertise on how to create and utilize a color-managed imaging workflow.</p><p>Almost all image processing algorithms contain simple addition somewhere, and for simple addition to work as you might expect, the image encoding function must be linear.  (Adding logarithms, for instance, actually results in a multiplication operation.  That is how slide rules work.  What's a slide rule?  Hmmm...)  Scaling involves filtering and nearly all filtering algorithms involve addition, but so does a blur, over, blend, key, mask, matte, color correction, dithering and quite a few others.  This problem isn't limited to scaling operations, and it isn't actually a bug at all, but rather an education and understanding issue.</p><p>The right thing to do is to linearize your images prior to doing any kind of manipulation and keep them linear throughout the whole image manipulation process, only transforming them to something else for final delivery (or approval if that is an issue).  By linear, I mean that the relationship between the luminance encoding and light is linear.  You see, it isn't the image or the file that is linear or non-linear; it is the transfer function between code values and light.  When I say linear, I mean that for a function f, f(a) + f(b) = f(a+b) and a*f(b) = f(a*b).  
Anything else is non-linear.</p><p>The problem with working with linear images is that unless you are careful and know what you are doing, your monitor, being a non-linear viewing device, won't display the images correctly. So if you linearize your images, they won't look "right" on the monitor, although they will be correct mathematically.  The solution to that is to employ a LUT in the viewer such that linear data is displayed correctly.   Photoshop can do this and has been able to do it for years.  The fact that people don't understand how to use that feature and why doesn't mean that there is a bug in the scaling algorithm.</p><p>By the way, high-end 2D image manipulation tools like Shake and Nuke, to name two, are used in the most demanding imaging application possible, motion picture visual effects.  With regard to internal image processing, they presume that data is linear.  They rely on the expert knowledge of the user to ensure that the proper image encoding, that being linear to light, is used.</p><p>Lastly, I think treating this as a bug in scaling algorithms and trying to fix it by revising scaling algorithms, rather than recognizing that it is really a problem with image data being encoded non-linearly, is a completely wrongheaded approach.  It only addresses one use case, that being scaling, and ignores all of the other problems with non-linearity.  A better approach would be to educate people as to what is actually happening and why, and then teach them to use the tools they have.</p><p>Just presuming that the input image has a 2.2 gamma and correcting for it only in the case of scaling, but not in other 2D image manipulations, only serves to muddle the issue.</p><p>Ultimately, what you want is a display subsystem (display card and monitor) that has been profiled and then corrected to match as closely as possible some idealized target like sRGB or Rec. 709, just to name two.  The easiest way to do that i</p></htmltext>
<tokenext>Eric Brasseur , in his otherwise excellent web page on the subject writes : Technically speaking , the problem is that " the computations are performed as if the scale of brightnesses was linear while in fact it is an exponential scale .
" In mathematical terms : " a gamma of 1.0 is assumed while it is 2.2 .
" Lots of filters , plug-ins and scripts probably make the same error .
" That is sort of right in a sense , but the assumption that it is a problem with the scaling algorithm and worse , that it should be fixed in the algorithm are both wrong and a dangerous way to start thinking about this situation .
In fact , this is n't really a bug anymore than the misuse of any tool is a bug .
What is really needed is more knowledge and expertise on how to create and utilize a color managed imaging workflow.Almost all image processing algorithms contain simple addition somewhere and for simple addition to work as you might expect , the image encoding function must be linear .
( Adding logarithms for instance actually results in a multiplication operation .
That is how slide rules work .
What 's a slide rule ?
Hmmm... ) Scaling involves filtering and nearly all filtering algorithms involve addition , but so does a blur , over , blend , key , mask , matte , color correction , dithering and quite a few others .
This problem is n't limited to scaling operations and it is n't actually a bug at all , but rather an education and understanding issue.The right thing to do is to linearize your images prior to doing any kind of manipulation and keep them linear throughout the whole image manipulation process , only transforming them to something else for final delivery ( or approval if that is an issue ) .
By linear , I mean that the relationship between the luminance encoding and light is linear .
You see , it is n't the image or the file that is linear or non-linear , it is that transfer function between code values and light .
When I say linear , I mean that for a function f , f ( a ) + f ( b ) = f ( a + b ) and a * f ( b ) = f ( a * b ) .
Anything else is non-linear.The problem with working with linear images is that unless you are careful and know what you are doing , your monitor being a non-linear viewing device , wo n't display the images correctly , so if you linearize your images , they wo n't look " right " on the monitor , although they will be correct mathematically .
The solution to that is to employ LUT in the viewer such that linear data is displayed correctly .
Photoshop can do this and has been able to do it for years .
The fact that people do n't understand how to use that feature and why does n't mean that there is a bug in the scaling algorithm.By the way , high end 2D image manipulation tools like Shake and Nuke to name two are used in the most demanding imaging application possible , motion picture visual effects .
With regard to internal image processing , they presume that data is linear .
They rely on the expert knowledge of the user to insure that the proper image encoding , that being linear to light is used.Lastly , I think thinking of this as a bug in scaling algorithms and trying to fix it by revising scaling algorithms rather than recognizing that it is really a problem with image data being encoded non-linearly is a completely wrong headed approach .
It only addresses one use case , that being scaling and ignores all of the other problems with non-linearity .
A better approach would be to educate people as to what is actually happening , why it is happening and then just teach them to use the tools they have.Just presuming that the input image has a 2.2 gamma and correcting for it in just the case of scaling but not in other 2D image manipulations only just serves to muddle the issue.Ultimately , you want is a display subsystem ( display card and monitor ) that has been profiled and then corrected to match as closely as possible some idealized target like sRGB or Rec .
709 just to name two .
The easiest way to do that i</tokentext>
<sentencetext>Eric Brasseur, in his otherwise excellent web page on the subject writes:
   Technically speaking, the problem is that "the computations are performed as if the scale
    of brightnesses was linear while in fact it is an exponential scale.
" In mathematical terms:
   "a gamma of 1.0 is assumed while it is 2.2.
" Lots of filters, plug-ins and scripts probably
   make the same error.
"That is sort of right in a sense, but the assumption that it is a problem with the scaling algorithm and worse, that it should be fixed in the algorithm are both wrong and a dangerous way to start thinking about this situation.
In fact, this isn't really a bug anymore than the misuse of any tool is a bug.
What is really needed is more knowledge and expertise on how to create and utilize a color managed imaging workflow.Almost all image processing algorithms contain simple addition somewhere and for simple addition to work as you might expect, the image encoding function must be linear.
(Adding logarithms for instance actually results in a multiplication operation.
That is how slide rules work.
What's a slide rule?
Hmmm...)  Scaling involves filtering and nearly all filtering algorithms involve addition, but so does a blur, over, blend, key, mask, matte, color correction, dithering and quite a few others.
This problem isn't limited to scaling operations and it isn't actually a bug at all, but rather an education and understanding issue.The right thing to do is to linearize your images prior to doing any kind of manipulation and keep them linear throughout the whole image manipulation process, only transforming them to something else for final delivery (or approval if that is an issue).
By linear, I mean that the relationship between the luminance encoding and light is linear.
You see, it isn't the image or the file that is linear or non-linear, it is that transfer function between code values and light.
When I say linear, I mean that for a function f, f(a) + f(b) = f(a+b) and a*f(b) = f(a*b).
Anything else is non-linear.The problem with working with linear images is that unless you are careful and know what you are doing, your monitor being a non-linear viewing device, won't display the images correctly, so if you linearize your images, they won't look "right" on the monitor, although they will be correct mathematically.
The solution to that is to employ LUT in the viewer such that linear data is displayed correctly.
Photoshop can do this and has been able to do it for years.
The fact that people don't understand how to use that feature and why doesn't mean that there is a bug in the scaling algorithm.By the way, high end 2D image manipulation tools like Shake and Nuke to name two are used in the most demanding imaging application possible, motion picture visual effects.
With regard to internal image processing, they presume that data is linear.
They rely on the expert knowledge of the user to insure that the proper image encoding, that being linear to light is used.Lastly, I think thinking of this as a bug in scaling algorithms and trying to fix it by revising scaling algorithms rather than recognizing that it is really a problem with image data being encoded non-linearly is a completely wrong headed approach.
It only addresses one use case, that being scaling and ignores all of the other problems with non-linearity.
A better approach would be to educate people as to what is actually happening, why it is happening and then just teach them to use the tools they have.Just presuming that the input image has a 2.2 gamma and correcting for it in just the case of scaling but not in other 2D image manipulations only just serves to muddle the issue.Ultimately, you want is a display subsystem (display card and monitor) that has been profiled and then corrected to match as closely as possible some idealized target like sRGB or Rec.
709 just to name two.
The easiest way to do that i</sentencetext>
</comment>
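The comment above defines linear as f(a) + f(b) = f(a+b). A quick numeric check shows that a gamma-encoding transfer function (taken here as f(x) = x^(1/2.2), an assumed pure power curve) fails that test, which is exactly why addition on encoded values misbehaves:

```python
# Gamma encoding is not linear in the f(a) + f(b) == f(a + b) sense, so any
# algorithm that adds encoded code values (filtering, blending, scaling)
# operates on the wrong quantities unless the data is linearized first.

def encode(x):
    """Gamma-encode a linear-light value in 0..1 (assumed pure 2.2 curve)."""
    return x ** (1.0 / 2.2)

a = b = 0.25
print(encode(a) + encode(b))  # ~1.065
print(encode(a + b))          # ~0.730 -- not equal, so encode() is non-linear
```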
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31262600</id>
	<title>thank you</title>
	<author>Anonymous</author>
	<datestamp>1265139840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Thank you, that is a very clear explanation of this problem and how gamma affects output. I can finally visualize what is being talked about here<nobr> <wbr></nobr>:)</p></htmltext>
<tokenext>Thank you , that is a very clear explanation of this problem and how gamma affects output .
I can finally visualize what is being talked about here : )</tokentext>
<sentencetext>Thank you, that is a very clear explanation of this problem and how gamma affects output.
I can finally visualize what is being talked about here :)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254584</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256960</id>
	<title>Re:Oh calm down..</title>
	<author>Anonymous</author>
	<datestamp>1265104920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>oh you think this was never noticed?<nobr> <wbr></nobr>:P</p><p>the entire graphic community (as in printing companies, layouters,<nobr> <wbr></nobr>...) has known about this for years.</p><p>errrrrr make that... SHOULD have known about this for years...</p></htmltext>
<tokenext>oh you think this was never noticed ?
: Pthe entire graphic community ( as in printing companies , layouters , ... ) knows about this for years.errrrrr make that... SHOULD have known about this for years.. .</tokentext>
<sentencetext>oh you think this was never noticed?
:Pthe entire graphic community (as in printing companies, layouters, ...) knows about this for years.errrrrr make that... SHOULD have known about this for years...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257676</id>
	<title>Re:Gamma and sRGB</title>
	<author>xaxa</author>
	<datestamp>1265113680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think it does make a difference. I have lots of photographs of plants (landscapes, up close, etc). I've often been annoyed that they seemed dull when scaled down, e.g. in the gallery on my website, and this would appear to be the cause.</p><p>Try it for yourself. Find a photograph (I used <a href="http://imagebin.org/86313" title="imagebin.org">this one</a> [imagebin.org] -- had to crop it a bit to fit it on ImageBin), and do these:<br><tt>convert image.jpg -scale 25% out1.png<br>convert image.jpg -depth 16 -gamma 0.454545 -scale 25% -gamma 2.2 -depth 8 out2.png<br>compare out1.png out2.png outD.png</tt></p><p>out1.png is scaled the existing way. out2.png is scaled taking into account the gamma. Flip between them. I especially see differences around the stems of plants.<br>outD.png is the differences.</p></htmltext>
<tokenext>I think it does make a difference .
I have lots of photographs of plants ( landscapes , up close , etc ) .
I 've often been annoyed that they seemed dull when scaled down , e.g .
in the gallery on my website , and this would appear to be the cause.Try it for yourself .
Find a photograph ( I used this one [ imagebin.org ] -- had to crop it a bit to fit it on ImageBin ) , and do these : convert image.jpg -scale 25 \ % out1.pngconvert image.jpg -depth 16 -gamma 0.454545 -scale 25 \ % -gamma 2.2 -depth 8 out2.pngcompare out1.png out2.png outD.pngout1.png is scaled the existing way .
out2.png is scaled taking into account the gamma .
Flip between them .
I especially see differences around the stems of plants.outD.png is the differences .</tokentext>
<sentencetext>I think it does make a difference.
I have lots of photographs of plants (landscapes, up close, etc).
I've often been annoyed that they seemed dull when scaled down, e.g.
in the gallery on my website, and this would appear to be the cause.Try it for yourself.
Find a photograph (I used this one [imagebin.org] -- had to crop it a bit to fit it on ImageBin), and do these:convert image.jpg -scale 25\% out1.pngconvert image.jpg -depth 16 -gamma 0.454545 -scale 25\% -gamma 2.2 -depth 8 out2.pngcompare out1.png out2.png outD.pngout1.png is scaled the existing way.
out2.png is scaled taking into account the gamma.
Flip between them.
I especially see differences around the stems of plants.outD.png is the differences.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256792</id>
	<title>Re:Monitor gamma?</title>
	<author>rvw</author>
	<datestamp>1265103060000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>meanwhile, I see a grey rectangle in firefox, and I still don't get what that signifies.</p></div><p>Right-click the image, then click view image. You'll see the image full-scale, like the first image. Scaling it down 50% shouldn't make it gray.</p>
	</htmltext>
<tokenext>meanwhile , I see a grey rectangle in firefox , and I still do n't get what that signifies.Right-click the image , then click view image .
You 'll see the image full-scale , like the first image .
Scaling it down 50 \ % should n't make it gray .</tokentext>
<sentencetext>meanwhile, I see a grey rectangle in firefox, and I still don't get what that signifies.Right-click the image, then click view image.
You'll see the image full-scale, like the first image.
Scaling it down 50\% shouldn't make it gray.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256722</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1265102220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Shit is right... no one in their right mind would go from Linux to Windows. Linux to OSX maybe, but not Windows.</p></htmltext>
<tokenext>Shit is right... no one in their right mind would go from Linux to Windows .
Linux to OSX maybe , but not Windows .</tokentext>
<sentencetext>Shit is right... no one in their right mind would go from Linux to Windows.
Linux to OSX maybe, but not Windows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446</id>
	<title>Author expands scaling definition</title>
	<author>Anonymous</author>
	<datestamp>1266938580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The author seems to think filtering is part of scaling, which it is not. A "scaling" algorithm does not include low-pass filtering. The grey square shown is the correct result for a bicubic scale.</p></htmltext>
<tokenext>The author seems to think filtering is part of scaling which it is not .
A " scaling " algorithm does not include low-pass filtering .
The grey square shown is the correct result for a bicubic scale .</tokentext>
<sentencetext>The author seems to think filtering is part of scaling which it is not.
A "scaling" algorithm does not include low-pass filtering.
The grey square shown is the correct result for a bicubic scale.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255836</id>
	<title>Re:HA!</title>
	<author>onefriedrice</author>
	<datestamp>1266949680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.</p></div><p>Ha!  It sure is great being a BSD user.  I mean, sure, it sucks kinda to use a dead operating system... but at least none of my imaging software has bugs that plague lesser operating systems, right?... right?</p>
	</htmltext>
<tokenext>Well , I am SURE glad I 'm using Linux ^ H ^ H ^ H ^ H ^ HWindows ^ H ^ H ^ H ^ H ^ H ^ H ^ HMac ^ H ^ H ^ Hshit.Ha !
It sure is great being a BSD user .
I mean , sure , it sucks kinda to use a dead operating system... but at least none of my imaging software has bugs that plague lesser operating systems , right ? .. .
right ?</tokentext>
<sentencetext>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.Ha!
It sure is great being a BSD user.
I mean, sure, it sucks kinda to use a dead operating system... but at least none of my imaging software has bugs that plague lesser operating systems, right?...
right?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258158</id>
	<title>Re:Monitor gamma?</title>
	<author>DarkOx</author>
	<datestamp>1265119080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Either that or be contented with the mere fact it exists and is connected in some way with all the rest of the digital world, no matter how it is displayed.</p></htmltext>
<tokenext>Either that or be contented with the mere fact it exists and is connected in some way with all the rest of the digital world , no matter how it is displayed .</tokentext>
<sentencetext>Either that or be contented with the mere fact it exists and is connected in some way with all the rest of the digital world, no matter how it is displayed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257820</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257458</id>
	<title>Re:I look better in person.</title>
	<author>Anonymous</author>
	<datestamp>1265110920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I have the same problem with pictures of your mom.</p><p>The scaling required to make them fit on my screen is simply, well...</p></htmltext>
<tokenext>I have the same problem with pictures of your mom.The scaling required to make them fit on my screen is simply , well.. .</tokentext>
<sentencetext>I have the same problem with pictures of your mom.The scaling required to make them fit on my screen is simply, well...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254570</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256182</id>
	<title>Re:HA!</title>
	<author>MobileTatsu-NJG</author>
	<datestamp>1266953340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>This is one of many reasons why creative professionals prefer macs over PCs</p></div><p>No, it's not.  Nobody even knows about this bug.</p>
	</htmltext>
<tokenext>This is one of many reasons why creative professionals prefer macs over PCs No , it 's not .
Nobody even knows about this bug .</tokentext>
<sentencetext>This is one of many reasons why creative professionals prefer macs over PCs No, it's not.
Nobody even knows about this bug.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259292</id>
	<title>Re:Old news</title>
	<author>Shinobi</author>
	<datestamp>1265126280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I may be remembering it wrong but isn't the problem with linear models for colour/alpha/others, besides the obvious such as gamut, that it makes it harder to replicate features that the human vision expects to catch?</p><p>I wrote a little thing myself waaaaay back for BMRT, for doing self-shadowing particles that also faked photon emission to work with the global illumination system, and with a linear space, things got really ugly, such as "blobbing"/"smearing", odd intersections and similar. Guess I should dig through the archive storage to see if I still have any of that stuff left, to show what I mean.</p></htmltext>
<tokenext>I may be remembering it wrong but is n't the problem with linear models for colour/alpha/others , besides the obvious such as gamut , that it makes it harder to replicate features that the human vision expects to catch ? I wrote a little thing myself waaaaay back for BMRT , for doing self-shadowing particles that also faked photon emission to work with the global illumination system , and with a linear space , things got really ugly , such as " blobbing " / " smearing " , odd intersections and similar .
Guess I should dig through the archive storage to see if I still have any of that stuff left , to show what I mean .</tokentext>
<sentencetext>I may be remembering it wrong but isn't the problem with linear models for colour/alpha/others, besides the obvious such as gamut, that it makes it harder to replicate features that the human vision expects to catch?I wrote a little thing myself waaaaay back for BMRT, for doing self-shadowing particles that also faked photon emission to work with the global illumination system, and with a linear space, things got really ugly, such as "blobbing"/"smearing", odd intersections and similar.
Guess I should dig through the archive storage to see if I still have any of that stuff left, to show what I mean.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257618</id>
	<title>It's already been discussed on Slashdot</title>
	<author>N Monkey</author>
	<datestamp>1265112960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><div class="quote"><p>Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma.</p></div><p>Seriously!</p><p>I have a theory on why this has gone unnoticed for so long, but I'll keep it to myself...</p></div><p>It's already been <a href="http://slashdot.org/comments.pl?sid=1542298&amp;cid=31071960" title="slashdot.org">discussed here in relation to the Gimp</a> [slashdot.org], but the maintainers only seem interested in fiddling with the interface. Sigh.</p>
	</htmltext>
<tokenext>Photographs scaled with the affected software are degraded , because of incorrect algorithmic accounting for monitor gamma.Seriously ! I have a theory on why this has gone unnoticed for so long , but I 'll keep it to myself...It 's already been discussed here in relation to the Gimp [ slashdot.org ] , but the maintainers only seem interested in fiddling with the interface .
Sigh .</tokentext>
<sentencetext>Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma.Seriously!I have a theory on why this has gone unnoticed for so long, but I'll keep it to myself...It's already been discussed here in relation to the Gimp [slashdot.org], but the maintainers only seem interested in fiddling with the interface.
Sigh.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255144</id>
	<title>Re:Monitor gamma?</title>
	<author>DocHoncho</author>
	<datestamp>1266943440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>meanwhile, I see a grey rectangle in firefox, and I still don't get what that signifies.</p></div><p>It means the scaling algorithm in your web browser and other commercial softwares is disastrously broken.  Duh, didn't you RTFA?</p>
	</htmltext>
<tokenext>meanwhile , I see a grey rectangle in firefox , and I still do n't get what that signifies.It means the scaling algorithm in your web browser and other commercial softwares is disastrously broken .
Duh , did n't you RTFA ?</tokentext>
<sentencetext>meanwhile, I see a grey rectangle in firefox, and I still don't get what that signifies.It means the scaling algorithm in your web browser and other commercial softwares is disastrously broken.
Duh, didn't you RTFA?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255354</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1266945060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.</p></div><p>Actually, since Preview (the default Apple image viewer) and Aperture (the fancy $$$ Apple image manager) are not affected, I'd say this is a win for ColorSync, and by extension, the Mac and its use in all things graphical.</p>
	</htmltext>
<tokenext>Well , I am SURE glad I 'm using Linux ^ H ^ H ^ H ^ H ^ HWindows ^ H ^ H ^ H ^ H ^ H ^ H ^ HMac ^ H ^ H ^ Hshit.Actually , since Preview ( the default Apple image viewer ) and Aperture ( the fancy $ $ $ Apple image manager ) are not affected , I 'd say this is a win for ColorSync , and by extension , the Mac and its use in all things graphical .</tokentext>
<sentencetext>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.Actually, since Preview (the default Apple image viewer) and Aperture (the fancy $$$ Apple image manager) are not affected, I'd say this is a win for ColorSync, and by extension, the Mac and its use in all things graphical.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256200</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1266953580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.</p></div><p>Where did you get this shit of which you speak?<br>I am very interested in analyzing it and checking it for bugs.</p>
	</htmltext>
<tokenext>Well , I am SURE glad I 'm using Linux ^ H ^ H ^ H ^ H ^ HWindows ^ H ^ H ^ H ^ H ^ H ^ H ^ HMac ^ H ^ H ^ Hshit.Where did you get this shit of which you speak ? I am very interested in analyzing it and check it for bugs .</tokentext>
<sentencetext>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^Hshit.Where did you get this shit of which you speak?I am very interested in analyzing it and check it for bugs.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255684</id>
	<title>Re:Nitpicking</title>
	<author>im\_thatoneguy</author>
	<datestamp>1266948240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>It's not at all complicated.   Many applications do properly handle it.  Nuke, Shake and other compositing apps have no problem.</p><p>Pixel^(GAMMA) -&gt; Scale -&gt; Pixel^(1/GAMMA) I wouldn't call that a terribly complicated process.</p><p>Or even better.   On open convert it to linear.   Then on save convert it back.   Maybe then Photoshop and company would actually handle alpha channels correctly *grumble* *grumble*...</p></htmltext>
<tokenext>It 's not at all complicated .
Many applications do properly handle it .
Nuke , Shake and other compositing apps have no problem.Pixel ^ ( GAMMA ) - &gt; Scale - &gt; Pixel ^ ( 1/GAMMA ) I would n't call that a terribly complicated process.Or even better .
On open convert it to linear .
Then on save convert it back .
Maybe then Photoshop and company would actually handle alpha channels correctly * grumble * * grumble * .. .</tokentext>
<sentencetext>It's not at all complicated.
Many applications do properly handle it.
Nuke, Shake and other compositing apps have no problem.Pixel^(GAMMA) -&gt; Scale -&gt; Pixel^(1/GAMMA) I wouldn't call that a terribly complicated process.Or even better.
On open convert it to linear.
Then on save convert it back.
Maybe then Photoshop and company would actually handle alpha channels correctly *grumble* *grumble*...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510</parent>
</comment>
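The Pixel^(GAMMA) -&gt; Scale -&gt; Pixel^(1/GAMMA) pipeline this commenter describes can be sketched in a few lines. A minimal illustration, assuming a pure power-law gamma of 2.2 (real sRGB encoding uses a slightly different piecewise curve) and a trivial 2:1 averaging "scale":

```python
GAMMA = 2.2  # assumed pure power law; sRGB's actual curve is piecewise

def to_linear(v):
    """Decode an 8-bit gamma-encoded value into linear light (0.0-1.0)."""
    return (v / 255.0) ** GAMMA

def to_encoded(lin):
    """Re-encode linear light back to an 8-bit gamma-encoded value."""
    return round(255.0 * lin ** (1.0 / GAMMA))

def downscale_pair(a, b, gamma_aware=True):
    """Average two neighboring pixels, as a 2:1 downscale would."""
    if gamma_aware:
        return to_encoded((to_linear(a) + to_linear(b)) / 2.0)
    return (a + b) // 2  # the naive averaging the affected editors perform

# A black/white pixel pair: naive averaging yields 127, linear-light 186.
```

Averaging every channel this way is exactly the "convert to linear on open, convert back on save" scheme the commenter prefers, applied per operation instead of per file.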
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255076</id>
	<title>Re:Monitor gamma?</title>
	<author>asifyoucare</author>
	<datestamp>1266942840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It seems crazy to me to embed a particular Gamma value into an image.  Surely the point for Gamma correction is just before it is output to a device, and then the adjustment should be device specific (if possible).  It is even crazier these days to use a Gamma based on the attributes of CRTs.</p><p>In fact it seems so crazy I must be missing something.  Am I?</p></htmltext>
<tokenext>It seems crazy to me to embed a particular Gamma value into an image .
Surely the point for Gamma correction is just before it is output to a device , and then the adjustment should be device specific ( if possible ) .
It is even crazier these days to use a Gamma based on the attributes of CRTs.In fact it seems so crazy I must be missing something .
Am I ?</tokentext>
<sentencetext>It seems crazy to me to embed a particular Gamma value into an image.
Surely the point for Gamma correction is just before it is output to a device, and then the adjustment should be device specific (if possible).
It is even crazier these days to use a Gamma based on the attributes of CRTs.In fact it seems so crazy I must be missing something.
Am I?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256450</id>
	<title>2.2?</title>
	<author>Tablizer</author>
	<datestamp>1265142180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Where did this 2.2 gamma value come from? Is it based on some mathematical or physics truth, or based on tests of human perception?</p></htmltext>
<tokenext>Where did this 2.2 gamma value come from ?
Is it based on some mathematical or physics truth , or based on tests of human perception ?
     </tokentext>
<sentencetext>Where did this 2.2 gamma value come from?
Is it based on some mathematical or physics truth, or based on tests of human perception?
     </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258956</id>
	<title>Love the summary</title>
	<author>rwa2</author>
	<datestamp>1265124420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>lol... Just skip down all the way to the end of the article and scale the last image 50%:</p><p><a href="http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png" title="4p8.com">http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png</a> [4p8.com]</p></htmltext>
<tokenext>lol... Just skip down all the way to the end of the article and scale the last image 50 \ % : http : //www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png [ 4p8.com ]</tokentext>
<sentencetext>lol... Just skip down all the way to the end of the article and scale the last image 50\%:http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png [4p8.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256076</id>
	<title>And this is still why, even in 2010.</title>
	<author>Anonymous</author>
	<datestamp>1266952080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The 70-year-old professional photographer tells me there is just a difference and digital hasn't made it up yet. He isn't giving up his film camera and I really can't blame him. I guess this isn't the camera's fault and it shouldn't affect them if left unedited, but it makes you question if analog devices and systems should be abandoned at the current rates.</htmltext>
<tokenext>The 70 year old professional photographer tells me there is just a difference and digital has n't made it up yet .
He is n't giving up his film camera and I really ca n't blame him .
I guess this is n't the camera 's fault and it should n't effect them if left unedited , but it makes you question if analog devices and systems should be abandoned at the current rates .</tokentext>
<sentencetext>The 70 year old professional photographer tells me there is just a difference and digital hasn't made it up yet.
He isn't giving up his film camera and I really can't blame him.
I guess this isn't the camera's fault and it shouldn't effect them if left unedited, but it makes you question if analog devices and systems should be abandoned at the current rates.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256920</id>
	<title>Re:HA!</title>
	<author>bheer</author>
	<datestamp>1265104500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>you'd be hard pressed to disagree that Mac OS X's font-rendering, kerning, and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.</p></div></blockquote><p>That's _your_ opinion. For me, Windows's rendering looks great (OS X looks 'fuzzy' to me). I know my philistinism may hurt typography geeks, but really, most people don't care.</p>
	</htmltext>
<tokenext>you 'd be hard pressed to disagree that Mac OS X 's font-rendering , kerning , and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.That 's \ _your \ _ opinion .
For me , Windows 's rendering looks great ( OS X looks 'fuzzy ' to me ) .
I know my philistinism may hurt typography geeks , but really , most people do n't care .</tokentext>
<sentencetext>you'd be hard pressed to disagree that Mac OS X's font-rendering, kerning, and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.That's \_your\_ opinion.
For me, Windows's rendering looks great (OS X looks 'fuzzy' to me).
I know my philistinism may hurt typography geeks, but really, most people don't care.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260182</id>
	<title>Re:What about Irfanview and Picasa?</title>
	<author>Anonymous</author>
	<datestamp>1265129940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Irfanview works fine if you are willing to adjust gamma before and after resize.</p><p>1. Adjust gamma to .45<br>2. Resize<br>3. Adjust gamma to 2.2</p><p>I suspect many other apps will work too if the same steps are performed.</p></htmltext>
<tokenext>Irfanview works fine if you are willing to adjust gamma before and after resize.1 .
Adjust gamma to .452 .
Resize3. Adjust gamma to 2.2I suspect many other apps will work too if the same steps are performed .</tokentext>
<sentencetext>Irfanview works fine if you are willing to adjust gamma before and after resize.1.
Adjust gamma to .452.
Resize3. Adjust gamma to 2.2I suspect many other apps will work too if the same steps are performed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424</parent>
</comment>
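The three-step Irfanview recipe above works because a gamma adjustment of .45 is roughly 1/2.2, so the first step approximately linearizes the pixel data and the last step re-encodes it, leaving the editor's naive averaging to run on linear values. A rough sketch of why the recipe comes out right, assuming the common editor convention that a gamma setting g maps each normalized value to v^(1/g) (some tools expose the exponent directly instead):

```python
GAMMA = 2.2  # the display gamma assumed throughout the thread

def gamma_tool(v, g):
    """Mimic an editor's gamma adjustment: normalized value raised to 1/g.
    (An assumed convention; check your tool's documentation.)"""
    return 255.0 * (v / 255.0) ** (1.0 / g)

def workaround_average(a, b):
    """Gamma ~0.45, naive average, gamma 2.2 -- the manual recipe above."""
    lin_a = gamma_tool(a, 1.0 / GAMMA)  # 1/2.2 ~ 0.4545: roughly linearizes
    lin_b = gamma_tool(b, 1.0 / GAMMA)
    naive = (lin_a + lin_b) / 2.0       # the editor's ordinary resize step
    return round(gamma_tool(naive, GAMMA))

# A black/white pixel pair averages to 186 instead of the uncorrected 127.
```

The same sandwich should rescue any editor whose gamma tool and resampler behave this way, which is presumably why the commenter expects it to generalize.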
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259426</id>
	<title>Re:Correct, yes. Expected, maybe. Desired, no.</title>
	<author>omnichad</author>
	<datestamp>1265126940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's not really filtering.  It's that the scaling is being done in the wrong color space.  If you resize an image, you should convert it to linear data before using averaging and resizing algorithms, and then back to gamma-based values after.</p></htmltext>
<tokenext>It 's not really filtering .
It 's that the scaling is being done in the wrong color space .
If you resize an image , you should convert it to linear data before using averaging and resizing algorithms , and then back to gamma-based values after .</tokentext>
<sentencetext>It's not really filtering.
It's that the scaling is being done in the wrong color space.
If you resize an image, you should convert it to linear data before using averaging and resizing algorithms, and then back to gamma-based values after.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255282</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540</id>
	<title>Wrong</title>
	<author>Anonymous</author>
	<datestamp>1266946920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>"There is an important error in most photography scaling algorithms."</p><p>No, there isn't. If millions of professional users haven't been bothered by it over the course of two decades, it is CLEARLY not important.</p></htmltext>
<tokenext>" There is an important error in most photography scaling algorithms .
" No , there is n't .
If millions of professional users have n't been bothered by it over the course of two decades , it is CLEARLY not important .</tokentext>
<sentencetext>"There is an important error in most photography scaling algorithms.
"No, there isn't.
If millions of professional users haven't been bothered by it over the course of two decades, it is CLEARLY not important.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257156</id>
	<title>Oh for fucks sake</title>
	<author>DeanLearner</author>
	<datestamp>1265107380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>20 years of image processing wasted, just like that! Spose I better start again...</htmltext>
<tokenext>20 years of image processing wasted , just like that !
Spose I better start again.. .</tokentext>
<sentencetext>20 years of image processing wasted, just like that!
Spose I better start again...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560</id>
	<title>Re:Monitor gamma?</title>
	<author>Anonymous</author>
	<datestamp>1266939180000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>The responses that they post are also inaccurate it seems.</p><p>From</p><blockquote><div><p> <a href="http://www.4p8.com/eric.brasseur/gamma_dalai_lama.html" title="4p8.com">All four images in this page are the same one but your browser was instructed to scale it. The one below is scaled 1:2. On Opera and some versions of Internet Explorer it will show a gray rectangle.</a> [4p8.com]</p></div> </blockquote><p>meanwhile, I see a grey rectangle in firefox, and I still don't get what that signifies.</p>
	</htmltext>
<tokenext>The responses that they post are also inaccurate it seems.From All four images in this page are the same one but your browser was instructed to scale it .
The one below is scaled 1 : 2 .
On Opera and some versions of Internet Explorer it will show a gray rectangle .
[ 4p8.com ] meanwhile , I see a grey rectangle in firefox , and I still do n't get what that signifies .</tokentext>
<sentencetext>The responses that they post are also inaccurate it seems.From All four images in this page are the same one but your browser was instructed to scale it.
The one below is scaled 1:2.
On Opera and some versions of Internet Explorer it will show a gray rectangle.
[4p8.com] meanwhile, I see a grey rectangle in firefox, and I still don't get what that signifies.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254990</id>
	<title>Re:Monitor gamma?</title>
	<author>X0563511</author>
	<datestamp>1266942240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Here:<br><a href="http://img43.imageshack.us/img43/2586/scalerbug.png" title="imageshack.us">http://img43.imageshack.us/img43/2586/scalerbug.png</a> [imageshack.us]</p><p>Look at the third set of images. Had this bug not been present, they should have been nearly identical. As you can see, in my case, they are radically different.</p><p>As the text reads:</p><p><i>(dali picture)</i><br>All four images in this page are the same one but your browser was instructed to scale it. The one below is scaled 1:2. On Opera and some versions of Internet Explorer it will show a gray rectangle. KDE Konqueror, Firefox and SeaMonkey will display it either pink or green or half-green, half-pink:</p><p><i>(smaller dali or grey box)</i><br><i>(mine draws in grey, despite using Firefox)</i></p><p>Below it is scaled down 1:4. The right one is one pixel wider and higher than the left one. Some browsers display them quite differently:</p><p><i>(two images, differing if you have the bug)</i></p></htmltext>
<tokenext>Here : http : //img43.imageshack.us/img43/2586/scalerbug.png [ imageshack.us ] Look at the third set of images .
Had this bug not been present , they should have been nearly identical .
As you can see , in my case , they are radically different.As the text reads : ( dali picture ) All four images in this page are the same one but your browser was instructed to scale it .
The one below is scaled 1 : 2 .
On Opera and some versions of Internet Explorer it will show a gray rectangle .
KDE Konqueror , Firefox and SeaMonkey will display it either pink or green or half-green , half-pink : ( smaller dali or grey box ) ( mine draws in grey , despite using Firefox ) Below it is scaled down 1 : 4 .
The right one is one pixel wider and higher than the left one .
Some browsers display them quite differently : ( two images , differing if you have the bug )</tokentext>
<sentencetext>Here:http://img43.imageshack.us/img43/2586/scalerbug.png [imageshack.us]Look at the third set of images.
Had this bug not been present, they should have been nearly identical.
As you can see, in my case, they are radically different.As the text reads:(dali picture)All four images in this page are the same one but your browser was instructed to scale it.
The one below is scaled 1:2.
On Opera and some versions of Internet Explorer it will show a gray rectangle.
KDE Konqueror, Firefox and SeaMonkey will display it either pink or green or half-green, half-pink:(smaller dali or grey box)(mine draws in grey, despite using Firefox)Below it is scaled down 1:4.
The right one is one pixel wider and higher than the left one.
Some browsers display them quite differently:(two images, differing if you have the bug)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254584</id>
	<title>Re:Monitor gamma?</title>
	<author>Anonymous</author>
	<datestamp>1266939540000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>The data in the pictures is not linear data.  It assumes that it will be displayed on a system that introduces a gamma of 2.2.  (If your display system does not do that physically, it should correct for this.)  That is, a gray 127 should not display as halfway between a white 255 and a black zero, in terms of light output.  (It should *appear* halfway between them visually, because your eyes aren't linear &mdash; that's (part of) why gamma is in use in the first place.)  So, a checkerboard pattern of white / black squares will have half the luminosity of the white squares.  When scaling down, software will turn it into a bunch of gray pixels.  But they should be gray pixels of value 186, not 127.</p><p>The page is not well written, but his example images make the issue very clear.  It's not about your monitor gamma; it's about the "standard gamma" that all image files assume your monitor has.</p></htmltext>
<tokenext>The data in the pictures is not linear data .
It assumes that it will be displayed on a system that introduces a gamma of 2.2 .
( If your display system does not do that physically , it should correct for this .
) That is , a gray 127 should not display as halfway between a white 255 and a black zero , in terms of light output .
( It should * appear * halfway between them visually , because your eyes are n't linear    that 's ( part of ) why gamma is in use in the first place .
) So , a checkerboard pattern of white / black squares will have half the luminosity of the white squares .
When scaling down , software will turn it into a bunch of gray pixels .
But they should be gray pixels of value 186 , not 127.The page is not well written , but his example images make the issue very clear .
It 's not about your monitor gamma ; it 's about the " standard gamma " that all image files assume your monitor has .</tokentext>
<sentencetext>The data in the pictures is not linear data.
It assumes that it will be displayed on a system that introduces a gamma of 2.2.
(If your display system does not do that physically, it should correct for this.
)  That is, a gray 127 should not display as halfway between a white 255 and a black zero, in terms of light output.
(It should *appear* halfway between them visually, because your eyes aren't linear — that's (part of) why gamma is in use in the first place.
)  So, a checkerboard pattern of white / black squares will have half the luminosity of the white squares.
When scaling down, software will turn it into a bunch of gray pixels.
But they should be gray pixels of value 186, not 127.The page is not well written, but his example images make the issue very clear.
It's not about your monitor gamma; it's about the "standard gamma" that all image files assume your monitor has.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
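The parent's 186-versus-127 figure is easy to verify: a black/white checkerboard has a mean linear luminance of (0.0 + 1.0) / 2 = 0.5, and the single gray that a 2.2-gamma display renders at that luminance is 255 × 0.5^(1/2.2). A quick arithmetic check, assuming a pure 2.2 power law rather than the slightly different piecewise sRGB curve:

```python
# Mean linear luminance of a black/white checkerboard.
mean_linear = (0.0 + 1.0) / 2

# The gray value a 2.2-gamma display shows at that luminance: ~186.
correct_gray = round(255 * mean_linear ** (1 / 2.2))

# What a scaler that blindly averages encoded values emits instead: 127.
naive_gray = (0 + 255) // 2
```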
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31264038</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1265103060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I use a mac. I love it. Everything except the font rendering, which is _objectively_ crap.</p><p>I suspect that the algorithms were developed for CRTs, where they might have worked, due to CRTs blurring. On an LCD, however, they are horrid.</p><p>If you are on a Mac, look up now at your File menu.  Notice the F has a grey bar under the top horizontal stroke. Move back, notice that the stroke now looks out of focus.</p><p>Move back to where you can't reach the keyboard. Hey! Now it looks good.</p><p>Damn, I'd finally trained myself to ignore it. Now it's bugging me again.</p></htmltext>
<tokenext>I use a mac .
I love it .
Everything except the font rendering , which is \ _objectively \ _ crap.I suspect that the algorithms were developed for CRTs , where they might have worked , due to CRTs blurring .
On an LCD , however , they are horrid.If you are on a Mac , look up now are your File menu .
Notice the F has a grey bar under the top horizontal stroke .
Move back , notice that the stroke now looks out of focus.Move back to where you ca n't reach the keyboard .
Hey ! Now it looks good.Damn , I 'd finally trained myself to ignore it .
Now it 's bugging me again .</tokentext>
<sentencetext>I use a mac.
I love it.
Everything except the font rendering, which is _objectively_ crap. I suspect that the algorithms were developed for CRTs, where they might have worked, due to CRTs blurring.
On an LCD, however, they are horrid. If you are on a Mac, look up now at your File menu.
Notice the F has a grey bar under the top horizontal stroke.
Move back, notice that the stroke now looks out of focus. Move back to where you can't reach the keyboard.
Hey! Now it looks good. Damn, I'd finally trained myself to ignore it.
Now it's bugging me again.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1266945240000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext><p>Actually, according to TFA, Apple's built-in toolkits (used by Aperture and Pixelmator) seem to be immune to this bug.  Photoshop ceased being a mac-like application a very long time ago.</p><p>This is one of many reasons why creative professionals prefer macs over PCs --- and I'm not saying this as platform evangelism -- for one, you'd be hard pressed to disagree that Mac OS X's font-rendering, kerning, and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.  It's lots of little things (many of which likely took the programmers a great deal of time to get right) that make the platform so nice to work with.  Likewise, Adobe is quickly exhausting its remaining good will with the graphic design community, as recent Photoshop releases have declined significantly in quality, and have generally added little value to the application, as well as the abomination that is the Flash player.</p><p>I haven't checked Win7 yet, but as of Vista, Windows still presented windows-3.1-style dialog boxes when adding fonts.  Although this is a fairly superficial example, it provides a great example of Microsoft's general neglect of its existing codebases.  Once a feature becomes "stable," it rarely if ever gets refined or tweaked in subsequent releases, while poorly-integrated features get piled on top (although it must be said that when Microsoft finally does choose to overhaul part of the UI, they generally do a pretty good job of it.  IE7 and 8 are notable exceptions, and actually seem to have been made intentionally confusing -- even the KDE, GIMP, and Blender folks would struggle to make a UI so cryptic, inconsistent, and foreign-looking)</p></htmltext>
<tokenext>Actually , according to TFA , Apple 's built-in toolkits ( used by Aperture and Pixelmator ) seem to be immune to this bug .
Photoshop ceased being a mac-like application a very long time ago.This is one of many reasons why creative professionals prefer macs over PCs --- and I 'm not saying this as platform evangelism -- for one , you 'd be hard pressed to disagree that Mac OS X 's font-rendering , kerning , and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples .
It 's lots of little things ( many of which likely took the programmers a great deal of time to get right ) that make the platform so nice to work with .
Likewise , Adobe is quickly exhausting its remaining good will with the graphic design community , as recent Photoshop releases have declined significantly in quality , and have generally added little value to the application , as well as the abomination that is the Flash player.I have n't checked Win7 yet , but as of Vista , Windows still presented windows-3.1-style dialog boxes when adding fonts .
Although this is a fairly superficial example , it provides a great example of Microsoft 's general neglect of its existing codebases .
Once a feature becomes " stable , " it rarely if ever gets refined or tweaked in subsequent releases , while poorly-integrated features get piled on top ( although it must be said that when Microsoft finally does choose to overhaul part of the UI , they generally do a pretty good job of it .
IE7 and 8 are notable exceptions , and actually seem to have been made intentionally confusing -- even the KDE , GIMP , and Blender folks would struggle to make a UI so cryptic , inconsistent , and foreign-looking )</tokentext>
<sentencetext>Actually, according to TFA, Apple's built-in toolkits (used by Aperture and Pixelmator) seem to be immune to this bug.
Photoshop ceased being a mac-like application a very long time ago. This is one of many reasons why creative professionals prefer macs over PCs --- and I'm not saying this as platform evangelism -- for one, you'd be hard pressed to disagree that Mac OS X's font-rendering, kerning, and anti-aliasing abilities are far superior to those provided by Windows when presented with side-by-side examples.
It's lots of little things (many of which likely took the programmers a great deal of time to get right) that make the platform so nice to work with.
Likewise, Adobe is quickly exhausting its remaining good will with the graphic design community, as recent Photoshop releases have declined significantly in quality, and have generally added little value to the application, as well as the abomination that is the Flash player. I haven't checked Win7 yet, but as of Vista, Windows still presented windows-3.1-style dialog boxes when adding fonts.
Although this is a fairly superficial example, it provides a great example of Microsoft's general neglect of its existing codebases.
Once a feature becomes "stable," it rarely if ever gets refined or tweaked in subsequent releases, while poorly-integrated features get piled on top (although it must be said that when Microsoft finally does choose to overhaul part of the UI, they generally do a pretty good job of it.
IE7 and 8 are notable exceptions, and actually seem to have been made intentionally confusing -- even the KDE, GIMP, and Blender folks would struggle to make a UI so cryptic, inconsistent, and foreign-looking)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438</id>
	<title>short version</title>
	<author>Anonymous</author>
	<datestamp>1266938520000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>Most scaling algorithms treat brightness as a linear space: e.g., when downscaling to 1/2 the size in each dimension, they collapse 4 pixels into 1 by setting that pixel to the numerical average of the original 4 pixels. But most images are <i>displayed</i> with an assumption that brightness is a nonlinear space, i.e. gamma &gt; 1. Therefore, scaling changes the perceived brightness, an unexpected result.</p></htmltext>
<tokenext>Most scaling algorithms treat brightness as a linear space , so e.g .
if you 're doing downscaling to 1/2 the size in each dimension , collapse 4 pixels into 1 by setting the 1 pixel to the numerical average of the original 4 pixels .
But , most images are displayed with an assumption that brightness is a nonlinear space , i.e .
gamma &gt; 1 .
Therefore , scaling changes the perceived brightness , an unexpected result .</tokentext>
<sentencetext>Most scaling algorithms treat brightness as a linear space, so e.g.
if you're doing downscaling to 1/2 the size in each dimension, collapse 4 pixels into 1 by setting the 1 pixel to the numerical average of the original 4 pixels.
But, most images are displayed with an assumption that brightness is a nonlinear space, i.e.
gamma &gt; 1.
Therefore, scaling changes the perceived brightness, an unexpected result.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257064</id>
	<title>Re:Oh calm down..</title>
	<author>Anonymous</author>
	<datestamp>1265106180000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>He seems to take it seriously.</p><p>Probably a gamma nazi...</p></htmltext>
<tokenext>He seems to take it seriously.Probably a gamma nazi.. .</tokentext>
<sentencetext>He seems to take it seriously.Probably a gamma nazi...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256080</id>
	<title>Re:Oh dear. Linear color space again, 11 years lat</title>
	<author>Anonymous</author>
	<datestamp>1266952140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I care and I've seen this on day to day photos.</p><p>If a "save for web" or email picture function adds 22% brightness and lowers the picture contrast, that sucks.</p></htmltext>
<tokenext>I care and I 've seen this on day to day photos.If a " save for web " or email picture function adds 22 \ % brightness and lowers the picture contrast that sucks .</tokentext>
<sentencetext>I care and I've seen this on day to day photos.If a "save for web" or email picture function adds 22\% brightness and lowers the picture contrast that sucks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255400</id>
	<title>Re:Nitpicking</title>
	<author>jabberw0k</author>
	<datestamp>1266945420000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>so many softwares</p></div></blockquote><p>

Urrgh... it's "so many programs" or "so many software packages"... you don't have "one software" -- you have a piece of software.  It's a collective noun like "hardware" and "clothing."  There is no word, "softwares."</p>
	</htmltext>
<tokenext>so many softwares Urrgh... it 's " so many programs " or " so many software packages " ... you do n't have " one software " -- you have a piece of software .
It 's a collective noun like " hardware " and " clothing .
" There is no word , " softwares .
"</tokentext>
<sentencetext>so many softwares

Urrgh... it's "so many programs" or "so many software packages" ... you don't have "one software" -- you have a piece of software.
It's a collective noun like "hardware" and "clothing.
"  There is no word, "softwares.
"
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256188</id>
	<title>Re:Author expands scaling defination</title>
	<author>kappa962</author>
	<datestamp>1266953400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Unless I seriously misread TFA, this error has nothing to do with the spectral content of the data. Spectral content certainly influences the way things scale, but it seems to have no connection to this particular bug.</p><p>Look at the test picture in the "Explanation" part of the article. <a href="http://www.4p8.com/eric.brasseur/gamma.html#explanation" title="4p8.com" rel="nofollow">http://www.4p8.com/eric.brasseur/gamma.html#explanation</a> [4p8.com] Filtered or not, the test picture should not result in the second column being dark grey.</p></htmltext>
<tokenext>Unless I seriously misread TFA , this error has nothing to do with the spectral content of the data .
Spectral content certainly influences the way things scale , but it seems to have no connection to this particular bug.Look at the test picture in the " Explanation " part of the article .
http : //www.4p8.com/eric.brasseur/gamma.html # explanation [ 4p8.com ] Filtered or not , the test picture should not result in the second column being dark grey .</tokentext>
<sentencetext>Unless I seriously misread TFA, this error has nothing to do with the spectral content of the data.
Spectral content certainly influences the way things scale, but it seems to have no connection to this particular bug. Look at the test picture in the "Explanation" part of the article.
http://www.4p8.com/eric.brasseur/gamma.html#explanation [4p8.com] Filtered or not, the test picture should not result in the second column being dark grey.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257820</id>
	<title>Re:Monitor gamma?</title>
	<author>orasio</author>
	<datestamp>1265115300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>(dali picture)</p></div><p>If it were actually a Dali picture, he would have intended it to show this way.</p>
	</htmltext>
<tokenext>( dali picture ) If it were actually a Dali picture , he would have intended it to show this way .</tokentext>
<sentencetext>(dali picture)If it were actually a Dali picture, he would have intended it to show this way.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254990</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</id>
	<title>Monitor gamma?</title>
	<author>Yvan256</author>
	<datestamp>1266938460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>To <em>display</em> the pictures, it makes sense to use the monitor gamma. But to actually <em>modify the data</em> using that information, which is probably flawed in 99.9999999% of cases? That's just wrong.</p></htmltext>
<tokenext>To display the pictures , it makes sense to use the monitor gamma .
But to actually modify the data using that information which is probably flawed in 99.9999999 \ % of cases ?
That 's just wrong .</tokentext>
<sentencetext>To display the pictures, it makes sense to use the monitor gamma.
But to actually modify the data using that information, which is probably flawed in 99.9999999% of cases?
That's just wrong.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256986</id>
	<title>works like a charm</title>
	<author>Anonymous</author>
	<datestamp>1265105160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>wget -O - http://www.4p8.com/eric.brasseur/gamma_dalai_lama_gray.jpg | djpeg | pnmdepth 65535 | pnmgamma -ungamma -srgbramp | pamscale -filter=sinc 0.5 | pnmgamma -srgbramp | display</p></htmltext>
<tokenext>wget -O - http : //www.4p8.com/eric.brasseur/gamma \ _dalai \ _lama \ _gray.jpg | djpeg | pnmdepth 65535 | pnmgamma -ungamma -srgbramp | pamscale -filter = sinc 0.5 | pnmgamma -srgbramp | display</tokentext>
<sentencetext>wget -O - http://www.4p8.com/eric.brasseur/gamma_dalai_lama_gray.jpg | djpeg | pnmdepth 65535 | pnmgamma -ungamma -srgbramp | pamscale -filter=sinc 0.5 | pnmgamma -srgbramp | display</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510</id>
	<title>Nitpicking</title>
	<author>Ekuryua</author>
	<datestamp>1266938880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext>Is what this article is about.<br>
This matter has been known for a long time, and there's a reason why so many softwares ignore it:<br>
it hardly matters. That and it's also way more complicated to do it properly.<br>
The gain/pain ratio is clearly below 1 here.</htmltext>
<tokenext>Is what this article is about .
This matter has been known for a long time , and there 's a reason why so many softwares ignore it : it hardly matters .
That and it 's also way more complicated to do it properly .
Gain / Pain is clearly inferior to 1 there .</tokentext>
<sentencetext>Is what this article is about.
This matter has been known for a long time, and there's a reason why so many softwares ignore it:
it hardly matters.
That and it's also way more complicated to do it properly.
The gain/pain ratio is clearly below 1 here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256996</id>
	<title>Not a bug</title>
	<author>Anonymous</author>
	<datestamp>1265105220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This isn't a bug. Don't call it a bug. It's a specific way of operation.</p><p>The results in these programs differ from what a *single* person expects - and this person is not a computer graphics person. On the other hand, the results are exactly what many computer graphics people expect.</p><p>The operating domain of these scaling algorithms is a computer image. It has nothing to do with "real" things, and nothing to do with the mistaken imagination of the author of TFA.</p></htmltext>
<tokenext>This is n't a bug .
Do n't call it a bug .
It 's a specific way of operation.The results in these program differ from what a * single * person expects - and this person is not a computer graphics person .
On the other hand , the results are exactly what many computer graphics people expect.The operating domain of these scaling algorithms is a computer image .
It has nothing to do with " real " things , and nothing to do with the mistaken imagination of the author of TFA .</tokentext>
<sentencetext>This isn't a bug.
Don't call it a bug.
It's a specific way of operation. The results in these programs differ from what a *single* person expects - and this person is not a computer graphics person.
On the other hand, the results are exactly what many computer graphics people expect. The operating domain of these scaling algorithms is a computer image.
It has nothing to do with "real" things, and nothing to do with the mistaken imagination of the author of TFA.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255538</id>
	<title>But is it Art?</title>
	<author>NotQuiteReal</author>
	<datestamp>1266946920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Looks ok to me. Besides, I think I just read a Slashdot article saying that all digital media is going to rot in the long run.
<br>
<br>
In the short run, look for a new marketing bullet point.</htmltext>
<tokenext>Looks ok to me .
Besides , I think I just read a Slashdot article that all digital media is going to rot in the long run .
In the short run , look for a new marketing bullet point .</tokentext>
<sentencetext>Looks ok to me.
Besides, I think I just read a Slashdot article saying that all digital media is going to rot in the long run.
In the short run, look for a new marketing bullet point.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257520</id>
	<title>Re:Gamma and sRGB</title>
	<author>Anonymous</author>
	<datestamp>1265111760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i><br>So does this matter? Well, in some pathological cases where there are repeated sharp boundaries (such as alternating black-white lines or fine checkerboard patterns), this would make a difference. This is because the linear average of the pixels (what most image scalers use) yields a different result than if the gamma value was taken into account. For most images (both photographic and computer generated), this shouldn't be a big problem. Most samples are close in value to other nearby samples, so the error resulting from the gamma curve is very small. Sparse light-dark transitions also wouldn't be noticeable as there would only be an error right on the boundary. Only when you exercise this case over a large area does it become obvious.<br></i></p><p>One thing that worsens the situation, though, is noise, which, unless excessive or colored (in the frequency-spectral sense), won't have a huge effect on low-frequency content (i.e., it averages out in the larger details), but will bump up the differences between neighboring pixels, making the gamma issue more obvious. I think this could be happening with the natural photograph examples posted in the original link (i.e., "noise" here would be the fine details of nature that scaling largely removes; but since the filter is acting on a non-linear scale it gets wrong results).</p></htmltext>
<tokenext>So does this matter ?
Well , in some pathological cases where there are repeated sharp boundaries ( such as alternating black-white lines or fine checkerboard patterns ) , this would make a difference .
This is because the linear average of the pixels ( what most image scalers use ) yields a different result than if the gamma value was taken into account .
For most images ( both photographic and computer generated ) , this should n't be a big problem .
Most samples are close in value to other nearby samples , so the error resulting from the gamma curve is very small .
Sparse light-dark transitions also would n't be noticeable as there would only be an error right on the boundary .
Only when you exercise this case over a large area does it become obvious.One thing that worsen the situation though is noise , which unless excessive or colored ( in the frequency spectral sense ) , wo n't have a huge effect on low-frequency content ( ie , averages out in the larger details ) , but will bump the differences between neighboring pixels , making the gamma issue more obvious .
I think this could be happening with the natural photograph examples posted in the original link ( ie , " noise " here would be fine details of nature that scaling largely removes ; but since the filter is acting on non-linear scale it gets wrong results ) .</tokentext>
<sentencetext>So does this matter?
Well, in some pathological cases where there are repeated sharp boundaries (such as alternating black-white lines or fine checkerboard patterns), this would make a difference.
This is because the linear average of the pixels (what most image scalers use) yields a different result than if the gamma value was taken into account.
For most images (both photographic and computer generated), this shouldn't be a big problem.
Most samples are close in value to other nearby samples, so the error resulting from the gamma curve is very small.
Sparse light-dark transitions also wouldn't be noticeable as there would only be an error right on the boundary.
Only when you exercise this case over a large area does it become obvious. One thing that worsens the situation, though, is noise, which, unless excessive or colored (in the frequency-spectral sense), won't have a huge effect on low-frequency content (i.e., it averages out in the larger details), but will bump up the differences between neighboring pixels, making the gamma issue more obvious.
I think this could be happening with the natural photograph examples posted in the original link (i.e., "noise" here would be the fine details of nature that scaling largely removes; but since the filter is acting on a non-linear scale it gets wrong results).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802</parent>
</comment>
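The claim above — that the error is tiny for nearby samples but large at hard edges — is easy to quantify. A rough sketch, again assuming a pure 2.2 power law rather than the exact sRGB curve:

```python
def averaging_error(a, b, gamma=2.2):
    # Difference between the gamma-aware average and the naive encoded
    # average of two 8-bit values (positive means naive comes out too dark).
    lin = ((a / 255.0) ** gamma + (b / 255.0) ** gamma) / 2.0
    correct = 255.0 * lin ** (1.0 / gamma)
    naive = (a + b) / 2.0
    return correct - naive

print(averaging_error(100, 110))  # tiny: neighbors on a smooth gradient
print(averaging_error(0, 255))    # large: a hard black/white edge
```

The first case is a fraction of one 8-bit level, the second is tens of levels, which matches why the degradation shows up mainly in fine high-contrast detail.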
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257682</id>
	<title>Re:Wrong</title>
	<author>genik76</author>
	<datestamp>1265113800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Romans were poisoned by lead for years and weren't bothered by it.</htmltext>
<tokenext>Romans were poisoned by lead for years and were n't bothered by it .</tokentext>
<sentencetext>Romans were poisoned by lead for years and weren't bothered by it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263192</id>
	<title>Does he even realize...</title>
	<author>Anonymous</author>
	<datestamp>1265142480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>zomg!<br>His inversion routine used on every other line in the original image is flawed!<br>It's working in linear space (gasp) otherwise the resulting images would have been switched.</p><p>(tongue in cheek)</p></htmltext>
<tokenext>zomg ! His inversion routine used on every other line in the original image is flawed ! It 's working in linear space ( gasp ) otherwise the resulting images would have been switched .
( tongue in cheek )</tokentext>
<sentencetext>zomg!His inversion routine used on every other line in the original image is flawed!It's working in linear space (gasp) otherwise the resulting images would have been switched.
(tongue in cheek)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31268620</id>
	<title>Question is...</title>
	<author>fly1ngtux</author>
	<datestamp>1265132940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Who wrote the original buggy code, and who all copied it?
<br>
This may be a real bug, and might have come from some wrong/incomplete understanding that existed 20 years back. But somehow I can't get the above question out of my mind...</htmltext>
<tokenext>Who wrote the original buggy code and who all copied it ?
This may be a real bug and might have come from some wrong/incomplete understanding that existed 20 years back .
But somehow I can get the above question out of my mind.. .</tokentext>
<sentencetext>Who wrote the original buggy code and who all copied it?
This may be a real bug and might have come from some wrong/incomplete understanding that existed 20 years back.
But somehow I can't get the above question out of my mind...
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31270472</id>
	<title>Re:Gamma and sRGB</title>
	<author>Anonymous</author>
	<datestamp>1267100580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Correct, but not in all respects. The gamma 2.2 in sRGB has a different curve from the 2.2 gamma of AdobeRGB. Applying one and the same compensation does more harm than the flaw would on normal images.</p><p>The title of the topic shouldn't have mentioned scaling but resampling. Photoshop allows a virtual size definition that can be altered without resampling, and by that without the flaw mentioned. If the file is later printed through Qimage, it is resampled to the native resolution of the printer, while the size can be kept as defined in PS. That at least reduces the effect to one step only. Qimage has no compensation for this effect, but there are far more severe problems in aliasing, sharpening for size, etc. that it can handle. The image samples discussed wouldn't survive any downsampling without anti-aliasing, and Qimage applies it.</p></htmltext>
<tokenext>Correct but not on all aspects .
The gamma 2.2 in sRGB has another curve than the 2.2 gamma of AdobeRGB .
Applying one and the same compensation does more harm than the flaw would on normal images.The title of the topic should n't have mentioned scaling but resampling .
Photoshop allows a virtual size definition that can be altered without resampling and by that without the flaw mentioned .
If the file then later on is printed through Qimage the resampling to the native resolution of the printer happens , while the size can be kept as defined in PS .
That at least reduces the effect to one step only .
Qimage has no compensation for this effect but there are far more severe problems in aliasing , sharpening for size etc that it can handle .
The image samples discussed would n't pass any downsampling if anti-aliasing is n't used and Qimage does that .</tokentext>
<sentencetext>Correct but not on all aspects.
The gamma 2.2 in sRGB has another curve than the 2.2 gamma of AdobeRGB.
Applying one and the same compensation does more harm than the flaw would on normal images.The title of the topic shouldn't have mentioned scaling but resampling.
Photoshop allows a virtual size definition that can be altered without resampling and by that without the flaw mentioned.
If the file then later on is printed through Qimage the resampling to the native resolution of the printer happens, while the size can be kept as defined in PS.
That at least reduces the effect to one step only.
Qimage has no compensation for this effect but there are far more severe problems in aliasing, sharpening for size etc that it can handle.
The image samples discussed wouldn't pass any downsampling if anti-aliasing isn't used and Qimage does that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802</parent>
</comment>
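The point above — that sRGB's "2.2" is not the same curve as a literal 2.2 power law — can be illustrated numerically. A sketch of the two decodings (the sRGB constants are the standard published ones; treat the printed comparison as approximate):

```python
def srgb_to_linear(v):
    # Official sRGB transfer function: a linear toe below ~0.04045, then a
    # 2.4 power with offset. It only approximates an overall gamma of 2.2.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def power_to_linear(v, gamma=2.2):
    # Simple power-law decoding (AdobeRGB uses a pure power of 563/256).
    return v ** gamma

# Close but not equal at mid-gray, and very different near black:
print(srgb_to_linear(0.5), power_to_linear(0.5))
print(srgb_to_linear(0.02), power_to_linear(0.02))
```

The curves nearly agree at mid-gray but diverge badly in the shadows, which is why applying one compensation to both encodings, as the comment notes, can do more harm than good.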
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258788</id>
	<title>vfx crew know this</title>
	<author>Anonymous</author>
	<datestamp>1265123460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>this and many other reasons are why vfx dudes work in linear light and have been for a very long time</p></htmltext>
<tokenext>this and many other reasons are why vfx dudes work in linear light and have been for a very long time</tokentext>
<sentencetext>this and many other reasons are why vfx dudes work in linear light and have been for a very long time</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256192</id>
	<title>Re:Author expands scaling defination</title>
	<author>Anonymous</author>
	<datestamp>1266953460000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>Actually, a good scaling algorithm should apply a lowpass filter when downscaling. This is similar to downsampling of digital audio, where you need to filter out frequencies above half the sampling rate. Leaving these higher frequencies in would cause noise, because they cannot be faithfully represented in a lower-resolution file.</p></htmltext>
<tokenext>Actually a good scaling algorithm should perform a lowpass filter when downscaling .
This is similar to downsampling of digital audio where you do need to filter out frequencies above half the sampling rate .
Leafing these higher frequencies in would cause noise because they can not be faithfully represented in a lower resolution file .</tokentext>
<sentencetext>Actually a good scaling algorithm should perform a lowpass filter when downscaling.
This is similar to downsampling of digital audio where you do need to filter out frequencies above half the sampling rate.
Leafing these higher frequencies in would cause noise because they can not be faithfully represented in a lower resolution file.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446</parent>
</comment>
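The parent's lowpass point and the story's gamma point combine naturally: a correct downscaler averages neighboring pixels (a crude lowpass) after decoding to linear light, then re-encodes. A minimal sketch in Python/NumPy using the exact sRGB curve; the helper names are mine, not from any of the programs listed:

```python
import numpy as np

def srgb_to_linear(v):
    """Invert the sRGB transfer function (values in [0, 1])."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    """Apply the sRGB transfer function (values in [0, 1])."""
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def downscale_2x(img):
    """Halve each dimension: decode to linear light, box-average each
    2x2 block (a crude lowpass), then re-encode to sRGB."""
    lin = srgb_to_linear(np.asarray(img, dtype=float))
    h = lin.shape[0] - lin.shape[0] % 2
    w = lin.shape[1] - lin.shape[1] % 2
    lin = lin[:h, :w]
    avg = (lin[0::2, 0::2] + lin[0::2, 1::2] +
           lin[1::2, 0::2] + lin[1::2, 1::2]) / 4.0
    return linear_to_srgb(avg)
```

On the article's alternating black/white line pattern this yields about 0.735 per output pixel instead of the naive 0.5, which is why linear-light scaling preserves perceived brightness.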
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880</id>
	<title>Re:short version</title>
	<author>Anonymous</author>
	<datestamp>1266941400000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Reading the comments I finally start to understand what tfs is trying to say.
</p><p>All in all I would not call this a bug. Also not a feature. It's an artifact at best. Bugs in the common use of the word are either small animals, or programming errors. This is neither. It's an algorithm that has certain artifacts, and some software (the long list in tfs) apparently uses a different algorithm.
</p><p>So how is this news? What does it have to do on<nobr> <wbr></nobr>/. really? This is more something to discuss for graphics people, not for computer people like we are.</p></htmltext>
<tokenext>Reading the comments I finally start to understand what tfs is trying to say .
All in all I would not call this a bug .
Also not a feature .
It 's an artifact at best .
Bugs in the common use of the word are either small animals , or programming errors .
This is neither .
It 's an algorithm that has certain artifacts , and some software ( the long list in tfs ) apparently uses a different algorithm .
So how is this news ?
What does it have to do on /. really ?
This is more something to discuss for graphics people , not for computer people like we are .</tokentext>
<sentencetext>Reading the comments I finally start to understand what tfs is trying to say.
All in all I would not call this a bug.
Also not a feature.
It's an artifact at best.
Bugs in the common use of the word are either small animals, or programming errors.
This is neither.
It's an algorithm that has certain artifacts, and some software (the long list in tfs) apparently uses a different algorithm.
So how is this news?
What does it have to do on /. really?
This is more something to discuss for graphics people, not for computer people like we are.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254398</id>
	<title>FP!</title>
	<author>Anonymous</author>
	<datestamp>1266938280000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>FP!</htmltext>
<tokenext>FP !</tokentext>
<sentencetext>FP!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259484</id>
	<title>Re:Oh calm down..</title>
	<author>Anonymous</author>
	<datestamp>1265127300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I have a theory about why this has not been flagged previously, too, but there isn't room in this narrow box to explain it.</p></htmltext>
<tokenext>I have a theory about why this has not been flagged previously , too , but there is n't room in this narrow box to explain it .</tokentext>
<sentencetext>I have a theory about why this has not been flagged previously, too, but there isn't room in this narrow box to explain it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258626</id>
	<title>Re:Editing in RGB is wrong too</title>
	<author>noidentity</author>
	<datestamp>1265122740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Several people have spoken about "linear" RGB. That's nice and gets rid of some small level of distortion introduced by the non-linearity. However, it only starts there. For example, the eye sees R, G, and B differently. It is more sensitive to green than red, and to red more than blue, but it's not even that simple as the equations in your eye's processor are much more complicated.</p></div> </blockquote><p>Actually, you've got it backwards. Image scaling is just simulating moving closer to or farther from the image, not the eye. But most images are stored in sRGB (or similar), which accommodates the eye's non-linear sensitivity. Thus, when scaling, the image must be converted to linear first, in order to eliminate this accommodation.</p>
	</htmltext>
<tokenext>Several people have spoken about " linear " RGB .
That 's nice and gets rid of some small level of distortion introduced by the non-linearity .
However , it only starts there .
For example , the eye sees R , G , and B differently .
It is more sensitive to green than red , and to red more than blue , but it 's not even that simple as the equations in your eye 's processor are much more complicated .
Actually , you 've got it backwards .
Image scaling is just simulating moving closer to or farther from the image , not the eye .
But most images are stored in sRGB ( or similar ) , which accommodates the eye 's non-linear sensitivity .
Thus , when scaling , the image must be converted to linear first , in order to eliminate this accommodation .</tokentext>
<sentencetext>Several people have spoken about "linear" RGB.
That's nice and gets rid of some small level of distortion introduced by the non-linearity.
However, it only starts there.
For example, the eye sees R, G, and B differently.
It is more sensitive to green than red, and to red more than blue, but it's not even that simple as the equations in your eye's processor are much more complicated.
Actually, you've got it backwards.
Image scaling is just simulating moving closer to or farther from the image, not the eye.
But most images are stored in sRGB (or similar), which accommodates the eye's non-linear sensitivity.
Thus, when scaling, the image must be converted to linear first, in order to eliminate this accommodation.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648</parent>
</comment>
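The decode-first rule the parent describes can be shown with just two pixels. The constants below are the standard sRGB transfer-function ones; the scalar helpers are my own naming. Averaging black and white directly in sRGB gives 0.5, while averaging in linear light and re-encoding gives roughly 0.735:

```python
def srgb_to_linear(v):
    """Decode an sRGB value in [0, 1] to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode linear light in [0, 1] back to sRGB."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Average a black pixel (0.0) and a white pixel (1.0):
naive = (0.0 + 1.0) / 2                 # 0.5 in sRGB: displays too dark
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
# correct is about 0.735, noticeably brighter than the naive 0.5
```

The gap between the two numbers is exactly the brightness loss the article's test images make visible.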
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255626</id>
	<title>By the way:</title>
	<author>Hurricane78</author>
	<datestamp>1266947760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Has anyone else noticed the &ldquo;dark gamma&rdquo; cancer that came over the Internet since the dawn of cheap LCD displays?</p><p>I have a couple of very carefully calibrated displays here, and pretty much everything on YouTube and every image on the net has a way too deep gamma.<br>Which is caused by the LCDs (especially the cheap ones) all being extremely white and having a distorted gamma by default.</p><p>It&rsquo;s really annoying, since I always have to switch the color profile when I want to see anything in those videos. And images I create for the web end up looking very white on Joe Sixpack&rsquo;s displays.<nobr> <wbr></nobr>:(</p><p>I wish there was a way to punch every display company boss in the face for not enforcing proper calibration.<nobr> <wbr></nobr>:/</p></htmltext>
<tokenext>Has anyone else noticed the    dark gamma    cancer that came over the Internet since the dawn of cheap LCD displays ? I have a couple of very carefully calibrated displays here , and pretty much everything on YouTube and every image on the net has a way too deep gamma.Which is caused by the LCDs ( especially the cheap ones ) all being extremely white and having a distorted gamma by default.It    s really annoying , since I always have to switch the color profile when I want to see anything in those videos .
And images I create for the web end up looking very white on Joe Sixpack    s displays .
: ( I wish there was a way to punch every display company boss in the face for not enforcing proper calibration .
: /</tokentext>
<sentencetext>Has anyone else noticed the “dark gamma” cancer that came over the Internet since the dawn of cheap LCD displays?I have a couple of very carefully calibrated displays here, and pretty much everything on YouTube and every image on the net has a way too deep gamma.Which is caused by the LCDs (especially the cheap ones) all being extremely white and having a distorted gamma by default.It’s really annoying, since I always have to switch the color profile when I want to see anything in those videos.
And images I create for the web end up looking very white on Joe Sixpack’s displays.
:( I wish there was a way to punch every display company boss in the face for not enforcing proper calibration.
:/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260730</id>
	<title>what is the affect on color distances calculations</title>
	<author>swframe</author>
	<datestamp>1265132520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>
(This may be off topic, in which case, please ignore.)
I was wondering if rgb should be converted to linear before
computing color distances? And, what is the better way to
compute color distances (since the euclidean distance of
rgb values seems like a really bad choice).</htmltext>
<tokenext>( This may be off topic , in which case , please ignore .
) I was wondering if rgb should be converted to linear before computing color distances ?
And , what is the better way to compute color distances ( since the euclidean distance of rgb values seems like a really bad choice ) .</tokentext>
<sentencetext>
(This may be off topic, in which case, please ignore.)
I was wondering if rgb should be converted to linear before
computing color distances?
And, what is the better way to
compute color distances (since the euclidean distance of
rgb values seems like a really bad choice).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256944</id>
	<title>And if you're in China . . .</title>
	<author>Anonymous</author>
	<datestamp>1265104800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>. . . you'll probably see an error message.</p><p>Great idea using the Dalai Lama image.</p></htmltext>
<tokenext>.
. .
you 'll probably see an error message.Great idea using the Dalai Lama image .</tokentext>
<sentencetext>.
. .
you'll probably see an error message.Great idea using the Dalai Lama image.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254740</id>
	<title>Gamma</title>
	<author>Anonymous</author>
	<datestamp>1266940500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>As a programmer who knows nothing about graphics algorithms, can somebody explain to me exactly what gamma is? I've been told I should be worrying about it for at least a couple decades, but thus far my lack of knowledge has not caused me any bodily injury. Use small words.</p></htmltext>
<tokenext>As a programmer who knows nothing about graphics algorithms , can somebody explain to me exactly what gamma is ?
I 've been told I should be worrying about it for at least a couple decades , but thus far my lack of knowledge has not caused me any bodily injury .
Use small words .</tokentext>
<sentencetext>As a programmer who knows nothing about graphics algorithms, can somebody explain to me exactly what gamma is?
I've been told I should be worrying about it for at least a couple decades, but thus far my lack of knowledge has not caused me any bodily injury.
Use small words.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260066</id>
	<title>Could some one explain the following then</title>
	<author>goombah99</author>
	<datestamp>1265129580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I work with RGB for science purposes all the time.  To me the RGB levels are directly proportional to the number of photons collected.   If I fuse CCD pixels then the number of photons collected is just the sum of the pixel counts.  If this were my eyeball and not a photon collector then the same logic applies.  If you reduce the number of pixels it's like focusing the image to a smaller portion of your retina.  Each rod will collect proportionally more photons.</p><p>Thus if I reproduce this image on screen so that the number of photons leaving a pixel is proportional to the RGB level then this is reproducing the image as I would have seen it.</p><p>When I go to display this on a screen, the graphics system will naturally apply a gamma correction. This correction is there to correct for the screen phosphor response NOT the eyeball's response.  It is intended to make it so that 128 has twice the photons as 64.</p><p>Thus I think the analysis is wrong.  You don't want to average in gamma space.  You want to average in RGB space then apply a gamma just like is being done.</p><p>I think the real problem is that when you sum pixels, not only is the sum larger but the dynamic range of the image pixels is larger.   Thus if you need to compress this back to 8-bit dynamic range you have a problem.</p><p>So my conclusion is the problem is NOT that you want to do<br>pixel^gamma -&gt;  average regions -&gt;  pixel^(1/gamma)</p><p>but instead average linearly<br>pixel^gamma -&gt;  sum regions -&gt;   ???  somehow compress dynamic range ????</p><p>I don't know what the correct compression is.  
But it makes no sense to me to non-linearly sum things that correspond to actual photon counts.</p><p>So either I misread the article (don't think so), or the article is right about there being an overall dynamic range compression issue but wrong about the solution, or I am not understanding some key concept here.</p><p>Can someone explain to me what I'm missing?</p></htmltext>
<tokenext>I work with RGB for science purposes all the time .
To me the RGB levels are directly proportional to the number of photons collected .
If I fuse CCD pixels then the number of photons collected is just the sum of the pixel counts .
If this were my eyeball and not a photon collector then the same logic applies .
If you reduce the number of pixels it 's like focusing the image to a smaller portion of your retina .
Each rod will collect proportionally more photons .
Thus if I reproduce this image on screen so that the number of photons leaving a pixel is proportional to the RGB level then this is reproducing the image as I would have seen it .
When I go to display this on a screen , the graphics system will naturally apply a gamma correction .
This correction is there to correct for the screen phosphor response NOT the eyeball 's response .
It is intended to make it so that 128 has twice the photons as 64 .
Thus I think the analysis is wrong .
You do n't want to average in gamma space .
You want to average in RGB space then apply a gamma just like is being done .
I think the real problem is that when you sum pixels , not only is the sum larger but the dynamic range of the image pixels is larger .
Thus if you need to compress this back to 8-bit dynamic range you have a problem .
So my conclusion is the problem is NOT that you want to do pixel ^ gamma - &gt; average regions - &gt; pixel ^ ( 1/gamma ) but instead average linearly pixel ^ gamma - &gt; sum regions - &gt; ? ? ? somehow compress dynamic range ? ? ? ?
I do n't know what the correct compression is .
But it makes no sense to me to non-linearly sum things that correspond to actual photon counts .
So either I misread the article ( do n't think so ) , or the article is right about there being an overall dynamic range compression issue but wrong about the solution , or I am not understanding some key concept here .
Can someone explain to me what I 'm missing ?</tokentext>
<sentencetext>I work with RGB for science purposes all the time.
To me the RGB levels are directly proportional to the number of photons collected.
If I fuse CCD pixels then the number of photons collected is just the sum of the pixel counts.
If this were my eyeball and not a photon collector then the same logic applies.
If you reduce the number of pixels it's like focusing the image to a smaller portion of your retina.
Each rod will collect proportionally more photons.
Thus if I reproduce this image on screen so that the number of photons leaving a pixel is proportional to the RGB level then this is reproducing the image as I would have seen it.
When I go to display this on a screen, the graphics system will naturally apply a gamma correction.
This correction is there to correct for the screen phosphor response NOT the eyeball's response.
It is intended to make it so that 128 has twice the photons as 64.
Thus I think the analysis is wrong.
You don't want to average in gamma space.
You want to average in RGB space then apply a gamma just like is being done.
I think the real problem is that when you sum pixels, not only is the sum larger but the dynamic range of the image pixels is larger.
Thus if you need to compress this back to 8-bit dynamic range you have a problem.
So my conclusion is the problem is NOT that you want to do pixel^gamma -&gt; average regions -&gt; pixel^(1/gamma) but instead average linearly pixel^gamma -&gt; sum regions -&gt; ??? somehow compress dynamic range ????
I don't know what the correct compression is.
But it makes no sense to me to non-linearly sum things that correspond to actual photon counts.
So either I misread the article (don't think so), or the article is right about there being an overall dynamic range compression issue but wrong about the solution, or I am not understanding some key concept here.
Can someone explain to me what I'm missing?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</parent>
</comment>
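The pipeline quoted in the comment (pixel^gamma -> average -> ^(1/gamma)) can be written out directly. This sketch uses the simple 2.2 power-law approximation rather than the exact piecewise sRGB curve, and it averages rather than sums, so the result stays in 8-bit range and no extra dynamic-range compression step is needed; the function name is my own:

```python
GAMMA = 2.2  # power-law approximation of the sRGB encoding

def average_gamma_aware(pixels):
    """Average 8-bit pixel values in (approximately) linear light:
    decode with gamma, take the mean, re-encode with 1/gamma."""
    linear = [(p / 255.0) ** GAMMA for p in pixels]
    mean = sum(linear) / len(linear)  # a mean, not a sum: stays in [0, 1]
    return round(255.0 * mean ** (1.0 / GAMMA))

print(average_gamma_aware([0, 255]))  # 186, not the naive midpoint 128
```

Because the linear values are averaged rather than summed, the intermediate result never exceeds the representable range, which is one answer to the dynamic-range worry above.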
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255474</id>
	<title>Re:HA!</title>
	<author>Korin43</author>
	<datestamp>1266946080000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Well I tested the site in lynx and I didn't see any problems..</htmltext>
<tokenext>Well I tested the site in lynx and I did n't see any problems. .</tokentext>
<sentencetext>Well I tested the site in lynx and I didn't see any problems..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256230</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1266954060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>This is one of many reasons why creative professionals prefer macs over PCs</p></div><p>macs are PCs...</p>
	</htmltext>
<tokenext>This is one of many reasons why creative professionals prefer macs over PCsmacs are PCs.. .</tokentext>
<sentencetext>This is one of many reasons why creative professionals prefer macs over PCsmacs are PCs...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259528</id>
	<title>Re:Not so common image</title>
	<author>omnichad</author>
	<datestamp>1265127420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>These aren't scanlines.  These are deliberate adjacent rows that are vastly different, for the purpose of exaggerating the problem.  Even blurring (in Photoshop CS3) gives an almost completely gray image.  The alternating rows are set up so that the red row + the green row = grey, unless you correct for gamma before you add them, in which case you get the intended colors.</p></htmltext>
<tokenext>These are n't scanlines .
These are deliberate adjacent rows that are vastly different , for the purpose of exaggerating the problem .
Even blurring ( in Photoshop CS3 ) gives an almost completely gray image .
The alternating rows are set up so that the red row + the green row = grey , unless you correct for gamma before you add them , in which case you get the intended colors .</tokentext>
<sentencetext>These aren't scanlines.
These are deliberate adjacent rows that are vastly different, for the purpose of exaggerating the problem.
Even blurring (in Photoshop CS3) gives an almost completely gray image.
The alternating rows are set up so that the red row + the green row = grey, unless you correct for gamma before you add them, in which case you get the intended colors.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257916</id>
	<title>Re:Editing in RGB is wrong too</title>
	<author>Anonymous</author>
	<datestamp>1265116440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's exactly the point.</p><p>Almost all image operations are defined on numbers and thus result in different colors when applied in different color spaces. That's by definition. So if you have pixels 0/0/0 and 0/0/100, linearly interpolating the middle point gives 0/0/50. You can't "fix" that.</p><p>Sure, in sRGB this means that the result not only has "wrong" brightness (because of the gamma) but it has a "wrong" hue as well (compared to 0/0/100). Therefore, if you need a perceptually linear color space, you should use L*a*b* - this is what it's there for.</p></htmltext>
<tokenext>That 's exactly the point.Almost all image operations are defined on numbers and thus result in different colors when applied in different color spaces .
That 's by definition .
So if you have pixels 0/0/0 and 0/0/100 , linearly interpolating the middle point gives 0/0/50 .
You ca n't " fix " that .
Sure , in sRGB this means that the result not only has " wrong " brightness ( because of the gamma ) but it has a " wrong " hue as well ( compared to 0/0/100 ) .
Therefore , if you need a perceptually linear color space , you should use L * a * b * - this is what it 's there for .</tokentext>
<sentencetext>That's exactly the point.Almost all image operations are defined on numbers and thus result in different colors when applied in different color spaces.
That's by definition.
So if you have pixels 0/0/0 and 0/0/100, linearly interpolating the middle point gives 0/0/50.
You can't "fix" that.
Sure, in sRGB this means that the result not only has "wrong" brightness (because of the gamma) but it has a "wrong" hue as well (compared to 0/0/100).
Therefore, if you need a perceptually linear color space, you should use L*a*b* - this is what it's there for.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648</parent>
</comment>
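For the color-distance question raised elsewhere in the thread, "use L*a*b*" looks roughly like this: sRGB -> linear -> XYZ (D65) -> L*a*b*, then Euclidean distance (the CIE76 delta-E). The matrix and constants are the published sRGB/CIE ones; the function names are mine:

```python
import math

def srgb_to_linear(v):
    """Decode an sRGB value in [0, 1] to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b* (D65 white point)."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(rgb1, rgb2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(srgb_to_lab(rgb1), srgb_to_lab(rgb2))
```

Distances computed this way track perceived difference far better than a Euclidean distance on raw sRGB bytes, which is the point of the parent's recommendation.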
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254570</id>
	<title>I look better in person.</title>
	<author>ipquickly</author>
	<datestamp>1266939240000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>I've been telling people for years that I look better in person.<br>I told them that there's something wrong with pictures of me.</p><p>HA!</p><p>Now I know.</p><p>It's the Scaling Algorithm BUG!</p></htmltext>
<tokenext>I 've been telling people for years that I look better in person.I told them that there 's something wrong with pictures of me.HA ! Now I know.It 's the Scaling Algorithm BUG !</tokentext>
<sentencetext>I've been telling people for years that I look better in person.I told them that there's something wrong with pictures of me.HA!Now I know.It's the Scaling Algorithm BUG!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31261702</id>
	<title>Re:Monitor gamma?</title>
	<author>sounds</author>
	<datestamp>1265136300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think you're wrong.  The data in the pictures is "supposed" to be linear, but with amateurs creating content on non-compliant systems the reality is that it rarely is.</p><p>Our eyes are certainly non-linear, and so is a CRT, but the whole point of having the gamma curve is so that all processing (math) can be done on linear data.  If the data in the picture is not linear, then why bother with gamma conversion at all?  Another part of the problem is that different systems have different gamma, from 0.0 to 1.4 to 2.2, and that means you can't assume the image data was properly captured to a linear curve.  I think that the biggest culprit here is Windows, which had no color management, and so users just adjusted colors to look right on their CRT, and eventually the web standard became this non-professional workflow.</p><p>In other words, the assumption of 2.2 gamma that you refer to is only valid when the output device has a gamma of 2.2, while another gamma would need to be assumed when a different output device is used.  The only constant is that the data should be linear, but we all know that you can't rely on data to follow standards when every grandma has a camera or scanner.</p></htmltext>
<tokenext>I think you 're wrong .
The data in the pictures is " supposed " to be linear , but with amateurs creating content on non-compliant systems the reality is that it rarely is .
Our eyes are certainly non-linear , and so is a CRT , but the whole point of having the gamma curve is so that all processing ( math ) can be done on linear data .
If the data in the picture is not linear , then why bother with gamma conversion at all ?
Another part of the problem is that different systems have different gamma , from 0.0 to 1.4 to 2.2 , and that means you ca n't assume the image data was properly captured to a linear curve .
I think that the biggest culprit here is Windows , which had no color management , and so users just adjusted colors to look right on their CRT , and eventually the web standard became this non-professional workflow .
In other words , the assumption of 2.2 gamma that you refer to is only valid when the output device has a gamma of 2.2 , while another gamma would need to be assumed when a different output device is used .
The only constant is that the data should be linear , but we all know that you ca n't rely on data to follow standards when every grandma has a camera or scanner .</tokentext>
<sentencetext>I think you're wrong.
The data in the pictures is "supposed" to be linear, but with amateurs creating content on non-compliant systems the reality is that it rarely is.
Our eyes are certainly non-linear, and so is a CRT, but the whole point of having the gamma curve is so that all processing (math) can be done on linear data.
If the data in the picture is not linear, then why bother with gamma conversion at all?
Another part of the problem is that different systems have different gamma, from 0.0 to 1.4 to 2.2, and that means you can't assume the image data was properly captured to a linear curve.
I think that the biggest culprit here is Windows, which had no color management, and so users just adjusted colors to look right on their CRT, and eventually the web standard became this non-professional workflow.
In other words, the assumption of 2.2 gamma that you refer to is only valid when the output device has a gamma of 2.2, while another gamma would need to be assumed when a different output device is used.
The only constant is that the data should be linear, but we all know that you can't rely on data to follow standards when every grandma has a camera or scanner.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254584</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260220</id>
	<title>Re:Wrong</title>
	<author>Anonymous</author>
	<datestamp>1265130120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Millions of "users" also thought the earth was flat (or, the center of Everything) for well over two decades...</p></htmltext>
<tokenext>Millions of " users " also thought the earth was flat ( or , the center of Everything ) for well over two decades.. .</tokentext>
<sentencetext>Millions of "users" also thought the earth was flat (or, the center of Everything) for well over two decades...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424</id>
	<title>What about Irfanview and Picasa?</title>
	<author>cytoman</author>
	<datestamp>1266938460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm not a pro photo-software user so I guess none of what's discussed here really affects me. But still, I'm curious about Irfanview and Picasa, the two programs that I use for my photo needs. Are these affected? How do I detect the effect?</htmltext>
<tokenext>I 'm not a pro photosoftware user so I guess none of what 's discussed here really affects me .
But still , I 'm curious about Irfanview and Picasa , the two programs that I use for my photo needs .
Are these affected ?
How do I detect the effect ?</tokentext>
<sentencetext>I'm not a pro photosoftware user so I guess none of what's discussed here really affects me.
But still, I'm curious about Irfanview and Picasa, the two programs that I use for my photo needs.
Are these affected?
How do I detect the effect?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258136</id>
	<title>Out of interest...</title>
	<author>Anonymous</author>
	<datestamp>1265118720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Since I'm not currently in a position to do the experiment, does Photoshop do the filtering operations for scaling in a linear space if you set the "Blend RGB Colors Using Gamma..." option under the color settings? (Or does that only work for blending images?)

I can understand the filtering mode being optional, because converting to a linear space and back (especially doing it properly, rather than with a gamma hack) is going to be slow for an arbitrary color space, but I'd not appreciated that Photoshop lacked the option entirely.</htmltext>
<tokenext>Since I 'm not currently in a position to do the experiment , does Photoshop do the filtering operations for scaling in a linear space if you set the " Blend RGB Colors Using Gamma... " option under the color settings ?
( Or does that only work for blending images ? )
I can understand the filtering mode being optional , because converting to a linear space and back ( especially doing it properly , rather than with a gamma hack ) is going to be slow for an arbitrary color space , but I 'd not appreciated that Photoshop lacked the option entirely .</tokentext>
<sentencetext>Since I'm not currently in a position to do the experiment, does Photoshop do the filtering operations for scaling in a linear space if you set the "Blend RGB Colors Using Gamma..." option under the color settings?
(Or does that only work for blending images?)

I can understand the filtering mode being optional, because converting to a linear space and back (especially doing it properly, rather than with a gamma hack) is going to be slow for an arbitrary color space, but I'd not appreciated that Photoshop lacked the option entirely.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254580</id>
	<title>A great demo...</title>
	<author>Interoperable</author>
	<datestamp>1266939360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>for people with poor quality displays!</p><p>I now have a much better understanding of why I have to constantly adjust the angle of my (laptop) monitor every time I move my head. Some of the demos on that page are great for illustrating the effect of a poor quality display (or poor scaling algorithm) on picture quality. I'll keep that page in mind the next time I shop for a laptop.</p></htmltext>
<tokenext>for people with poor quality displays ! I now have a much better understanding of why I have to constantly adjust the angle of my ( laptop ) monitor every time I move my head .
Some of the demos on that page are great for illustrating the effect of a poor quality display ( or poor scaling algorithm ) on picture quality .
I 'll keep that page in mind the next time I shop for a laptop .</tokentext>
<sentencetext>for people with poor quality displays!I now have a much better understanding of why I have to constantly adjust the angle of my (laptop) monitor every time I move my head.
Some of the demos on that page are great for illustrating the effect of a poor quality display (or poor scaling algorithm) on picture quality.
I'll keep that page in mind the next time I shop for a laptop.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256594</id>
	<title>Re:HA!</title>
	<author>ByteSlicer</author>
	<datestamp>1265143620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^H<b>Emacs</b>.</p></div></blockquote><p>
There, fixed that for you.</p>
	</htmltext>
<tokenext>Well , I am SURE glad I 'm using Linux ^ H ^ H ^ H ^ H ^ HWindows ^ H ^ H ^ H ^ H ^ H ^ H ^ HMac ^ H ^ H ^ HEmacs .
There , fixed that for you .</tokentext>
<sentencetext>Well, I am SURE glad I'm using Linux^H^H^H^H^HWindows^H^H^H^H^H^H^HMac^H^H^HEmacs.
There, fixed that for you.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255652</id>
	<title>Re:Oh dear. Linear color space again, 11 years lat</title>
	<author>Anonymous</author>
	<datestamp>1266948000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Helmut Dersch (of Panorama Tools fame) certainly posted about this before;<br><a href="http://www.all-in-one.ee/~dersch/gamma/gamma.html" title="all-in-one.ee" rel="nofollow">http://www.all-in-one.ee/~dersch/gamma/gamma.html</a> [all-in-one.ee] - Interpolation and Gamma Correction</p></div><p>255 / 2 = 122 ???</p>
	</htmltext>
<tokenext>Helmut Dersch ( of Panorama Tools fame ) certainly posted about this before ; http : //www.all-in-one.ee/ ~ dersch/gamma/gamma.html [ all-in-one.ee ] - Interpolation and Gamma Correction255 / 2 = 122 ? ?
?</tokentext>
<sentencetext>Helmut Dersch (of Panorama Tools fame) certainly posted about this before;http://www.all-in-one.ee/~dersch/gamma/gamma.html [all-in-one.ee] - Interpolation and Gamma Correction255 / 2 = 122 ??
?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255332</id>
	<title>Re:Some look worse.</title>
	<author>PitaBred</author>
	<datestamp>1266944820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Look at the islands and the lakes. They're crisp at the edges, whereas those islands disappear when you use the incorrect scaling. I wouldn't use the incorrectly scaled image for anything important.</p></htmltext>
<tokenext>Look at the islands and the lakes .
They 're crisp at the edges , whereas those islands disappear when you use the incorrect scaling .
I would n't use the incorrectly scaled image for anything important .</tokentext>
<sentencetext>Look at the islands and the lakes.
They're crisp at the edges, whereas those islands disappear when you use the incorrect scaling.
I wouldn't use the incorrectly scaled image for anything important.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254702</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255054</id>
	<title>MSPaint scales selection differently</title>
	<author>Anonymous</author>
	<datestamp>1266942780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Taking the Dalai Lama picture, I get two different results from MSPaint.  If I paste the image into a new file, deselect, and then scale the entire image by 50% (in each direction, so to 1/4 size), I get the gray box.  However, if I select all, and then scale by 50%, I get an odd magenta Dalai Lama.

Any insights as to why this  might be?</htmltext>
<tokenext>Taking the Dalai Lama picture , I get two different results from MSPaint .
If I paste the image into a new file , deselect , and then scale the entire image by 50 \ % ( in each direction so to 1/4 size ) , I get the gray box .
However , if I select all , and then scale by 50 \ % , I get an odd magenta Dalai Lama .
Any insights as to why this might be ?</tokentext>
<sentencetext>Taking the Dalai Lama picture, I get two different results from MSPaint.
If I paste the image into a new file, deselect, and then scale the entire image by 50\% (in each direction so to 1/4 size), I get the gray box.
However, if I select all, and then scale by 50\%, I get an odd magenta Dalai Lama.
Any insights as to why this  might be?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255500</id>
	<title>Re:Monitor gamma?</title>
	<author>Anonymous</author>
	<datestamp>1266946440000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>Yes, you are missing something. Human perception isn't linear either. Twice the amount of light does not look twice as bright. Our eyes see differences between dark tones more clearly. The result is that we need many more dark tones than light tones for an "evenly" distributed tone curve (which is a tone curve where two neighboring light tones appear to be the same brightness difference as two neighboring dark colors). A physically linear gradient has the perceptual half tone shifted close to the black point.</p><p>One consequence is that if you store an image with linear gamma, you need more bits to cover the same dynamic range with the same minimal distance between two dark tones. You can immediately see the decrease in resolution for the dark tones when you create an 8-bit image with a black-white gradient in Photoshop and then convert this image to a color profile with gamma 1.0.</p><p>So not only is the 2.2 gamma which is used in the sRGB standard a sensible choice for the display technology of yesteryear, it also makes better use of the allocated bits than a gamma 1.0 image would.</p></htmltext>
<tokenext>Yes , you are missing something .
Human perception is n't linear either .
Twice the amount of light does not look twice as bright .
Our eyes see differences between dark tones more clearly .
The result is that we need many more dark tones than light tones for an " evenly " distributed tone curve ( which is a tone curve where two neighboring light tones appear to be the same brightness difference as two neighboring dark colors ) .
A physically linear gradient has the perceptual half tone shifted close to the black point.One consequence is that if you store an image with linear gamma , you need more bits to cover the same dynamic range with the same minimal distance between two dark tones .
You can immediately see the decrease in resolution for the dark tones when you create an 8-bit image with a black-white gradient in Photoshop and then convert this image to a color profile with gamma 1.0.So not only is the 2.2 gamma which is used in the sRGB standard a sensible choice for the display technology of yesteryear , it also makes better use of the allocated bits than a gamma 1.0 image would .</tokentext>
<sentencetext>Yes, you are missing something.
Human perception isn't linear either.
Twice the amount of light does not look twice as bright.
Our eyes see differences between dark tones more clearly.
The result is that we need many more dark tones than light tones for an "evenly" distributed tone curve (which is a tone curve where two neighboring light tones appear to be the same brightness difference as two neighboring dark colors).
A physically linear gradient has the perceptual half tone shifted close to the black point.One consequence is that if you store an image with linear gamma, you need more bits to cover the same dynamic range with the same minimal distance between two dark tones.
You can immediately see the decrease in resolution for the dark tones when you create an 8-bit image with a black-white gradient in Photoshop and then convert this image to a color profile with gamma 1.0.So not only is the 2.2 gamma which is used in the sRGB standard a sensible choice for the display technology of yesteryear, it also makes better use of the allocated bits than a gamma 1.0 image would.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255076</parent>
</comment>
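The bit-allocation point in the comment above is easy to check numerically. A minimal sketch, assuming a pure power-law gamma of 2.2 (the real sRGB curve is piecewise, but close): re-encoding the darkest fifty-one 8-bit gamma codes into an 8-bit linear encoding collapses them onto just a handful of values.

```python
# Sketch: how many distinct 8-bit *linear* codes survive for the darkest
# 8-bit gamma-encoded codes?  Assumes a pure 2.2 power law, not the
# exact piecewise sRGB curve.

GAMMA = 2.2

def gamma_code_to_linear_code(v: int) -> int:
    """Re-encode an 8-bit gamma-2.2 value as an 8-bit linear-light value."""
    return round(((v / 255.0) ** GAMMA) * 255.0)

# The 51 darkest gamma codes (0..50) land on only ~8 distinct linear codes,
# so a linear 8-bit encoding wastes most of its precision on highlights.
dark_linear = {gamma_code_to_linear_code(v) for v in range(51)}
print(sorted(dark_linear))
```

This is exactly why 8-bit linear storage posterizes shadows, and why linear-light processing is usually done at 16 or 32 bits per channel.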
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257094</id>
	<title>Simple explanation</title>
	<author>Anonymous</author>
	<datestamp>1265106660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>All of the software in question had already started development before there was any kind of standardization of monitor gamma, at a time when most image formats, even if they had considered the question of how to contain information about the gamma, were written without proper gamma information.</p><p>The standard for Apple monitor gamma was 1.6, and for IBM PC monitors it varied but was almost invariably larger.</p><p>So it was <b>impossible</b> for these software projects to "do it right"; they just did it the only way they could, by assuming that gray levels were linear. BTW, that was the case for NetPBM also for at least the first 10 years of its life.</p></htmltext>
<tokenext>All of the software in question had already started development before there was any kind of standardization of monitor gamma , at a time where most image formats , even if they had considered the question of how to contain information about the gamma , were written incorrectly without proper gamma information.The standard for Apple monitor gamma was 1.6 , and for IBM PC monitors it varied but was almost invariably larger.So it was impossible for these software projects to " do it right " , so they just did it the only way they could , by assuming that gray levels were linear .
BTW , that was the case for NetPBM also for at least the first 10 years of its life .</tokentext>
<sentencetext>All of the software in question had already started development before there was any kind of standardization of monitor gamma, at a time where most image formats, even if they had considered the question of how to contain information about the gamma, were written incorrectly without proper gamma information.The standard for Apple monitor gamma was 1.6, and for IBM PC monitors it varied but was almost invariably larger.So it was impossible for these software projects to "do it right", so they just did it the only way they could, by assuming that gray levels were linear.
BTW, that was the case for NetPBM also for at least the first 10 years of its life.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259360</id>
	<title>Re:short version</title>
	<author>omnichad</author>
	<datestamp>1265126640000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>No, it's an algorithm that's just plain wrong.  It's doing linear calculations on values that represent an exponential curve.  It's a pretty big screw-up, made by almost everyone that designs resampling algorithms.  Given that graphics people don't usually write software, it's not for them.  People that try to blend colors 0 and 255 in software need to know that the result should be 186 and not 127.</p></htmltext>
<tokenext>No , it 's an algorithm that 's just plain wrong .
It 's doing linear calculations on values that represent an exponential curve .
It 's a pretty big screw-up , made by almost everyone that designs resampling algorithms .
Given that graphics people do n't usually write software , it 's not for them .
People that try to blend colors 0 and 255 in software need to know that the result should be 186 and not 127 .</tokentext>
<sentencetext>No, it's an algorithm that's just plain wrong.
It's doing linear calculations on values that represent an exponential curve.
It's a pretty big screw-up, made by almost everyone that designs resampling algorithms.
Given that graphics people don't usually write software, it's not for them.
People that try to blend colors 0 and 255 in software need to know that the result should be 186 and not 127.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880</parent>
</comment>
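The 186-vs-127 figure in the comment above can be verified in a few lines. This is an illustrative sketch assuming a pure power-law gamma of 2.2 rather than the exact piecewise sRGB curve:

```python
GAMMA = 2.2  # assumed power-law approximation of the sRGB curve

def blend_naive(a: int, b: int) -> int:
    """Average two 8-bit values directly on the gamma-encoded codes."""
    return (a + b) // 2  # truncating average, as naive code often does

def blend_linear_light(a: int, b: int) -> int:
    """Decode to linear light, average, then re-encode."""
    la = (a / 255.0) ** GAMMA
    lb = (b / 255.0) ** GAMMA
    return round(((la + lb) / 2) ** (1 / GAMMA) * 255.0)

print(blend_naive(0, 255))         # 127
print(blend_linear_light(0, 255))  # 186
```

Perceptually, 186 is the code whose displayed light output is halfway between black and white; 127 displays only about a quarter of white's light output.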
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802</id>
	<title>Gamma and sRGB</title>
	<author>Anonymous</author>
	<datestamp>1266940920000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>The basic issue here has to do with gamma curves and the way they're being handled (they're not).<br> <br>
Most image files on your computer (BMP, JPG, PNG, etc.) are stored in the <a href="http://en.wikipedia.org/wiki/SRGB" title="wikipedia.org">sRGB</a> [wikipedia.org] color space.  sRGB defines the use of a gamma curve, which is a nonlinear transformation applied to each of the components (R, G, and B).  The issue here is that most scalers make the assumption that the components are linear, rather than try to process the gamma curve.  While this does save processing time (undoing the gamma curve then redoing it), it does add some error, especially when the values being scaled are not near each other.<br> <br>
So does this matter?  Well, in some pathological cases where there are repeated sharp boundaries (such as alternating black-white lines or fine checkerboard patterns), this would make a difference.  This is because the linear average of the pixels (what most image scalers use) yields a different result than if the gamma value was taken into account.
For most images (both photographic and computer generated), this shouldn't be a big problem.  Most samples are close in value to other nearby samples, so the error resulting from the gamma curve is very small.  Sparse light-dark transitions also wouldn't be noticeable as there would only be an error right on the boundary.  Only when you exercise this case over a large area does it become obvious.<br> <br>
One final point: this gamma scaling effect would occur regardless of the actual scaling algorithm.  Bilinear, bicubic, and sinc would all have the same issue.  Nearest neighbor interpolation would be unaffected, but in these cases, the output would look far worse.</htmltext>
<tokenext>The basic issue here has to do with gamma curves and the way they 're being handled ( they 're not ) .
Most image files on your computer ( BMP , JPG , PNG , etc .
) are stored in the sRGB [ wikipedia.org ] color space .
sRGB defines the use of a gamma curve , which is a nonlinear transformation applied to each of the components ( R , G , and B ) .
The issue here is that most scalers make the assumption that the components are linear , rather than try to process the gamma curve .
While this does save processing time ( undoing the gamma curve then redoing it ) , it does add some error , especially when the values being scaled are not near each other .
So does this matter ?
Well , in some pathological cases where there are repeated sharp boundaries ( such as alternating black-white lines or fine checkerboard patterns ) , this would make a difference .
This is because the linear average of the pixels ( what most image scalers use ) yields a different result than if the gamma value was taken into account .
For most images ( both photographic and computer generated ) , this should n't be a big problem .
Most samples are close in value to other nearby samples , so the error resulting from the gamma curve is very small .
Sparse light-dark transitions also would n't be noticeable as there would only be an error right on the boundary .
Only when you exercise this case over a large area does it become obvious .
One final point : this gamma scaling effect would occur regardless of the actual scaling algorithm .
Bilinear , bicubic , and sinc would all have the same issue .
Nearest neighbor interpolation would be unaffected , but in these cases , the output would look far worse .</tokentext>
<sentencetext>The basic issue here has to do with gamma curves and the way they're being handled (they're not).
Most image files on your computer (BMP, JPG, PNG, etc.
) are stored in the sRGB [wikipedia.org] color space.
sRGB defines the use of a gamma curve, which is a nonlinear transformation applied to each of the components (R, G, and B).
The issue here is that most scalers make the assumption that the components are linear, rather than try to process the gamma curve.
While this does save processing time (undoing the gamma curve then redoing it), it does add some error, especially when the values being scaled are not near each other.
So does this matter?
Well, in some pathological cases where there are repeated sharp boundaries (such as alternating black-white lines or fine checkerboard patterns), this would make a difference.
This is because the linear average of the pixels (what most image scalers use) yields a different result than if the gamma value was taken into account.
For most images (both photographic and computer generated), this shouldn't be a big problem.
Most samples are close in value to other nearby samples, so the error resulting from the gamma curve is very small.
Sparse light-dark transitions also wouldn't be noticeable as there would only be an error right on the boundary.
Only when you exercise this case over a large area does it become obvious.
One final point: this gamma scaling effect would occur regardless of the actual scaling algorithm.
Bilinear, bicubic, and sinc would all have the same issue.
Nearest neighbor interpolation would be unaffected, but in these cases, the output would look far worse.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
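The checkerboard case described above is easy to reproduce. This sketch (again assuming a pure gamma-2.2 power law as a stand-in for the sRGB curve) box-downscales a black/white checkerboard by 2 both ways:

```python
GAMMA = 2.2  # assumed power-law stand-in for the sRGB curve

def downscale2_naive(img):
    """2x box filter applied directly to gamma-encoded 8-bit values."""
    h, w = len(img), len(img[0])
    return [[round((img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4)
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def downscale2_linear(img):
    """2x box filter in linear light: decode, average, re-encode."""
    lin = [[(v / 255.0) ** GAMMA for v in row] for row in img]
    h, w = len(lin), len(lin[0])
    return [[round((((lin[y][x] + lin[y][x+1] + lin[y+1][x] + lin[y+1][x+1]) / 4)
                    ** (1 / GAMMA)) * 255.0)
             for x in range(0, w, 2)] for y in range(0, h, 2)]

# 4x4 checkerboard of pure black and white pixels
checker = [[255 if (x + y) % 2 == 0 else 0 for x in range(4)] for y in range(4)]

print(downscale2_naive(checker))   # every output pixel is 128 (too dark)
print(downscale2_linear(checker))  # every output pixel is 186
```

With near-equal neighboring values, as the parent notes, the two filters agree to within a code or so; only high-contrast fine detail exposes the gap.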
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257852</id>
	<title>Geek bias</title>
	<author>Anonymous</author>
	<datestamp>1265115660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Funny to see everybody pooh-poohing this effect: "oh you can barely even notice that, who cares", when it's outside their particular expertise, but when the discussion turns to flavours of Linux, or different coding methods or Star Wars trivia, the same geeks get sand all over their vaginas for even more subtle distinctions!</p><p>FWIW, I'm a graphic design professional, and I'd prefer my tools to do what it says on the box. I have little patience with "meh, it's close enough" when we are talking about MATHEMATICS where it's possible to get it ABSOLUTELY correct. And if all you need to do is a gamma conversion before and after the operation, I don't buy the "too much work" argument either.</p></htmltext>
<tokenext>Funny to see everybody pooh-poohing this effect : " oh you can barely even notice that , who cares " , when it 's outside their particular expertise , but when the discussion turns to flavours of Linux , or different coding methods or Star Wars trivia , the same geeks get sand all over their vaginas for even more subtle distinctions ! FWIW , I 'm a graphic design professional , and I 'd prefer my tools to do what it says on the box .
I have little patience with " meh , it 's close enough " when we are talking about MATHEMATICS where it 's possible get it ABSOLUTELY correct .
And if all you need to do is a gamma conversion before and after the operation , I do n't buy the " too much work " argument either .</tokentext>
<sentencetext>Funny to see everybody pooh-poohing this effect: "oh you can barely even notice that, who cares", when it's outside their particular expertise, but when the discussion turns to flavours of Linux, or different coding methods or Star Wars trivia, the same geeks get sand all over their vaginas for even more subtle distinctions!FWIW, I'm a graphic design professional, and I'd prefer my tools to do what it says on the box.
I have little patience with "meh, it's close enough" when we are talking about MATHEMATICS where it's possible get it ABSOLUTELY correct.
And if all you need to do is a gamma conversion before and after the operation, I don't buy the "too much work" argument either.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257706</id>
	<title>Re:Wrong</title>
	<author>Inda</author>
	<datestamp>1265114040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If it looks right, it is right.<br><br>They all look fine to me, Joe Average.</htmltext>
<tokenext>If it looks right , it is right.They all look fine to me , Joe Average .</tokentext>
<sentencetext>If it looks right, it is right.They all look fine to me, Joe Average.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256212</id>
	<title>Re:Not so common image</title>
	<author>kappa962</author>
	<datestamp>1266953700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Look at the article for more normal pictures, and how they generally become darker when they are scaled by algorithms with the bug.</p><p>Also, the blurring algorithm seems just as likely to have this averaging bug as the scaling algorithm.</p></htmltext>
<tokenext>Look at the article for more normal pictures , and how they generally become darker when they are scaled by algorithms with the bug.Also , the blurring algorithm seems just as likely to have this averaging bug as the scaling algorithm .</tokentext>
<sentencetext>Look at the article for more normal pictures, and how they generally become darker when they are scaled by algorithms with the bug.Also, the blurring algorithm seems just as likely to have this averaging bug as the scaling algorithm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632</parent>
</comment>
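The point above about blurring holds: a blur is the same weighted-average arithmetic as scaling, so naive blurring darkens bright detail the same way. A minimal 1-D sketch (pure gamma 2.2 assumed, not the exact sRGB curve):

```python
GAMMA = 2.2  # assumed power-law approximation

def box3_naive(row):
    """3-tap box blur directly on gamma-encoded values (edge pixels dropped)."""
    return [round((row[i-1] + row[i] + row[i+1]) / 3)
            for i in range(1, len(row) - 1)]

def box3_linear(row):
    """3-tap box blur in linear light: decode, average, re-encode."""
    lin = [(v / 255.0) ** GAMMA for v in row]
    return [round((((lin[i-1] + lin[i] + lin[i+1]) / 3) ** (1 / GAMMA)) * 255.0)
            for i in range(1, len(lin) - 1)]

line = [0, 0, 255, 0, 0]  # a one-pixel-wide white line on black
print(box3_naive(line))   # center 85: the line loses most of its apparent brightness
print(box3_linear(line))  # center 155: much closer to the true light energy
```

The same arithmetic underlies resampling, blurring, compositing, and anti-aliasing, which is why the bug shows up across so many operations.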
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259132</id>
	<title>Re:Monitor gamma?</title>
	<author>omnichad</author>
	<datestamp>1265125320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>the values from 0-255 <i>are</i> intended to be displayed with a gamma correction.  They aren't in themselves actually brightness values.  This isn't really the *monitor* gamma.  It's just the gamma.  The formula for converting the stored numerical value to the actual brightness value.  The way the eye interprets brightness is logarithmic and this is just how a computer does the same.</p></htmltext>
<tokenext>the values from 0-255 are intended to be displayed with a gamma correction .
They are n't in themselves actually brightness values .
This is n't really the * monitor * gamma .
It 's just the gamma .
The formula for converting the stored numerical value to the actual brightness value .
The way the eye interprets brightness is logarithmic and this is just how a computer does the same .</tokentext>
<sentencetext>the values from 0-255 are intended to be displayed with a gamma correction.
They aren't in themselves actually brightness values.
This isn't really the *monitor* gamma.
It's just the gamma.
The formula for converting the stored numerical value to the actual brightness value.
The way the eye interprets brightness is logarithmic and this is just how a computer does the same.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420</parent>
</comment>
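The conversion formula the comment above alludes to is standardized: sRGB uses a short linear segment near black followed by a 2.4-exponent power segment, which together behave roughly like a 2.2 power law. A direct transcription of the published sRGB transfer function:

```python
def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function, for an encoded value c in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    """Inverse transfer function, for linear light l in [0, 1]."""
    if l <= 0.0031308:
        return l * 12.92
    return 1.055 * (l ** (1 / 2.4)) - 0.055

# The encoded midpoint 0.5 corresponds to only ~21% of full light output,
# which is why averaging encoded values instead of linear ones darkens images.
print(round(srgb_to_linear(0.5), 3))  # 0.214
```

Pixel pipelines that "do it right" wrap every averaging operation between these two functions (typically in higher-precision intermediates).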
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258334</id>
	<title>Re:HA!</title>
	<author>pbhj</author>
	<datestamp>1265120580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://www.4p8.com/eric.brasseur/gamma_dalai_lama.html" title="4p8.com">http://www.4p8.com/eric.brasseur/gamma_dalai_lama.html</a> [4p8.com] displays all images identically in links2 using the svgalib driver, but it doesn't scale them, so it would look the same whether it has the bug or not.</p></htmltext>
<tokenext>http : //www.4p8.com/eric.brasseur/gamma \ _dalai \ _lama.html [ 4p8.com ] displays all images identically in links2 using the svgalib driver , but it does n't scale them so it would whether it has the bug or not .</tokentext>
<sentencetext>http://www.4p8.com/eric.brasseur/gamma\_dalai\_lama.html [4p8.com] displays all images identically in links2 using the svgalib driver, but it doesn't scale them so it would whether it has the bug or not.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255474</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255590</id>
	<title>Re:Nitpicking</title>
	<author>Anonymous</author>
	<datestamp>1266947400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Correct, we have known this for a long time. If desired, it can be fixed, but that does make the scaling slower and for most images, where the colour differences between neighbouring pixels are low, there is no visible difference. The tuned image that he used is very uncharacteristic for real life imagery.<br>Still, I suppose it would be good to get it fixed in ImageMagick. And the situation is yet another reminder that you shouldn't store scaled images. Store the original, and let the software scale it on the fly when you need it. That way you'll get the benefit of future improved scaling algorithms automatically.</p></htmltext>
<tokenext>Correct , we have known this for a long time .
If desired , it can be fixed , but that does make the scaling slower and for most images , where the colour differences between neighbouring pixels are low , there is no visible difference .
The tuned image that he used is very uncharacteristic for real life imagery.Still , I suppose it would be need to get it fixed in ImageMagick .
And the situation is yet another reminder that you should n't store scaled images .
Store the original , and let the software scale it on the fly when you need it .
That way you 'll get the benefit of future improved scaling algorithms automatically .</tokentext>
<sentencetext>Correct, we have known this for a long time.
If desired, it can be fixed, but that does make the scaling slower and for most images, where the colour differences between neighbouring pixels are low, there is no visible difference.
The tuned image that he used is very uncharacteristic for real life imagery.Still, I suppose it would be need to get it fixed in ImageMagick.
And the situation is yet another reminder that you shouldn't store scaled images.
Store the original, and let the software scale it on the fly when you need it.
That way you'll get the benefit of future improved scaling algorithms automatically.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31311120</id>
	<title>Whaa...?</title>
	<author>Anonymous</author>
	<datestamp>1267369500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Who is that bald dude? I wanted my lines in the smaller image, not him!</p></htmltext>
<tokenext>Who is that bald dude ?
I wanted my lines in the smaller image , not him !</tokentext>
<sentencetext>Who is that bald dude?
I wanted my lines in the smaller image, not him!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254994</id>
	<title>Re:HA!</title>
	<author>Anonymous</author>
	<datestamp>1266942300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I use shit too!</p></htmltext>
<tokenext>I use shit too !</tokentext>
<sentencetext>I use shit too!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255282</id>
	<title>Correct, yes.  Expected, maybe.  Desired, no.</title>
	<author>Animaether</author>
	<datestamp>1266944460000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>I think the author specifically isn't stating whether the scaling is correct or not - it is; the whole story doesn't relate to scaling at all, but rather color space and how -it- affects, among others, scaling.  Yes, with filtering - scaling without filtering can hardly be called scaling at all as you're just discarding data - and for anything but multiples of 2 (4x, 2x, 0.5x, 0.25x, etc.) that'd have a whole 'nother set of problems.</p><p>The author, I think, is suggesting, quite rightly so, that while...</p><blockquote><div><p>The grey square shown is the correct result</p></div></blockquote><p><nobr> <wbr></nobr>...it is not the expected (by laymen) nor desired (by just about anybody) result.</p><p>The desired result for scaling down likely being that of the same visual image as when you simply stand further back.<br>( although at some point the resolution limit of a display and the image itself being presented on that display prevents that concept from being applied to "moving your eyeballs closer to the screen" for scaling up. )</p>
	</htmltext>
<tokenext>I think the author specifically is n't stating whether the scaling is correct or not - it is ; the whole story does n't relate to scaling at all , but rather color space and how -it- affects , among other , scaling .
Yes , with filtering - scaling without filtering can hardly be called scaling at all as you 're just discarding data - and for anything but multiples of 2 ( 4x , 2x , 0.5x , 0.25x , etc .
) that 'd have a whole 'nother set of problems.The author , I think , is suggesting , quite rightly so , that while...The grey square shown is the correct result ...it is not the expected ( by laymen ) nor desired ( by just about anybody ) result.The desired result for scaling down likely being that of the same visual image as when you simply stand further back .
( although at some point the resolution limit of a display and the image itself being presented on that display prevents that concept from being applied to " moving your eyeballs closer to the screen " for scaling up .
)</tokentext>
<sentencetext>I think the author specifically isn't stating whether the scaling is correct or not - it is; the whole story doesn't relate to scaling at all, but rather to color space and how -it- affects, among other things, scaling.
Yes, with filtering - scaling without filtering can hardly be called scaling at all as you're just discarding data - and for anything but multiples of 2 (4x, 2x, 0.5x, 0.25x, etc.) that'd have a whole 'nother set of problems.
The author, I think, is suggesting, quite rightly so, that while... "The grey square shown is the correct result" ...it is not the expected (by laymen) nor desired (by just about anybody) result.
The desired result for scaling down likely being that of the same visual image as when you simply stand further back.
(although at some point the resolution limit of a display and the image itself being presented on that display prevents that concept from being applied to "moving your eyeballs closer to the screen" for scaling up.)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263648</id>
	<title>Re:Could some one explain the following then</title>
	<author>drewm1980</author>
	<datestamp>1265101200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Unless you are using a high-end camera built for machine vision and scientific purposes that you are certain has a linear gamma curve, the numerical values in the image files are ~not proportional to photon counts.  Likewise, if you display a linear ramp of data as an image (say using something like MATLAB's imshow(repmat(linspace(...),...))), your monitor will not emit a linear ramp of photon intensity.  Until MATLAB, PIL, opencv, et al. provide built-in conversion functions, you're going to have to use home-rolled gamma conversion functions whenever you display data, and whenever you load data from an image file (unless you used a camera that you're sure is linear).</p></htmltext>
<tokenext>Unless you are using a high-end camera built for machine vision and scientific purposes that you are certain has a linear gamma curve , the numerical values in the image files are ~ not proportional to photon counts .
Likewise , if you display a linear ramp of data as an image ( say using something like MATLAB 's imshow ( repmat ( linspace ( ... ) ,... ) ) , your monitor will not emit a linear ramp of photon intensity .
Until MATLAB , PIL , opencv , et al provide built-in conversion functions , you 're going to have to use home-rolled gamma conversion functions whenever you display data , and whenever you load data from an image file ( unless you used a camera that you 're sure is linear ) .</tokentext>
<sentencetext>Unless you are using a high-end camera built for machine vision and scientific purposes that you are certain has a linear gamma curve, the numerical values in the image files are ~not proportional to photon counts.
Likewise, if you display a linear ramp of data as an image (say using something like MATLAB's imshow(repmat(linspace(...),...))), your monitor will not emit a linear ramp of photon intensity.
Until MATLAB, PIL, opencv, et al. provide built-in conversion functions, you're going to have to use home-rolled gamma conversion functions whenever you display data, and whenever you load data from an image file (unless you used a camera that you're sure is linear).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260066</parent>
</comment>
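The home-rolled gamma conversion the comment above calls for amounts to the standard sRGB transfer function. A minimal Python sketch, with values normalized to [0, 1] (the function names are illustrative, not from any of the libraries mentioned):

```python
# Standard sRGB transfer function: linear segment near black, a
# 2.4-power curve elsewhere (the oft-quoted "gamma 2.2" approximates
# this combined curve).

def srgb_to_linear(v):
    """Decode a normalized sRGB value (0..1) to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light (0..1) back to a normalized sRGB value."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# sRGB mid-gray is only ~21% linear light -- which is exactly why
# averaging encoded values directly goes wrong.
print(srgb_to_linear(0.5))  # ~0.214
```

Any scaling, blurring, or averaging belongs between the two calls; skipping that round trip is the bug the article describes.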
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258876</id>
	<title>Re:Old news</title>
	<author>gmueckl</author>
	<datestamp>1265123940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I remember having read the pages that you link to a long time ago. Tone mapping issues are very real and noticeable on rendered images - but in a subtle way. The lighting just tends to look wrong, as you showed so nicely. I'm trying to get it right in my software, but it's not so trivial. Even if you know you handle it correctly internally, you have to be sure to handle input and output correctly as well. And it's very easy to make a mistake there.</p><p>Still, I'm quite surprised that software like Photoshop that ought to be awfully aware of color spaces does not convert to/from linear automatically. This would be the right thing to do in my opinion. Or does this "loophole" leave some sort of artist control that I'm not aware of?</p></htmltext>
<tokenext>I remember having read the pages that you link to a long time ago .
Tone mapping issues are very real and noticeable on rendered images - but in a subtle way .
The lighting just tends to look wrong , as you showed so nicely .
I 'm trying to get it right in my software , but it 's not so trivial .
Even if you know you handle it correctly internally , you have to be sure to handle input and output correctly as well .
And it 's very easy to make a mistake there .
Still , I 'm quite surprised that software like Photoshop that ought to be awfully aware of color spaces does not convert to/from linear automatically .
This would be the right thing to do in my opinion .
Or does this " loophole " leave some sort of artist control that I 'm not aware of ?</tokentext>
<sentencetext>I remember having read the pages that you link to a long time ago.
Tone mapping issues are very real and noticeable on rendered images - but in a subtle way.
The lighting just tends to look wrong, as you showed so nicely.
I'm trying to get it right in my software, but it's not so trivial.
Even if you know you handle it correctly internally, you have to be sure to handle input and output correctly as well.
And it's very easy to make a mistake there.
Still, I'm quite surprised that software like Photoshop that ought to be awfully aware of color spaces does not convert to/from linear automatically.
This would be the right thing to do in my opinion.
Or does this "loophole" leave some sort of artist control that I'm not aware of?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259176</id>
	<title>Re:short version</title>
	<author>cynyr</author>
	<datestamp>1265125620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Also, this would seem to only be an issue for downscaling; an increase in size by 100% in both directions should yield the expected result.</htmltext>
<tokenext>Also , this would seem to only be an issue for downscaling ; an increase in size by 100 % in both directions should yield the expected result .</tokentext>
<sentencetext>Also, this would seem to only be an issue for downscaling; an increase in size by 100% in both directions should yield the expected result.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255714</id>
	<title>Is there a way to use this for steganography?</title>
	<author>marciot</author>
	<datestamp>1266948540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Hummm, can you turn this bug around and come up with an image that appears totally gray in normal size but looks like something recognizable when you scale it up or down? If so, that could be a basis of some cool steganographic hack.</p></htmltext>
<tokenext>Hummm , can you turn this bug around and come up with an image that appears totally gray in normal size but looks like something recognizable when you scale it up or down ?
If so , that could be a basis of some cool steganographic hack .</tokentext>
<sentencetext>Hummm, can you turn this bug around and come up with an image that appears totally gray in normal size but looks like something recognizable when you scale it up or down?
If so, that could be a basis of some cool steganographic hack.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664</id>
	<title>Oh dear. Linear color space again, 11 years later?</title>
	<author>Animaether</author>
	<datestamp>1266940020000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Come on, this isn't news...</p><p>Helmut Dersch (of Panorama Tools fame) certainly posted about this before;<br><a href="http://www.all-in-one.ee/~dersch/gamma/gamma.html" title="all-in-one.ee">http://www.all-in-one.ee/~dersch/gamma/gamma.html</a> [all-in-one.ee] - Interpolation and Gamma Correction</p><p>There's no factual error in the scaling algorithm, as the<nobr> <wbr></nobr>/. headline would like you to believe - it's a color space (linearity) issue; you have to do your calculations in linear space which means a typical photo off of a camera/scanner gets the inverse of an sRGB curve applied (a gamma of 0.454545 is 'close enough' if you can't do the proper color bits).  Then scale.  Then re-apply the curve.</p><p>And no - for real life imagery, nobody really cares - the JPEGs out of the cameras and subsequent re-compression to JPEG after scaling will have 'destroyed' far more data than the linearity issue.</p><p>They're nice example images in the story, but they should be called 'academic'.</p></htmltext>
<tokenext>Come on , this is n't news...Helmut Dersch ( of Panorama Tools fame ) certainly posted about this before ; http : //www.all-in-one.ee/ ~ dersch/gamma/gamma.html [ all-in-one.ee ] - Interpolation and Gamma CorrectionThere 's no factual error in the scaling algorithm , as the / .
headline would like you to believe - it 's a color space ( linearity ) issue ; you have to do your calculations in linear space which means a typical photo off of a camera/scanner gets the inverse of an sRGB curve applied ( a gamma of 0.454545 is 'close enough ' if you ca n't do the proper color bits ) .
Then scale .
Then re-apply the curve.And no - for real life imagery , nobody really cares - the JPEGs out of the cameras and subsequent re-compression to JPEG after scaling will have 'destroyed ' far more data than the linearity issue.They 're nice example images in the story , but they should be called 'academic' .</tokentext>
<sentencetext>Come on, this isn't news...
Helmut Dersch (of Panorama Tools fame) certainly posted about this before: http://www.all-in-one.ee/~dersch/gamma/gamma.html [all-in-one.ee] - Interpolation and Gamma Correction.
There's no factual error in the scaling algorithm, as the /. headline would like you to believe - it's a color space (linearity) issue; you have to do your calculations in linear space which means a typical photo off of a camera/scanner gets the inverse of an sRGB curve applied (a gamma of 0.454545 is 'close enough' if you can't do the proper color bits).
Then scale.
Then re-apply the curve.
And no - for real life imagery, nobody really cares - the JPEGs out of the cameras and subsequent re-compression to JPEG after scaling will have 'destroyed' far more data than the linearity issue.
They're nice example images in the story, but they should be called 'academic'.</sentencetext>
</comment>
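The linearize/scale/re-encode recipe in the comment above can be checked numerically with its canonical failure case: averaging a black and a white pixel, as happens when a one-pixel black/white stripe pattern is downscaled to half size. This sketch uses the gamma-2.2 approximation the comment mentions; the exact sRGB curve would give roughly 187 rather than 186:

```python
# Averaging a black (0) and a white (255) pixel two ways.
# Gamma 2.2 approximates the sRGB curve (1/2.2 ~= 0.4545).

GAMMA = 2.2

def naive_average(a, b):
    # What the buggy scalers do: average the encoded values directly.
    return round((a + b) / 2)

def gamma_aware_average(a, b):
    # Decode to linear light, average, then re-encode.
    lin = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
    return round(255 * lin ** (1 / GAMMA))

print(naive_average(0, 255))        # 128 -- too dark
print(gamma_aware_average(0, 255))  # 186 -- matches the perceived brightness
```

The ~58-level gap between the two results is why the article's stripe-pattern test images turn visibly darker after a naive downscale.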
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255488</id>
	<title>ok, now that this is fixed</title>
	<author>Anonymous</author>
	<datestamp>1266946320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>now that this is fixed, can we finally have those infinite resolution zoom-in functions they have in the movies?</p></htmltext>
<tokenext>now that this is fixed , can we finally have those infinite resolution zoom-in functions they have in the movies ?</tokentext>
<sentencetext>now that this is fixed, can we finally have those infinite resolution zoom-in functions they have in the movies?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402</id>
	<title>Oh calm down..</title>
	<author>Anonymous</author>
	<datestamp>1266938280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma.</p></div><p>Seriously!</p><p>I have a theory on why this has gone unnoticed for so long, but I'll keep it to myself...</p>
	</htmltext>
<tokenext>Photographs scaled with the affected software are degraded , because of incorrect algorithmic accounting for monitor gamma.Seriously ! I have a theory on why this has gone unnoticed for so long , but I 'll keep it to myself.. .</tokentext>
<sentencetext>Photographs scaled with the affected software are degraded, because of incorrect algorithmic accounting for monitor gamma.
Seriously!
I have a theory on why this has gone unnoticed for so long, but I'll keep it to myself...
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256026</id>
	<title>Re:Not so common image</title>
	<author>Machtyn</author>
	<datestamp>1266951540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Using The GIMP, I used the standard blur, then resized 1:2.  The result was similar to when I used the Sinc (Lanczos3) Interpolation method of resizing.  I tried a bunch of the filters and most of them had interesting results of gray.  The one exception was the different algorithms of Edge Detection.  Sobel with a maximum amount found lots of edges.  Gradient and Differential produced an image that did not appear to be interlaced.</htmltext>
<tokenext>Using The GIMP , I used the standard blur , then resized 1 : 2 .
The result was similar to when I used the Sinc ( Lanczos3 ) Interpolation method of resizing .
I tried a bunch of the filters and most of them had interesting results of gray .
The one exception was the different algorithms of Edge Detection .
Sobel with a maximum amount found lots of edges .
Gradient and Differential produced an image that did not appear to be interlaced .</tokentext>
<sentencetext>Using The GIMP, I used the standard blur, then resized 1:2.
The result was similar to when I used the Sinc (Lanczos3) Interpolation method of resizing.
I tried a bunch of the filters and most of them had interesting results of gray.
The one exception was the different algorithms of Edge Detection.
Sobel with a maximum amount found lots of edges.
Gradient and Differential produced an image that did not appear to be interlaced.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259342</id>
	<title>I thought it was just me</title>
	<author>Xabraxas</author>
	<datestamp>1265126460000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>I noticed this bug the other day but I thought perhaps I made a mistake somewhere.  I am creating a Drupal site for photos and it has a dark background.  I was just testing out the image upload and I used an unscaled image.  Later I scaled the same image down to save space and re-uploaded the image.  The brightness was noticeably different.  It's actually very hard to tell in a lot of cases, especially with a brighter background.  A dark background really makes the bug apparent.</htmltext>
<tokenext>I noticed this bug the other day but I thought perhaps I made a mistake somewhere .
I am creating a Drupal site for photos and it has a dark background .
I was just testing out the image upload and I used an unscaled image .
Later I scaled the same image down to save space and re-uploaded the image .
The brightness was noticeably different .
It 's actually very hard to tell in a lot of cases , especially with a brighter background .
A dark background really makes the bug apparent .</tokentext>
<sentencetext>I noticed this bug the other day but I thought perhaps I made a mistake somewhere.
I am creating a Drupal site for photos and it has a dark background.
I was just testing out the image upload and I used an unscaled image.
Later I scaled the same image down to save space and re-uploaded the image.
The brightness was noticeably different.
It's actually very hard to tell in a lot of cases, especially with a brighter background.
A dark background really makes the bug apparent.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31266046</id>
	<title>Re:Could some one explain the following then</title>
	<author>SETIGuy</author>
	<datestamp>1265111640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
For scientific purposes you are correct.  When used for scientific purposes, CCDs are linear detectors, and you want to do normal linear math on them, because you are interested in the number, not how the picture looks. However, human perceptions aren't linear with color or intensity, nor do human brains tend to interpolate each of the color channels across discontinuities.  Our brains expect an edge to be sharp.  When we see a boundary between red and blue, our brains don't perceive a blurry purple edge.
</p><p>
I see this as a problem of working in the wrong coordinates.  Rather than working in RGB space and doing interpolations there (which would be valid for scientific purposes), imaging software for photo-reproduction purposes should be operating in a perceived luminance vs color space.  The gamma approximation is one way to get close to what an out-of-focus eye would see.
</p></htmltext>
<tokenext>For scientific purposes you are correct .
When used for scientific purposes , CCDs are linear detectors , and you want to do normal linear math on them , because you are interested in the number , not how the picture looks . However , human perceptions are n't linear with color or intensity , nor do human brains tend to interpolate each of the color channels across discontinuities .
Our brains expect an edge to be sharp .
When we see a boundary between red and blue , our brains do n't perceive a blurry purple edge .
I see this as a problem of working in the wrong coordinates .
Rather than working in RGB space and doing interpolations there ( which would be valid for scientific purposes ) , imaging software for photo-reproduction purposes should be operating in a perceived luminance vs color space .
The gamma approximation is one way to get close to what an out of focus eye would see .</tokentext>
<sentencetext>
For scientific purposes you are correct.
When used for scientific purposes, CCDs are linear detectors, and you want to do normal linear math on them, because you are interested in the number, not how the picture looks.
However, human perceptions aren't linear with color or intensity, nor do human brains tend to interpolate each of the color channels across discontinuities.
Our brains expect an edge to be sharp.
When we see a boundary between red and blue, our brains don't perceive a blurry purple edge.
I see this as a problem of working in the wrong coordinates.
Rather than working in RGB space and doing interpolations there (which would be valid for scientific purposes), imaging software for photo-reproduction purposes should be operating in a perceived luminance vs color space.
The gamma approximation is one way to get close to what an out-of-focus eye would see.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260066</parent>
</comment>
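The red/blue edge in the comment above can be made concrete. This sketch uses the gamma-2.2 approximation as a rough stand-in for the perceived-luminance treatment the comment suggests; it works channel-wise only, so it is an illustration rather than a full perceptual color space:

```python
# Midpoint of a red/blue boundary, computed two ways.
# Gamma 2.2 is the usual approximation of the sRGB curve; a real
# perceptual space would go further than this per-channel sketch.

GAMMA = 2.2

def mix_encoded(c1, c2):
    # Naive: average the encoded RGB values channel by channel.
    return tuple(round((a + b) / 2) for a, b in zip(c1, c2))

def mix_linear(c1, c2):
    # Decode each channel to linear light, average, re-encode.
    out = []
    for a, b in zip(c1, c2):
        lin = ((a / 255) ** GAMMA + (b / 255) ** GAMMA) / 2
        out.append(round(255 * lin ** (1 / GAMMA)))
    return tuple(out)

RED, BLUE = (255, 0, 0), (0, 0, 255)
print(mix_encoded(RED, BLUE))  # (128, 0, 128) -- a murky dark purple
print(mix_linear(RED, BLUE))   # (186, 0, 186) -- noticeably brighter
```

The naive midpoint loses roughly half the light energy of the edge, which is one reason interpolated boundaries look like the "blurry purple edge" the comment describes.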
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256200
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257112
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260730
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255652
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260182
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259176
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_74</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254486
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_76</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256080
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259132
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255400
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255590
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_75</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256920
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254994
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256804
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256230
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257592
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254892
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260066
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31266046
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256960
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257020
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257360
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257398
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254956
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31296590
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254990
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257820
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258158
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257916
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254498
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256792
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255474
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258334
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256182
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256140
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257782
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255906
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258094
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256818
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255416
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257706
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260220
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_73</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254740
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258372
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256594
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31270472
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259528
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255684
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254570
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257458
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255500
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257064
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256112
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259484
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260066
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263648
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256026
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255282
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259426
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31264038
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255144
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257520
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257682
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257676
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254584
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31261702
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259292
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31264734
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256212
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258626
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254702
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255332
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255582
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255836
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263888
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255300
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256352
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256722
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255714
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259786
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254462
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258224
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258876
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254584
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31262600
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256752
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_23_2317259_77</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259360
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254664
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254956
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31296590
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255652
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256080
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256996
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255054
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255648
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257916
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257782
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258626
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258224
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260730
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254438
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254880
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255300
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255906
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259360
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259176
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255408
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256076
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254580
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254402
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257618
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256960
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31264734
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259484
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257064
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260066
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31266046
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263648
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255582
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255540
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258094
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257682
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257592
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260220
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257706
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254740
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258372
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254424
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31260182
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254498
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257020
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255056
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257398
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259292
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258876
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254570
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257458
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255288
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255714
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259786
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254504
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254994
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256722
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256200
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255836
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255474
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258334
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256594
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255382
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256804
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256920
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31264038
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256182
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256230
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256450
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254632
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259528
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256026
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256112
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256212
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254446
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255282
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259426
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256192
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254892
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254510
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255590
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255400
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255416
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255684
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254702
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255332
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255626
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255488
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254420
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254802
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257360
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257676
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31270472
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257520
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254486
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256752
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254560
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254990
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257820
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258158
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258628
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31263888
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256352
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256792
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255144
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31258784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31259132
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256818
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255076
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255500
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31256140
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31257112
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31254584
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31261702
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31262600
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_23_2317259.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_23_2317259.31255306
</commentlist>
</conversation>
