<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_12_04_0329231</id>
	<title>One Way To Save Digital Archives From File Corruption</title>
	<author>timothy</author>
	<datestamp>1259931000000</datestamp>
	<htmltext>storagedude points out this article about one of the perils of digital storage, the author of which  <i>"says massive digital archives are threatened by simple bit errors that can render whole files useless. The article notes that analog pictures and film can degrade and still be usable; why can't the same be true of digital files? The solution proposed by the author: <a href="http://www.enterprisestorageforum.com/technology/features/article.php/3850951">two headers and error correction code (ECC) in every file</a>."</i></htmltext>
<tokentext>storagedude points out this article about one of the perils of digital storage , the author of which " says massive digital archives are threatened by simple bit errors that can render whole files useless .
The article notes that analog pictures and film can degrade and still be usable ; why ca n't the same be true of digital files ?
The solution proposed by the author : two headers and error correction code ( ECC ) in every file .
"</tokentext>
<sentencetext>storagedude points out this article about one of the perils of digital storage, the author of which  "says massive digital archives are threatened by simple bit errors that can render whole files useless.
The article notes that analog pictures and film can degrade and still be usable; why can't the same be true of digital files?
The solution proposed by the author: two headers and error correction code (ECC) in every file.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322968</id>
	<title>Re:What files does a single bit error destroy?</title>
	<author>netsharc</author>
	<datestamp>1259937420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'd venture to say TrueCrypt containers, when that corruption occurs at the place where they store the encrypted symmetrical key. Depending on the size of said container it could be the whole harddisk.<nobr> <wbr></nobr>:)</p></htmltext>
<tokentext>I 'd venture to say TrueCrypt containers , when that corruption occurs at the place where they store the encrypted symmetrical key .
Depending on the size of said container it could be the whole harddisk .
: )</tokentext>
<sentencetext>I'd venture to say TrueCrypt containers, when that corruption occurs at the place where they store the encrypted symmetrical key.
Depending on the size of said container it could be the whole harddisk.
:)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325622</id>
	<title>Re:Incorrect...</title>
	<author>omnichad</author>
	<datestamp>1259950680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Because what's left after zeroing a drive is not even above the noise floor.  If each "spot" on the disk had more than 2 potential states, we'd lose density over the hardware requirements of stricter thresholds.  You gain nothing.</p></htmltext>
<tokentext>Because what 's left after zeroing a drive is not even above the noise floor .
If each " spot " on the disk had more than 2 potential states , we 'd lose density over the hardware requirements of stricter thresholds .
You gain nothing .</tokentext>
<sentencetext>Because what's left after zeroing a drive is not even above the noise floor.
If each "spot" on the disk had more than 2 potential states, we'd lose density over the hardware requirements of stricter thresholds.
You gain nothing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323442</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30335974</id>
	<title>Re:What files does a single bit error destroy?</title>
	<author>sjames</author>
	<datestamp>1260036060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If the bit error is the inode number in a directory entry, the whole file goes missing. If it's in the length field of an inode, half the file can go missing. If the application that reads the file isn't robust, it may not matter where the bit error is, the application will freak out and die if you try to open the file. Video and some audio can be really screwed up if the bit flip is in the header.</p></htmltext>
<tokentext>If the bit error is the inode number in a directory entry , the whole file goes missing .
If it 's in the length field of an inode , half the file can go missing .
If the application that reads the file is n't robust , it may not matter where the bit error is , the application will freak out and die if you try to open the file .
Video and some audio can be really screwed up if the bit flip is in the header .</tokentext>
<sentencetext>If the bit error is the inode number in a directory entry, the whole file goes missing.
If it's in the length field of an inode, half the file can go missing.
If the application that reads the file isn't robust, it may not matter where the bit error is, the application will freak out and die if you try to open the file.
Video and some audio can be really screwed up if the bit flip is in the header.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323276</id>
	<title>Re:It's that computer called the brain.</title>
	<author>ILongForDarkness</author>
	<datestamp>1259939580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>And how well did that work for your last corrupted text file? Or a printer job that the printer didn't know how to handle? My guess is you could pick out a few words and the rest was random garble. The mind is good at filtering out noise but it is an intrinsically hard problem to do a similar thing with a computer. Say a random bit is missed and the whole file ends up shifted one to the left, how does the computer know that the combinations of pixel values it is displaying should start one bit out of sync so that the still existing data "looks" good? Similarly with a text file, all the remaining bits could be valid characters, how is a computer to know what characters to show other than having the correct data?</htmltext>
<tokentext>And how well did that work for your last corrupted text file ?
Or a printer job that the printer did n't know how to handle ?
My guess is you could pick out a few words and the rest was random garble .
The mind is good at filtering out noise but it is an intrinsically hard problem to do a similar thing with a computer .
Say a random bit is missed and the whole file ends up shifted one to the left , how does the computer know that the combinations of pixel values it is displaying should start one bit out of sync so that the still existing data " looks " good ?
Similarly with a text file , all the remaining bits could be valid characters , how is a computer to know what characters to show other than having the correct data ?</tokentext>
<sentencetext>And how well did that work for your last corrupted text file?
Or a printer job that the printer didn't know how to handle?
My guess is you could pick out a few words and the rest was random garble.
The mind is good at filtering out noise but it is an intrinsically hard problem to do a similar thing with a computer.
Say a random bit is missed and the whole file ends up shifted one to the left, how does the computer know that the combinations of pixel values it is displaying should start one bit out of sync so that the still existing data "looks" good?
Similarly with a text file, all the remaining bits could be valid characters, how is a computer to know what characters to show other than having the correct data?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325346</id>
	<title>Re:ZFS</title>
	<author>Anonymous</author>
	<datestamp>1259949420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Sorry, I don't have mod points, but I'd give you about +100 Insightful and just shut down posting for this whole story. That's exactly what I was going to say: "One acronym: ZFS" Why look further when it gives you exactly what you want? Oh, you want GOOD IMPLEMENTATION and DIRT CHEAP and SIMPLE and FLEXIBLE and FULL OF OPTIONS and RIGID (yes, I'm starting to notice the oxymorons, too) solution? Well then, fuck you!<br>Or you can stick to ZFS (which costs a lot of money for a software solution or a lot of time for self-teaching).</p></htmltext>
<tokentext>Sorry , I do n't have mod points , but I 'd give you about + 100 Insightful and just shut down posting for this whole story .
That 's exactly what I was going to say : " One acronym : ZFS " Why look further when it gives you exactly what you want ?
Oh , you want GOOD IMPLEMENTATION and DIRT CHEAP and SIMPLE and FLEXIBLE and FULL OF OPTIONS and RIGID ( yes , I 'm starting to notice the oxymorons , too ) solution ?
Well then , fuck you ! Or you can stick to ZFS ( which costs a lot of money for a software solution or a lot of time for self-teaching ) .</tokentext>
<sentencetext>Sorry, I don't have mod points, but I'd give you about +100 Insightful and just shut down posting for this whole story.
That's exactly what I was going to say: "One acronym: ZFS" Why look further when it gives you exactly what you want?
Oh, you want GOOD IMPLEMENTATION and DIRT CHEAP and SIMPLE and FLEXIBLE and FULL OF OPTIONS and RIGID (yes, I'm starting to notice the oxymorons, too) solution?
Well then, fuck you!
Or you can stick to ZFS (which costs a lot of money for a software solution or a lot of time for self-teaching).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323118</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324078</id>
	<title>Ecc?</title>
	<author>rjolley</author>
	<datestamp>1259944020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What's with the useless acronym at the end of the summary?  I hate useless and made up acronyms (umua)</htmltext>
<tokentext>What 's with the useless acronym at the end of the summary ?
I hate useless and made up acronyms ( umua )</tokentext>
<sentencetext>What's with the useless acronym at the end of the summary?
I hate useless and made up acronyms (umua)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Phreakiture</author>
	<datestamp>1259940360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p>What we need is a smarter computer that says, "I don't know what this is supposed to be, but here's my best guess," and displays noise. Let the brain then takeover and mentally remove the noise from the audio or image.</p></div></blockquote><p>Audio CDs have always done this.  Audio CDs are also uncompressed*.</p><p>The problem, I suspect, is that we have come to rely on a lot of data compression, particularly where video is concerned.  I'm not saying this is the wrong choice, necessarily, because video can become ungodly huge without it (NTSC SD video -- 720 x 480 x 29.97 -- in the 4:2:2 colour space, 8 bits per pixel per plane, will consume 69.5 GiB an hour without compression), but maybe we didn't give enough thought to stream corruption.</p><p>Mini DV video tape, when run in SD, uses no compression on the audio, and the video is only lightly compressed, using a DCT-based codec, with no delta coding.  In practical terms, what this means is that one corrupted frame of video doesn't cascade into future frames.  If my camcorder gets a wrinkle in the tape, it will affect the frames recorded on the wrinkle, and no others.  It also makes a best-guess effort to reconstruct the frame.  This task may not be impossible with more dense codecs that <i>do</i> use delta coding and motion compensation (MPEG, DiVX, etc), but it is certainly made far more difficult.</p><p>Incidentally, even digital cinemas are using compression.  It is a no-delta compression, but the individual frames are compressed in a manner akin to JPEGs, and the audio is compressed either using DTS or AC3 or one of their variants in most cinemas.  The difference, of course, is that the cinemas <i>must</i> provide a good presentation.  If they fail to do so, people will stop coming.  If the presentation isn't better than watching TV/DVD/BluRay at home, then why pay the $11?</p><p>(* I refer here to data compression, not dynamic range compression.  
Dynamic range compression is applied way too much in most audio media)</p>
	</htmltext>
<tokentext>What we need is a smarter computer that says , " I do n't know what this is supposed to be , but here 's my best guess , " and displays noise .
Let the brain then takeover and mentally remove the noise from the audio or image .
Audio CDs have always done this .
Audio CDs are also uncompressed * .
The problem , I suspect , is that we have come to rely on a lot of data compression , particularly where video is concerned .
I 'm not saying this is the wrong choice , necessarily , because video can become ungodly huge without it ( NTSC SD video -- 720 x 480 x 29.97 -- in the 4 : 2 : 2 colour space , 8 bits per pixel per plane , will consume 69.5 GiB an hour without compression ) , but maybe we did n't give enough thought to stream corruption .
Mini DV video tape , when run in SD , uses no compression on the audio , and the video is only lightly compressed , using a DCT-based codec , with no delta coding .
In practical terms , what this means is that one corrupted frame of video does n't cascade into future frames .
If my camcorder gets a wrinkle in the tape , it will affect the frames recorded on the wrinkle , and no others .
It also makes a best-guess effort to reconstruct the frame .
This task may not be impossible with more dense codecs that do use delta coding and motion compensation ( MPEG , DiVX , etc ) , but it is certainly made far more difficult .
Incidentally , even digital cinemas are using compression .
It is a no-delta compression , but the individual frames are compressed in a manner akin to JPEGs , and the audio is compressed either using DTS or AC3 or one of their variants in most cinemas .
The difference , of course , is that the cinemas must provide a good presentation .
If they fail to do so , people will stop coming .
If the presentation is n't better than watching TV/DVD/BluRay at home , then why pay the $ 11 ?
( * I refer here to data compression , not dynamic range compression .
Dynamic range compression is applied way too much in most audio media )</tokentext>
<sentencetext>What we need is a smarter computer that says, "I don't know what this is supposed to be, but here's my best guess," and displays noise.
Let the brain then takeover and mentally remove the noise from the audio or image.
Audio CDs have always done this.
Audio CDs are also uncompressed*.
The problem, I suspect, is that we have come to rely on a lot of data compression, particularly where video is concerned.
I'm not saying this is the wrong choice, necessarily, because video can become ungodly huge without it (NTSC SD video -- 720 x 480 x 29.97 -- in the 4:2:2 colour space, 8 bits per pixel per plane, will consume 69.5 GiB an hour without compression), but maybe we didn't give enough thought to stream corruption.
Mini DV video tape, when run in SD, uses no compression on the audio, and the video is only lightly compressed, using a DCT-based codec, with no delta coding.
In practical terms, what this means is that one corrupted frame of video doesn't cascade into future frames.
If my camcorder gets a wrinkle in the tape, it will affect the frames recorded on the wrinkle, and no others.
It also makes a best-guess effort to reconstruct the frame.
This task may not be impossible with more dense codecs that do use delta coding and motion compensation (MPEG, DiVX, etc), but it is certainly made far more difficult.
Incidentally, even digital cinemas are using compression.
It is a no-delta compression, but the individual frames are compressed in a manner akin to JPEGs, and the audio is compressed either using DTS or AC3 or one of their variants in most cinemas.
The difference, of course, is that the cinemas must provide a good presentation.
If they fail to do so, people will stop coming.
If the presentation isn't better than watching TV/DVD/BluRay at home, then why pay the $11?
(* I refer here to data compression, not dynamic range compression.
Dynamic range compression is applied way too much in most audio media)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323238</id>
	<title>Linearity is the real problem</title>
	<author>designlabz</author>
	<datestamp>1259939460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Problem is not in error correction, but actually in linearity of data. Using only 256 pixels you could represent an image brain can interpret. Problem is, brain can not interpret an image from first 256 pixels, as that would probably be a line half as long as the image width, consisting of mostly irrelevant data.<br>
If I would want to make a fail proof image, I would split it to squares of, say, 9 (3x3) pixels, and then put only central pixel (every 5th px) values in byte stream. Once that is done repeat that for surrounding pixels in the block. In that way, even if part of data is lost, program would have at least one of the pixels in a 9x9 block and it could use one of nearby pixels as a substitute, leaving up to person to try and figure out the data. You could repeat subdivision once again, achieving pseudo random order of bytes. <br>
And this is just a mock up of what could be done to improve data safety in images without increasing the actual file size.<br>

In old days of internet, designers were using images in lower resolution, to lower page loading time, and then gradually exchanging images with higher res versions once those loaded. If it had sense to do it then, maybe we could now use integrated preview images to represent the average sector of pixels in the image, and then reverse calculate missing ones using pixels we have.<br>

This could also work for audio files, and maybe even archives. I know I could still read the book even if every fifth letter was replaced by an incorrect one. <br>
<br>
Cheers,<br>
DLabz</htmltext>
<tokentext>Problem is not in error correction , but actually in linearity of data .
Using only 256 pixels you could represent an image brain can interpret .
Problem is , brain can not interpret an image from first 256 pixels , as that would probably be a line half as long as the image width , consisting of mostly irrelevant data .
If I would want to make a fail proof image , I would split it to squares of , say , 9 ( 3x3 ) pixels , and then put only central pixel ( every 5th px ) values in byte stream .
Once that is done repeat that for surrounding pixels in the block .
In that way , even if part of data is lost , program would have at least one of the pixels in a 9x9 block and it could use one of nearby pixels as a substitute , leaving up to person to try and figure out the data .
You could repeat subdivision once again , achieving pseudo random order of bytes .
And this is just a mock up of what could be done to improve data safety in images without increasing the actual file size .
In old days of internet , designers were using images in lower resolution , to lower page loading time , and then gradually exchanging images with higher res versions once those loaded .
If it had sense to do it then , maybe we could now use integrated preview images to represent the average sector of pixels in the image , and then reverse calculate missing ones using pixels we have .
This could also work for audio files , and maybe even archives .
I know I could still read the book even if every fifth letter was replaced by an incorrect one .
Cheers , DLabz</tokentext>
<sentencetext>Problem is not in error correction, but actually in linearity of data.
Using only 256 pixels you could represent an image brain can interpret.
Problem is, brain can not interpret an image from first 256 pixels, as that would probably be a line half as long as the image width, consisting of mostly irrelevant data.
If I would want to make a fail proof image, I would split it to squares of, say, 9 (3x3) pixels, and then put only central pixel (every 5th px) values in byte stream.
Once that is done repeat that for surrounding pixels in the block.
In that way, even if part of data is lost, program would have at least one of the pixels in a 9x9 block and it could use one of nearby pixels as a substitute, leaving up to person to try and figure out the data.
You could repeat subdivision once again, achieving pseudo random order of bytes.
And this is just a mock up of what could be done to improve data safety in images without increasing the actual file size.
In old days of internet, designers were using images in lower resolution, to lower page loading time, and then gradually exchanging images with higher res versions once those loaded.
If it had sense to do it then, maybe we could now use integrated preview images to represent the average sector of pixels in the image, and then reverse calculate missing ones using pixels we have.
This could also work for audio files, and maybe even archives.
I know I could still read the book even if every fifth letter was replaced by an incorrect one.
Cheers,
DLabz</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322792</id>
	<title>What about the "block errors"?</title>
	<author>Anonymous</author>
	<datestamp>1259935680000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
<htmltext>Most of the storage media in common use (disks, tapes, CD/DVD-R) already do use ECC at sector or block level and will fix "single bit" errors at firmware level transparently. What is more of an issue at application level are "missing block" errors; when the low-level ECC fails and the storage device signals "unreadable sector" and one or more blocks of data are lost.<p>
Of course this can be fixed by "block redundancy" (like RAID does), "block recovery checksums" or old-fashioned backups.</p></htmltext>
<tokentext>Most of the storage media in common use ( disks , tapes , CD/DVD-R ) already do use ECC at sector or block level and will fix " single bit " errors at firmware level transparently .
What is more of an issue at application level are " missing block " errors ; when the low-level ECC fails and the storage device signals " unreadable sector " and one or more blocks of data are lost .
Of course this can be fixed by " block redundancy " ( like RAID does ) , " block recovery checksums " or old-fashioned backups .</tokentext>
<sentencetext>Most of the storage media in common use (disks, tapes, CD/DVD-R) already do use ECC at sector or block level and will fix "single bit" errors at firmware level transparently.
What is more of an issue at application level are "missing block" errors; when the low-level ECC fails and the storage device signals "unreadable sector" and one or more blocks of data are lost.
Of course this can be fixed by "block redundancy" (like RAID does), "block recovery checksums" or old-fashioned backups.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323118</id>
	<title>ZFS</title>
	<author>DiSKiLLeR</author>
	<datestamp>1259938680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just use ZFS, it's already been done.</p><p>kthxbai.</p></htmltext>
<tokentext>Just use ZFS , it 's already been done .
kthxbai .</tokentext>
<sentencetext>Just use ZFS, it's already been done.
kthxbai.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325374</id>
	<title>Re:PNG was designed to be able to do this</title>
	<author>Anonymous</author>
	<datestamp>1259949540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>The PNG image format divides the image data into "chunks", typically 8kbytes each</i><br>Do you have a source for this claim? I was under the impression that unless doing streaming generation it was more normal to use just one big chunk for all the image data.</p></htmltext>
<tokentext>The PNG image format divides the image data into " chunks " , typically 8kbytes each
Do you have a source for this claim ? I was under the impression that unless doing streaming generation it was more normal to use just one big chunk for all the image data .</tokentext>
<sentencetext>The PNG image format divides the image data into "chunks", typically 8kbytes each
Do you have a source for this claim? I was under the impression that unless doing streaming generation it was more normal to use just one big chunk for all the image data.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322734</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323326</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Anonymous</author>
	<datestamp>1259939940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sure, but you'd need to not compress anything and introduce redundancy. Most people prefer using efficient formats and doing error checks and corrections elsewhere (in the filesystem, by external correction codes - par2 springs to mind, etc).</p><p>Here's a simple image format that will survive bit flip style corruption:</p><p>Every pixel is stored with 96 bits, the first 32 is the width of the image, the next 32 is the height, the next 8 bits is the R, the next 8 bits is the G, the next 8 bits is the B, and the last 8 bits is the alpha channel.</p><p>To read the image every 96 bit chunk is read and the most common 64 bit prefix is used as the width/height. And the 24 bits of color data for each pixel is used as is.</p><p>That will survive large amounts of bit corruption just fine - it won't survive shifting (adding 32 bits of extra data at the start of the file, for example). It would also be ridiculously large compared with the image file formats we do actually use.</p></htmltext>
<tokentext>Sure , but you 'd need to not compress anything and introduce redundancy .
Most people prefer using efficient formats and doing error checks and corrections elsewhere ( in the filesystem , by external correction codes - par2 springs to mind , etc ) .
Here 's a simple image format that will survive bit flip style corruption : Every pixel is stored with 96 bits , the first 32 is the width of the image , the next 32 is the height , the next 8 bits is the R , the next 8 bits is the G , the next 8 bits is the B , and the last 8 bits is the alpha channel .
To read the image every 96 bit chunk is read and the most common 64 bit prefix is used as the width/height .
And the 24 bits of color data for each pixel is used as is .
That will survive large amounts of bit corruption just fine - it wo n't survive shifting ( adding 32 bits of extra data at the start of the file , for example ) .
It would also be ridiculously large compared with the image file formats we do actually use .</tokentext>
<sentencetext>Sure, but you'd need to not compress anything and introduce redundancy.
Most people prefer using efficient formats and doing error checks and corrections elsewhere (in the filesystem, by external correction codes - par2 springs to mind, etc).
Here's a simple image format that will survive bit flip style corruption: Every pixel is stored with 96 bits, the first 32 is the width of the image, the next 32 is the height, the next 8 bits is the R, the next 8 bits is the G, the next 8 bits is the B, and the last 8 bits is the alpha channel.
To read the image every 96 bit chunk is read and the most common 64 bit prefix is used as the width/height.
And the 24 bits of color data for each pixel is used as is.
That will survive large amounts of bit corruption just fine - it won't survive shifting (adding 32 bits of extra data at the start of the file, for example).
It would also be ridiculously large compared with the image file formats we do actually use.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323588</id>
	<title>zfec, Tahoe-LAFS</title>
	<author>Zooko</author>
	<datestamp>1259941500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>zfec is much, much faster than par2: http://allmydata.org/trac/zfec</p><p>Tahoe-LAFS uses zfec, encryption, integrity checking based on SHA-256, digital signatures based on RSA, and peer-to-peer networking to take a bunch of hard disks and make them into a single virtual hard disk which is extremely robust: http://allmydata.org/trac/tahoe</p></htmltext>
<tokentext>zfec is much , much faster than par2 : http://allmydata.org/trac/zfec
Tahoe-LAFS uses zfec , encryption , integrity checking based on SHA-256 , digital signatures based on RSA , and peer-to-peer networking to take a bunch of hard disks and make them into a single virtual hard disk which is extremely robust : http://allmydata.org/trac/tahoe</tokentext>
<sentencetext>zfec is much, much faster than par2: http://allmydata.org/trac/zfec
Tahoe-LAFS uses zfec, encryption, integrity checking based on SHA-256, digital signatures based on RSA, and peer-to-peer networking to take a bunch of hard disks and make them into a single virtual hard disk which is extremely robust: http://allmydata.org/trac/tahoe</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323362</id>
	<title>Re:What files does a single bit error destroy?</title>
	<author>EllisDees</author>
	<datestamp>1259940180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've got a 10 gig<nobr> <wbr></nobr>.tar.bz2 file that I've only been partially able to recover due to a couple of bad blocks on a hard drive. I ran bzip2recover on it, which broke it into many, many pieces, and then put them back together into a partially recoverable tar file. Now I just can't figure out how to get past the corrupt pieces.:(</p></htmltext>
<tokentext>I 've got a 10 gig .tar.bz2 file that I 've only been partially able to recover due to a couple of bad blocks on a hard drive .
I ran bzip2recover on it , which broke it into many , many pieces , and then put them back together into a partially recoverable tar file .
Now I just ca n't figure out how to get past the corrupt pieces .
: (</tokentext>
<sentencetext>I've got a 10 gig .tar.bz2 file that I've only been partially able to recover due to a couple of bad blocks on a hard drive.
I ran bzip2recover on it, which broke it into many, many pieces, and then put them back together into a partially recoverable tar file.
Now I just can't figure out how to get past the corrupt pieces.
:(</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</parent>
</comment>
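If it helps anyone in the same spot: tar members start on 512-byte boundaries, and the header block carries the magic "ustar" at offset 257, so a partially recovered tar can be scanned for intact member headers. A rough sketch (the helper name is made up, not a real tool, and pax extended headers would need extra handling):

```python
# Scan a damaged tar image for intact 512-byte member headers by
# looking for the "ustar" magic at offset 257 of each block.
import io
import tarfile

def find_tar_members(data: bytes):
    names = []
    for off in range(0, len(data) - 512, 512):
        block = data[off:off + 512]
        if block[257:262] == b"ustar":
            # header field 0 is the NUL-terminated member name
            names.append(block[:100].split(b"\x00", 1)[0].decode("utf-8", "replace"))
    return names

# Build a small tar in memory, corrupt a data byte, and scan it.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    for name, payload in [("a.txt", b"hello"), ("b.txt", b"world")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tf.addfile(info, io.BytesIO(payload))
raw = bytearray(buf.getvalue())
raw[700] ^= 0xFF  # simulated bad byte inside a.txt's data block
assert find_tar_members(bytes(raw)) == ["a.txt", "b.txt"]
```

Knowing which headers survive tells you where the readable members begin, so you can carve out everything that isn't sitting on a bad block.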
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324272</id>
	<title>Three Headers, Not Two</title>
	<author>dwye</author>
	<datestamp>1259944980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; The solution proposed by the author: two headers and error correction code (ECC) in every file."</p><p>When there are two possibilities, which one do you choose?  Three allows the software to have a vote among the headers, and ignore or correct the loser (assuming that there IS one, of course).</p><p>Also, keeping the headers in text, rather than using complicated encoding schemes to save space where it doesn't much matter, is probably a good idea as well.  Semantic sugar is your friend here.</p></htmltext>
<tokenext>&gt; The solution proposed by the author : two headers and error correction code ( ECC ) in every file .
" When there are two possibilities , which one do you choose ?
Three allows the software to have a vote among the headers , and ignore or correct the loser ( assuming that there IS one , of course ) .Also , keeping the headers in text , rather than using complicated encoding schemes to save space where it does n't much matter , is probably a good idea , as well .
Semantic sugar is your friend here .</tokentext>
<sentencetext>&gt; The solution proposed by the author: two headers and error correction code (ECC) in every file.
"When there are two possibilities, which one do you choose?
Three allows the software to have a vote among the headers, and ignore or correct the loser (assuming that there IS one, of course).Also, keeping the headers in text, rather than using complicated encoding schemes to save space where it doesn't much matter, is probably a good idea, as well.
Semantic sugar is your friend here.</sentencetext>
</comment>
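The vote the parent describes is only a few lines of code. A minimal sketch (bytewise two-of-three majority; the format string is a placeholder):

```python
# Keep three copies of the header; a bytewise majority vote outvotes
# any single corrupted copy.
def vote(h1: bytes, h2: bytes, h3: bytes) -> bytes:
    out = bytearray()
    for a, b, c in zip(h1, h2, h3):
        if a == b or a == c:       # at least two of three agree on a's value
            out.append(a)
        elif b == c:               # a is the odd one out
            out.append(b)
        else:                      # all three differ: no quorum
            raise ValueError("no majority at this byte")
    return bytes(out)

good = b"MYFORMAT v1"
damaged = b"MYF\x00RMAT v1"        # one copy took a single-byte hit
assert vote(good, damaged, good) == good
```

With only two copies you can detect a mismatch but not say which copy is right, which is the parent's whole point.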
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30326028</id>
	<title>Re:To much reinvention</title>
	<author>dgatwood</author>
	<datestamp>1259952420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I disagree.  With compressed files, it's even more important.  Compression and dense binary formats are precisely the reason that ECC on files is useful.  A one-bit error in a text file does not render the entire file unreadable.  A one-bit error in an XML file might, but can be easily detected with xmllint and fixed.</p><p>By contrast, a one-bit error in a run-length-encoded file format like JPEG causes the entire rest of the file to be unusable.  And even if you stopped using run-length coding, a one-bit error in the right place causes an entire block to be scaled wrong, though this is more easily repaired.</p><p>Similarly, break an I-frame in an MPEG stream and you've just corrupted several seconds of video.</p></htmltext>
<tokenext>I disagree .
With compressed files , it 's even more important .
Compression and dense binary formats are precisely the reason that ECC on files is useful .
A one-bit error in a text file does not render the entire file unreadable .
A one-bit error in an XML file might , but can be easily detected with xmllint and fixed.By contrast , a one-bit error in a run-length-encoded file format like JPEG causes the entire rest of the file to be unusable .
And even if you stopped using run-length coding , a one-bit error in the right place causes an entire block to be scaled wrong , though this is more easily repaired.Similarly , break an I-frame in an MPEG stream and you 've just corrupted several seconds of video .</tokentext>
<sentencetext>I disagree.
With compressed files, it's even more important.
Compression and dense binary formats are precisely the reason that ECC on files is useful.
A one-bit error in a text file does not render the entire file unreadable.
A one-bit error in an XML file might, but can be easily detected with xmllint and fixed.By contrast, a one-bit error in a run-length-encoded file format like JPEG causes the entire rest of the file to be unusable.
And even if you stopped using run-length coding, a one-bit error in the right place causes an entire block to be scaled wrong, though this is more easily repaired.Similarly, break an I-frame in an MPEG stream and you've just corrupted several seconds of video.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323076</parent>
</comment>
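The contrast the parent draws is easy to demonstrate, with zlib standing in for any dense compressed format (the choice of zlib is illustrative; JPEG and MPEG fail for the analogous reason):

```python
# One flipped bit in plain text corrupts one character; the same flip
# in a compressed stream typically makes the whole thing undecodable.
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 20
flipped_text = bytearray(text)
flipped_text[10] ^= 0x01          # flip one bit in the plain text
# still readable: exactly one byte differs from the original
assert sum(a != b for a, b in zip(text, flipped_text)) == 1

packed = bytearray(zlib.compress(text))
packed[0] ^= 0x01                 # flip one bit, here in the zlib header
try:
    zlib.decompress(bytes(packed))
    survived = True
except zlib.error:
    survived = False
assert not survived               # the entire stream is now unreadable
```

A flip deeper in the stream is just as fatal in practice: either the decoder hits an invalid code, or the trailing checksum no longer matches.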
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323120</id>
	<title>Cloud computing provides an opportunity</title>
	<author>davide marney</author>
	<datestamp>1259938740000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>As we're on the cusp of moving much of our data to the cloud, we've got the perfect opportunity to improve the resilience of information storage for a lot of people at the same time.</p></htmltext>
<tokenext>As we 're on the cusp of moving much of our data to the cloud , we 've got the perfect opportunity to improve the resilience of information storage for a lot of people at the same time .</tokentext>
<sentencetext>As we're on the cusp of moving much of our data to the cloud, we've got the perfect opportunity to improve the resilience of information storage for a lot of people at the same time.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323082</id>
	<title>Re:What about the "block errors"?</title>
	<author>Prof.Phreak</author>
	<datestamp>1259938320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It means we need error correction at every level---error correction at the physical device (already in place, more or less) and error correction at the file system level (so even if a few blocks from a file are missing, the file system auto-corrects itself and still functions---up to some point, of course).</p></htmltext>
<tokenext>It means we need error correction at every level---error correction at physical device ( already in place , more or less ) and error correction at file system level ( so even if a few blocks from a file are missing , the file system auto-corrects itself and still functions---up to some point of course ) .</tokentext>
<sentencetext>It means we need error correction at every level---error correction at physical device (already in place, more or less) and error correction at file system level (so even if a few blocks from a file are missing, the file system auto-corrects itself and still functions---up to some point of course).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322792</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323442</id>
	<title>Incorrect...</title>
	<author>Anonymous</author>
	<datestamp>1259940660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In the context of hard drives: hard drives are NOT digital.<br>They store a value via "majority".  If a bit is overwritten, the previous value can sometimes be recovered with the right software; after several overwrites you would need some specific hardware.</p><p>Why don't OSes (and manufacturers) take advantage of this?  There are effectively 2 layers per disc you could use to store data without degradation.<br>As long as the disc is kept in good condition, you can use this extra layer.<br>Instead, what we have are companies squeezing sectors closer together, making this method less reliable the higher the density.<br>Stop treating a magnetic disc as an optical disc; you can store much more on it.<br>This could drop the cost of drives significantly and still retain the same size as we currently have. (1 gig, 2 gigs at a stretch, but I still wouldn't risk it)</p></htmltext>
<tokenext>In the context of Hard Drives .
Hard Drives are NOT digital.They store a value via " majority " .
If that bit is overwritten , it can easily be recovered with the right software , several times and you are maybe needing some specific hardware.Why do n't OSes ( and manufacturers ) take advantage of this ?
There are effectively 2 layers per disc you could use to store data without degradation.As long as the disc is kept in good condition , you can use this extra layer.Instead , what we have are companies squeezing sectors closer together and making this method unreliable the higher the density.Stop treating a magnetic disc as an optical disc , you can store much more on it.This could drop the cost of drives significantly and still retain the same size as we currently have .
( 1gig , 2 gigs at a stretch but i still would n't risk it )</tokentext>
<sentencetext>In the context of Hard Drives.
Hard Drives are NOT digital.They store a value via "majority".
If that bit is overwritten, it can easily be recovered with the right software, several times and you are maybe needing some specific hardware.Why don't OSes (and manufacturers) take advantage of this?
There are effectively 2 layers per disc you could use to store data without degradation.As long as the disc is kept in good condition, you can use this extra layer.Instead, what we have are companies squeezing sectors closer together and making this method unreliable the higher the density.Stop treating a magnetic disc as an optical disc, you can store much more on it.This could drop the cost of drives significantly and still retain the same size as we currently have.
(1gig, 2 gigs at a stretch but i still wouldn't risk it)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323758</id>
	<title>Re:What files does a single bit error destroy?</title>
	<author>Twinbee</author>
	<datestamp>1259942460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>With that in mind, what program would be best to use? I take it WinRAR (which uses RAR and zip compression), or 7-Zip would be of no use...?</p></htmltext>
<tokenext>With that in mind , what program would be best to use ?
I take it WinRAR ( which uses RAR and zip compression ) , or 7-Zip would be of no use... ?</tokentext>
<sentencetext>With that in mind, what program would be best to use?
I take it WinRAR (which uses RAR and zip compression), or 7-Zip would be of no use...?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30326086</id>
	<title>Digital is just like analog</title>
	<author>Anonymous</author>
	<datestamp>1259952720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Let me qualify. Digital is pretty much like analog when you're dealing with data that's designed to be consumed non-digitally anyway, say like the smile of Mona Lisa or a pop song. A single off byte shouldn't matter to human eyes and ears. In contrast, an encrypted file is meant to be consumed digitally by the decrypting software before a human reads, watches, or listens to it: every bit counts.</p><p>So the problem is clearly with the type of digital storage being used. For most video or audio files, I find that preserving the first and last few megabytes should be enough for the file to be partly accessed. Any errors in between would, or should, result in a digital blip that is no different from a smudge on a piece of paper where only the smudged part is rendered illegible. (I add "should" because in some real cases the player program crashes.)</p><p>Of course, severely degraded media would be a problem. But how is this different from getting your precious million-dollar painting damaged in a fire or flood?</p></htmltext>
<tokenext>Let me qualify .
Digital is pretty much like analog when you 're dealing with data that 's designed to be consumed non-digitally anyway , say like the smile of Mona Lisa or a pop song .
A single off byte should n't matter to human eyes and ears .
In contrast , an encrypted file is meant to be consumed digitally by the decrypting software before a human reads , watches , or listens to it : every bit counts.So the problem is clearly with the type of digital storage being used .
For most video or audio files , I find that preserving the first and last few megabytes should be enough for the file to be partly accessed .
Any errors in between would , or should , result in a digital blip that is no different from a smudge on a piece of paper where only the smudged part is rendered illegible .
( I add " should " because in some real cases the player program crashes .
) Of course , severely degraded media would be a problem .
But how is this different from getting your precious million-dollar painting damaged in a fire or flood ?</tokentext>
<sentencetext>Let me qualify.
Digital is pretty much like analog when you're dealing with data that's designed to be consumed non-digitally anyway, say like the smile of Mona Lisa or a pop song.
A single off byte shouldn't matter to human eyes and ears.
In contrast, an encrypted file is meant to be consumed digitally by the decrypting software before a human reads, watches, or listens to it: every bit counts.So the problem is clearly with the type of digital storage being used.
For most video or audio files, I find that preserving the first and last few megabytes should be enough for the file to be partly accessed.
Any errors in between would, or should, result in a digital blip that is no different from a smudge on a piece of paper where only the smudged part is rendered illegible.
(I add "should" because in some real cases the player program crashes.
)Of course, severely degraded media would be a problem.
But how is this different from getting your precious million-dollar painting damaged in a fire or flood?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323680</id>
	<title>Re:To much reinvention</title>
	<author>Phreakiture</author>
	<datestamp>1259941980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A simple solution would be to add a file full of redundancy data alongside the original on the archival media.  A simple application could be used to repair the file if it becomes damaged, or test it for damage before you go to use it, but the original format of the file remains unchanged, and your recovery system is file system agnostic.</p></htmltext>
<tokenext>A simple solution would be to add a file full of redundancy data alongside the original on the archival media .
A simple application could be used to repair the file if it becomes damaged , or test it for damage before you go to use it , but the original format of the file remains unchanged , and your recovery system is file system agnostic .</tokentext>
<sentencetext>A simple solution would be to add a file full of redundancy data alongside the original on the archival media.
A simple application could be used to repair the file if it becomes damaged, or test it for damage before you go to use it, but the original format of the file remains unchanged, and your recovery system is file system agnostic.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
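A minimal sketch of such a sidecar approach, using per-chunk SHA-256 digests for detection (repair would additionally need parity data, e.g. par2-style blocks; the chunk size and names here are arbitrary):

```python
# Sidecar file of per-chunk digests: a simple tool can later report
# exactly which chunks of the archived file went bad.
import hashlib

CHUNK = 4096

def make_sidecar(data: bytes):
    """One SHA-256 digest per CHUNK-sized slice of the file."""
    return [hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)]

def damaged_chunks(data: bytes, sidecar):
    """Indices of chunks whose digest no longer matches the sidecar."""
    return [n for n, i in enumerate(range(0, len(data), CHUNK))
            if hashlib.sha256(data[i:i + CHUNK]).digest() != sidecar[n]]

original = bytes(range(256)) * 64          # 16 KiB stand-in for the archive
side = make_sidecar(original)
hit = bytearray(original)
hit[5000] ^= 0xFF                          # corruption lands in chunk 1
assert damaged_chunks(bytes(hit), side) == [1]
```

Because the sidecar is just another file next to the original, the scheme works on any filesystem and leaves the archived format untouched, which is the point the parent makes.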
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324548</id>
	<title>Re:To much reinvention</title>
	<author>petermgreen</author>
	<datestamp>1259946180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer.<br>
&nbsp; &nbsp; &nbsp; &nbsp; Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.</i><br>The downside of this approach is that the checksums are not end to end. Every time the file is transferred from one location to another there is potential for corruption.</p></htmltext>
<tokenext>I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer .
        Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.The downside of this approach is that the checksums are not end to end .
Every time the file is transferred from one location to another there is potential for corruption .</tokentext>
<sentencetext>I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer.
        Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.The downside of this approach is that the checksums are not end to end.
Every time the file is transferred from one location to another there is potential for corruption.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30332080</id>
	<title>Re:Solution:</title>
	<author>Anonymous</author>
	<datestamp>1259940840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>Just don't compress anything, if a bit corrupts in a non compressed bitmap file or in a plain .txt file, no more than 1 pixel or letter is lost.</i></p><p>And don't encrypt anything either, as encryption is equivalent to compression (as far as potential for corruption goes).</p><p>This is why I am very wary of live, filesystem-wide encryption.  Sure, you get security, but you also get a hugely increased chance of a single bit corruption completely wiping out the entire filesystem.</p></htmltext>
<tokenext>Just do n't compress anything , if a bit corrupts in a non compressed bitmap file or in a plain .txt file , no more than 1 pixel or letter is lost.And do n't encrypt anything either , as encryption is equivalent to compression ( as far as potential for corruption goes ) .This is why I am very weary of live , filesystem-wide encryption .
Sure , you get security , but you also get a hugely increased chance of a single bit corruption completely wiping out the entire filesystem .</tokentext>
<sentencetext>Just don't compress anything, if a bit corrupts in a non compressed bitmap file or in a plain .txt file, no more than 1 pixel or letter is lost.And don't encrypt anything either, as encryption is equivalent to compression (as far as potential for corruption goes).This is why I am very weary of live, filesystem-wide encryption.
Sure, you get security, but you also get a hugely increased chance of a single bit corruption completely wiping out the entire filesystem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30358254</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Anonymous</author>
	<datestamp>1260183300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>The difference, of course, is that the cinemas must provide a good presentation. If they fail to do so, people will stop coming. If the presentation isn't better than watching TV/DVD/BluRay at home, then why pay the $11?</p></div></blockquote><p>Because you want to get out of the house? Because you want to be around other people? Because you want to take another person out on a date? I hate to be presumptuous, but your reasoning seems to miss the entire social aspect of why we do things like go to a theater; and it is typical of the nerdish type folk that populate this website, which sometimes includes me too. It's fine if you do not value those aspects, but you have to recognize that they exist and are very important to other people.</p>
	</htmltext>
<tokenext>The difference , of course , is that the cinemas must provide a good presentation .
If they fail to do so , people will stop coming .
If the presentation is n't better than watching TV/DVD/BluRay at home , then why pay the $ 11 ? Because you want to get out of the house ?
Because you want to be around other people ?
Because you want to take another person out on a date ?
I hate to be presumptuous , but your reasoning seems to miss the entire social aspect of why we do things like go to a theater ; and it is typical for the nerdish type folk that populate this website , which sometimes includes me too .
It 's fine if you do not value those aspects , but you have to recognize that they exist and are very important for other people .</tokentext>
<sentencetext>The difference, of course, is that the cinemas must provide a good presentation.
If they fail to do so, people will stop coming.
If the presentation isn't better than watching TV/DVD/BluRay at home, then why pay the $11?Because you want to get out of the house?
Because you want to be around other people?
Because you want to take another person out on a date?
I hate to be presumptuous, but your reasoning seems to miss the entire social aspect of why we do things like go to a theater; and it is typical for the nerdish type folk that populate this website, which sometimes includes me too.
It's fine if you do not value those aspects, but you have to recognize that they exist and are very important for other people.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324066</id>
	<title>Clay Tablets</title>
	<author>Ukab the Great</author>
	<datestamp>1259943960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just resign yourself to the fact that the <a href="http://en.wikipedia.org/wiki/Code_of_Hammurabi" title="wikipedia.org">Code of Hammurabi</a> [wikipedia.org] will outlive your pr0n.</p></htmltext>
<tokenext>Just resign yourself to the fact that the Code of Hammurabi [ wikipedia.org ] will outlive your pr0n .</tokentext>
<sentencetext>Just resign yourself to the fact that the Code of Hammurabi [wikipedia.org] will outlive your pr0n.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30328482</id>
	<title>Re:Use TCP/IP</title>
	<author>Locke2005</author>
	<datestamp>1259919360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Bummer! Why can't your mom just yell down the stairs for you to come up for supper like my mom does, instead of calling you on the phone while you're trying to download ASCII porn?</htmltext>
<tokenext>Bummer !
Why ca n't your mom just yell down the stairs for you to come up for supper like my mom does , instead of calling you on the phone while you 're trying to download ASCII porn ?</tokentext>
<sentencetext>Bummer!
Why can't your mom just yell down the stairs for you to come up for supper like my mom does, instead of calling you on the phone while you're trying to download ASCII porn?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323068</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322734</id>
	<title>PNG was designed to be able to do this</title>
	<author>Anonymous</author>
	<datestamp>1259935320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>The PNG image format divides the image data into "chunks", typically 8 kbytes each, each with a CRC checksum.  You'd archive two copies of each image, presumably in two places and on different media.  Years later you check both files for CRC errors.  If there are just a few errors, they probably won't occur in the same chunk, so you can splice the good chunks from each stored file to create a new good file.</p></htmltext>
<tokenext>The PNG image format divides the image data into " chunks " , typically 8kbytes each , and each having a CRC checksum .
You 'd archive two copies of each image , presumably in two places and on different media .
Years later you check both files for CRC errors .
If there are just a few errors , probably they wo n't occur in the same chunk , so you can splice the good chunks from each stored file to create a new good file .</tokentext>
<sentencetext>The PNG image format divides the image data into "chunks", typically 8kbytes each, and each having a CRC checksum.
You'd archive two copies of each image, presumably in two places and on different media.
Years later you check both files for CRC errors.
If there are just a few errors, probably they won't occur in the same chunk, so you can splice the good chunks from each stored file to create a new good file.</sentencetext>
</comment>
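The splice step is mechanical once you have per-chunk CRCs. A sketch with zlib.crc32 standing in for PNG's chunk CRCs (the chunk size and flat layout here are a simplification, not actual PNG chunk framing):

```python
# Two stored copies plus per-chunk CRCs: for each chunk, take whichever
# copy still passes its CRC. Fails only if the SAME chunk is bad in both.
import zlib

CHUNK = 8192

def split(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def splice(copy_a, copy_b, crcs):
    out = []
    for ca, cb, crc in zip(split(copy_a), split(copy_b), crcs):
        if zlib.crc32(ca) == crc:
            out.append(ca)
        elif zlib.crc32(cb) == crc:
            out.append(cb)
        else:
            raise ValueError("chunk bad in both copies")
    return b"".join(out)

original = b"\x42" * (3 * CHUNK)
crcs = [zlib.crc32(c) for c in split(original)]
a = bytearray(original); a[100] ^= 0xFF            # chunk 0 bad in copy A
b = bytearray(original); b[2 * CHUNK + 5] ^= 0xFF  # chunk 2 bad in copy B
assert splice(bytes(a), bytes(b), crcs) == original
```

With random, sparse errors the chance of the same chunk failing in both copies is small, which is why the two-copies-plus-CRCs scheme works as well as it does.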
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322770</id>
	<title>Easy...</title>
	<author>Anonymous</author>
	<datestamp>1259935500000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Don't save anything.</p></htmltext>
<tokenext>Do n't save anything .</tokentext>
<sentencetext>Don't save anything.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325658</id>
	<title>Interesting missing-data experience</title>
	<author>KingAlanI</author>
	<datestamp>1259950860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>One time I was downloading a movie that didn't quite finish.<br>I went to watch the file [XviD in an .avi container] in VLC, and most of the movie played fine; obviously it stuttered over the missing sections.<br>So, perhaps there's something to be said for this format for video archiving.<br>(This format doesn't compress much further (with .zip or .rar, at least), so we wouldn't face compression-algorithm issues.)</p></htmltext>
<tokenext>One time I was downloading a movie that did n't quite finish.I went to watch the file [ XviD in an .avi container ] in VLC , and most of the movie played fine ; , obviously it stuttered over the missing sectionsSo , perhaps there 's something to say for this format for video-archiving .
( This format does n't compress much further ( with .zip or .rar at least ) , so we would n't face compression-algorithm issues .</tokentext>
<sentencetext>One time I was downloading a movie that didn't quite finish.I went to watch the file [XviD in an .avi container] in VLC, and most of the movie played fine;, obviously it stuttered over the missing sectionsSo, perhaps there's something to say for this format for video-archiving.
(This format doesn't compress much further (with .zip or .rar at least), so we wouldn't face compression-algorithm issues.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</id>
	<title>It's that computer called the brain.</title>
	<author>commodore64_love</author>
	<datestamp>1259935200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>&gt;&gt;&gt;"...analog pictures and film can degrade and still be usable; why can't the same be true of digital files?"</p><p>The ear-eye-brain connection has ~500 million years of development, and has learned the ability to filter-out noise.  If for example I'm listening to a radio, the hiss is mentally filtered-out, or if I'm watching a VHS tape that has wrinkles, my brain can focus on the undamaged areas.  In contrast when a computer encounters noise or errors, it panics and says, "I give up," and the digital radio or digital television goes blank.</p><p>What we need is a smarter computer that says, "I don't know what this is supposed to be, but here's my best guess," and displays noise.  Let the brain then takeover and mentally remove the noise from the audio or image.</p></htmltext>
<tokenext>&gt; &gt; &gt; " ...analog pictures and film can degrade and still be usable ; why ca n't the same be true of digital files ?
" The ear-eye-brain connection has ~ 500 million years of development , and has learned the ability to filter-out noise .
If for example I 'm listening to a radio , the hiss is mentally filtered-out , or if I 'm watching a VHS tape that has wrinkles , my brain can focus on the undamaged areas .
In contrast when a computer encounters noise or errors , it panics and says , " I give up , " and the digital radio or digital television goes blank.What we need is a smarter computer that says , " I do n't know what this is supposed to be , but here 's my best guess , " and displays noise .
Let the brain then take over and mentally remove the noise from the audio or image .</tokentext>
<sentencetext>&gt;&gt;&gt;"...analog pictures and film can degrade and still be usable; why can't the same be true of digital files?
"The ear-eye-brain connection has ~500 million years of development, and has learned the ability to filter-out noise.
If for example I'm listening to a radio, the hiss is mentally filtered-out, or if I'm watching a VHS tape that has wrinkles, my brain can focus on the undamaged areas.
In contrast when a computer encounters noise or errors, it panics and says, "I give up," and the digital radio or digital television goes blank.What we need is a smarter computer that says, "I don't know what this is supposed to be, but here's my best guess," and displays noise.
Let the brain then take over and mentally remove the noise from the audio or image.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325162</id>
	<title>never ending story</title>
	<author>thelonious</author>
	<datestamp>1259948700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>But then who polices the police bits?</p></htmltext>
<tokenext>But then who polices the police bits ?</tokentext>
<sentencetext>But then who polices the police bits?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323300</id>
	<title>Use small hunks and do the checksum thing</title>
	<author>davidwr</author>
	<datestamp>1259939820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The in-file checksum thing is a good idea, but it may be redundant to disk- or filesystem-level checksums.</p><p>Another useful thing is to store information in "chunks" so that if a bit goes bad no more than one "chunk" is lost.  A chunk could be a pixel or group of pixels in certain graphics formats, a page in certain "page" formats such as PDF or multi-page TIFF, a cell in a spreadsheet, a maximum-length run of characters in a word-processing document, etc.</p><p>Storing files in "ASCII-like" formats where it makes sense to do so is also a good idea from a data-recovery perspective.</p><p>For files that represent "events in time" such as music, video, or some scientific data collections, a "chunk" might be a second or some other period of time.</p><p>Many of today's data formats already operate at a "chunk" level.  Many do not.</p><p>On another note, these days "space is cheap" on disk, but not necessarily so when it comes to networking or the time it takes to make backups.  1 TB is under $100 on a home machine, several times that on a server, a relative pittance over the life of the drive.  However, copying 1 TB takes a non-trivial amount of time.</p></htmltext>
<tokenext>The in-file checksum thing is a good idea , but it may be redundant to disk- or filesystem-level checksums . Another useful thing is to store information in " chunks " so that if a bit goes bad no more than one " chunk " is lost .
A chunk could be a pixel or group of pixels in certain graphics formats , a page in certain " page " formats such as PDF or multi-page TIFF , a cell in a spreadsheet , a maximum-length run of characters in a word-processing document , etc . Storing files in " ascii-like " formats where it makes sense to do so is also a good idea from a data-recovery perspective . For files that represent " events in time " such as music , video , or some scientific data collections , a " chunk " might be a second or some other period of time . Many of today 's data formats already operate at a " chunk " level .
Many do not . On another note , these days , " space is cheap " on disk , but not necessarily so when it comes to networking or the time it takes to make backups .
1TB is under $ 100 on a home machine , several times that on a server , a relative pittance over the life of the drive .
However , copying 1 TB takes a non-trivial amount of time .</tokentext>
<sentencetext>The in-file checksum thing is a good idea, but it may be redundant to disk- or filesystem-level checksums. Another useful thing is to store information in "chunks" so that if a bit goes bad no more than one "chunk" is lost.
A chunk could be a pixel or group of pixels in certain graphics formats, a page in certain "page" formats such as PDF or multi-page TIFF, a cell in a spreadsheet, a maximum-length run of characters in a word-processing document, etc. Storing files in "ascii-like" formats where it makes sense to do so is also a good idea from a data-recovery perspective. For files that represent "events in time" such as music, video, or some scientific data collections, a "chunk" might be a second or some other period of time. Many of today's data formats already operate at a "chunk" level.
Many do not. On another note, these days, "space is cheap" on disk, but not necessarily so when it comes to networking or the time it takes to make backups.
1TB is under $100 on a home machine, several times that on a server, a relative pittance over the life of the drive.
However, copying 1 TB takes a non-trivial amount of time.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324216</id>
	<title>We can read Egyptian hieroglyphs 3,000 yrs later</title>
	<author>BrightSpark</author>
	<datestamp>1259944680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What's wrong with physical storage for say 200 years? Most data we save these days will be fine stored this way. The issue is what we choose to store just because we can, which causes problems. The net is overflowing with the minutiae of boring existences. What we really need is a data scythe to cut the rubbish out and then store it. The Victorian era handed down lots of books, magazines and pamphlets. Some of these are preserved and read by historians. In 150 years will we really care about the financial statements for Goldman-Sachs or want the blog of Paris Hilton? (No lewd comments, it was rhetorical).</htmltext>
<tokenext>What 's wrong with physical storage for say 200 years ?
Most data we save these days will be fine stored this way .
The issue is what we choose to store just because we can , which causes problems .
The net is overflowing with the minutiae of boring existences .
What we really need is a data scythe to cut the rubbish out and then store it .
The Victorian era handed down lots of books , magazines and pamphlets .
Some of these are preserved and read by historians .
In 150 years will we really care about the financial statements for Goldman-Sachs or want the blog of Paris Hilton ?
( No lewd comments , it was rhetorical ) .</tokentext>
<sentencetext>What's wrong with physical storage for say 200 years?
Most data we save these days will be fine stored this way.
The issue is what we choose to store just because we can, which causes problems.
The net is overflowing with the minutiae of boring existences.
What we really need is a data scythe to cut the rubbish out and then store it.
The Victorian era handed down lots of books, magazines and pamphlets.
Some of these are preserved and read by historians.
In 150 years will we really care about the financial statements for Goldman-Sachs or want the blog of Paris Hilton?
(No lewd comments, it was rhetorical).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323026</id>
	<title>Don't worry</title>
	<author>Anonymous</author>
	<datestamp>1259938020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Quantum computers will save us. They could examine every combination possible to rebuild a file in seconds.</p></htmltext>
<tokenext>Quantum computers will save us .
They could examine every combination possible to rebuild a file in seconds .</tokentext>
<sentencetext>Quantum computers will save us.
They could examine every combination possible to rebuild a file in seconds.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322986</id>
	<title>Re:What about the "block errors"?</title>
	<author>careysb</author>
	<datestamp>1259937600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>RAID 0 does not offer "block redundancy".

If I have old-fashioned backups, how can I determine that my primary file has sustained damage? The OS doesn't tell us. In fact, the OS probably doesn't even know.

Backup software often allows you to validate the backup immediately after the backup was made, but not, say, a year later.

The term "checksum" is over-used. A true check-sum algorithm *may* tell you that a file has sustained damage. It will not, however, tell you how to correct it. A "CRC" (cyclic-redundancy-check) or "SHA" (secure hashing algorithm) has a better chance of flagging a damaged file than a check-sum. The only "correction" algorithm that I've tripped across (and there are probably others) is "Reed-Solomon".</htmltext>
<tokenext>RAID 0 does not offer " block redundancy " .
If I have old-fashioned backups , how can I determine that my primary file has sustained damage ?
The OS does n't tell us .
In fact , the OS probably does n't even know .
Backup software often allows you to validate the backup immediately after the backup was made , but not , say , a year later .
The term " checksum " is over-used .
A true check-sum algorithm * may * tell you that a file has sustained damage .
It will not , however , tell you how to correct it .
A " CRC " ( cyclic-redundancy-check ) or " SHA " ( secure hashing algorithm ) has a better chance of flagging a damaged file than a check-sum .
The only " correction " algorithm that I 've tripped across ( and there are probably others ) is " Reed-Solomon " .</tokentext>
<sentencetext>RAID 0 does not offer "block redundancy".
If I have old-fashioned backups, how can I determine that my primary file has sustained damage?
The OS doesn't tell us.
In fact, the OS probably doesn't even know.
Backup software often allows you to validate the backup immediately after the backup was made, but not, say, a year later.
The term "checksum" is over-used.
A true check-sum algorithm *may* tell you that a file has sustained damage.
It will not, however, tell you how to correct it.
A "CRC" (cyclic-redundancy-check) or "SHA" (secure hashing algorithm) has a better chance of flagging a damaged file than a check-sum.
The only "correction" algorithm that I've tripped across (and there are probably others) is "Reed-Solomon".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322792</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323104</id>
	<title>Re:Sun Microsystems..... zfs.....</title>
	<author>sskinnider</author>
	<datestamp>1259938560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Hard drives are not yet a ubiquitous medium for archiving.  Once the files are written to CD or tape, you lose the advantage that hardware or filesystem protection gave you, and now you are dependent on the lifespan and degradation of the media.</htmltext>
<tokenext>Hard drives are not yet a ubiquitous medium for archiving .
Once the files are written to CD or tape , you lose the advantage that hardware or filesystem protection gave you , and now you are dependent on the lifespan and degradation of the media .</tokentext>
<sentencetext>Hard drives are not yet a ubiquitous medium for archiving.
Once the files are written to CD or tape, you lose the advantage that hardware or filesystem protection gave you, and now you are dependent on the lifespan and degradation of the media.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323792</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Bengie</author>
	<datestamp>1259942580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>But how much of our ability to "read through" noise is because we know what data to expect in the first place?</p><p>I know when listening to a radio station that's barely coming in, many times it will sound like random noise, but at some point I will hear a certain note or something and suddenly I know what song it is. Now that I know what song it is, I have no problems actually "hearing" the song. I know what to expect from the song or have picked up on certain cues, and I'm guessing that pattern recognition goes into overdrive and dynamically fills in the gaps from memory, or even uses some sort of heuristics based on the current acoustical pattern, when you listen to parts you don't know.</p></htmltext>
<tokenext>But how much of our ability to " read through " noise is because we know what data to expect in the first place ? I know when listening to a radio station that 's barely coming in , many times it will sound like random noise , but at some point I will hear a certain note or something and suddenly I know what song it is .
Now that I know what song it is , I have no problems actually " hearing " the song .
I know what to expect from the song or have picked up on certain cues , and I 'm guessing that pattern recognition goes into overdrive and dynamically fills in the gaps from memory , or even uses some sort of heuristics based on the current acoustical pattern , when you listen to parts you do n't know .</tokentext>
<sentencetext>But how much of our ability to "read through" noise is because we know what data to expect in the first place? I know when listening to a radio station that's barely coming in, many times it will sound like random noise, but at some point I will hear a certain note or something and suddenly I know what song it is.
Now that I know what song it is, I have no problems actually "hearing" the song.
I know what to expect from the song or have picked up on certain cues, and I'm guessing that pattern recognition goes into overdrive and dynamically fills in the gaps from memory, or even uses some sort of heuristics based on the current acoustical pattern, when you listen to parts you don't know.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323670</id>
	<title>Re:It's that computer called the brain.</title>
	<author>PinkyDead</author>
	<datestamp>1259941920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This was extremely useful in the 1980s when certain television channels were available from Sweden and Holland but only in a scrambled form.</p></htmltext>
<tokenext>This was extremely useful in the 1980s when certain television channels were available from Sweden and Holland but only in a scrambled form .</tokentext>
<sentencetext>This was extremely useful in the 1980s when certain television channels were available from Sweden and Holland but only in a scrambled form.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323932</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Kjella</author>
	<datestamp>1259943240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That would be trivial to do if we were still doing BMP and WAV files, where one bit = one speck of noise. But on the file/network level we use a ton of compression, and the result is that a bit error isn't a bit error in the human sense. Bits are part of a block, and other blocks depend on that block. One change and everything that comes after until the next key frame changes. That means two vastly different results are suddenly a bit apart.</p><p>Of course, no actual medium is that stable, which is why we use error correction codes, Reed-Solomon being the most common. With those you can fill in any x% of missing information perfectly; but if you miss a few bytes too many, you have a bunch of digital junk, because the matrix isn't solvable. So failing at, say, a 25% recovery rate, you instantly drop from 100% to 80% of the information, then you lose everything up to the first key frame of anything recovered because it's all deltas. What's left just isn't worth showing to anyone; it's not noise in any human meaning of the word.</p></htmltext>
<tokenext>That would be trivial to do if we were still doing BMP and WAV files , where one bit = one speck of noise .
But on the file/network level we use a ton of compression , and the result is that a bit error is n't a bit error in the human sense .
Bits are part of a block , and other blocks depend on that block .
One change and everything that comes after until the next key frame changes .
That means two vastly different results are suddenly a bit apart . Of course , no actual medium is that stable , which is why we use error correction codes , Reed-Solomon being the most common .
With those you can fill in any x % of missing information perfectly ; but if you miss a few bytes too many , you have a bunch of digital junk , because the matrix is n't solvable .
So failing at , say , a 25 % recovery rate , you instantly drop from 100 % to 80 % of the information , then you lose everything up to the first key frame of anything recovered because it 's all deltas .
What 's left just is n't worth showing to anyone , it 's not noise in any human meaning of the word .</tokentext>
<sentencetext>That would be trivial to do if we were still doing BMP and WAV files, where one bit = one speck of noise.
But on the file/network level we use a ton of compression, and the result is that a bit error isn't a bit error in the human sense.
Bits are part of a block, and other blocks depend on that block.
One change and everything that comes after until the next key frame changes.
That means two vastly different results are suddenly a bit apart. Of course, no actual medium is that stable, which is why we use error correction codes, Reed-Solomon being the most common.
With those you can fill in any x% of missing information perfectly; but if you miss a few bytes too many, you have a bunch of digital junk, because the matrix isn't solvable.
So failing at, say, a 25% recovery rate, you instantly drop from 100% to 80% of the information, then you lose everything up to the first key frame of anything recovered because it's all deltas.
What's left just isn't worth showing to anyone, it's not noise in any human meaning of the word.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327124</id>
	<title>Re:Sun Microsystems..... zfs.....</title>
	<author>Anonymous</author>
	<datestamp>1259956680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Make ZFS available everywhere.<br>I would LOVE to implement it on all my computers, but I can't at a native filesystem level.</p><p>Oh well, my next file server will be built for Solaris and run ZFS, since BSD's implementation is apparently buggy.</p></htmltext>
<tokenext>Make ZFS available everywhere . I would LOVE to implement it on all my computers , but I ca n't at a native filesystem level . Oh well , my next file server will be built for Solaris and run ZFS , since BSD 's implementation is apparently buggy .</tokentext>
<sentencetext>Make ZFS available everywhere. I would LOVE to implement it on all my computers, but I can't at a native filesystem level. Oh well, my next file server will be built for Solaris and run ZFS, since BSD's implementation is apparently buggy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322976</id>
	<title>Film and digital</title>
	<author>Anonymous</author>
	<datestamp>1259937480000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Ten years ago my old company used to advocate that for individuals who wanted to convert paper to digital, they first put them on microfilm and then scan them. That way when their digital media got damaged or lost they could always recreate it. Film lasts for a long long time when stored correctly. Unfortunately that still seems to be the best advice, at least if you are starting from an analog original.</htmltext>
<tokenext>Ten years ago my old company used to advocate that for individuals who wanted to convert paper to digital , they first put them on microfilm and then scan them .
That way when their digital media got damaged or lost they could always recreate it .
Film lasts for a long long time when stored correctly .
Unfortunately that still seems to be the best advice , at least if you are starting from an analog original .</tokentext>
<sentencetext>Ten years ago my old company used to advocate that for individuals who wanted to convert paper to digital, they first put them on microfilm and then scan them.
That way when their digital media got damaged or lost they could always recreate it.
Film lasts for a long long time when stored correctly.
Unfortunately that still seems to be the best advice, at least if you are starting from an analog original.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</id>
	<title>What files does a single bit error destroy?</title>
	<author>jmitchel!jmitchel.co</author>
	<datestamp>1259935320000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>What files does a single bit error irretrievably destroy?  Obviously it may cause problems, even very annoying problems when you go to use the file.  But unless that one bit is in a really bad spot that information is pretty recoverable.</htmltext>
<tokenext>What files does a single bit error irretrievably destroy ?
Obviously it may cause problems , even very annoying problems when you go to use the file .
But unless that one bit is in a really bad spot that information is pretty recoverable .</tokentext>
<sentencetext>What files does a single bit error irretrievably destroy?
Obviously it may cause problems, even very annoying problems when you go to use the file.
But unless that one bit is in a really bad spot that information is pretty recoverable.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30333382</id>
	<title>Re:It's that computer called the brain.</title>
	<author>sourICE</author>
	<datestamp>1260046380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>What we need is a smarter computer that says, "I don't know what this is supposed to be, but here's my best guess," and displays noise. Let the brain then take over and mentally remove the noise from the audio or image.</p></div><p>You obviously don't understand the mechanics of computer programming. A computer attempting to execute a backed-up program cannot possibly view a file in this manner and attempt to show you 'noise', because the computer is missing an instruction in its likely already compiled code, and without being the actual programmer or having access to the original source code, good luck knowing what instruction is actually missing.</p>
	</htmltext>
<tokenext>What we need is a smarter computer that says , " I do n't know what this is supposed to be , but here 's my best guess , " and displays noise .
Let the brain then take over and mentally remove the noise from the audio or image . You obviously do n't understand the mechanics of computer programming .
A computer attempting to execute a backed-up program can not possibly view a file in this manner and attempt to show you 'noise ' , because the computer is missing an instruction in its likely already compiled code , and without being the actual programmer or having access to the original source code , good luck knowing what instruction is actually missing .</tokentext>
<sentencetext>What we need is a smarter computer that says, "I don't know what this is supposed to be, but here's my best guess," and displays noise.
Let the brain then take over and mentally remove the noise from the audio or image. You obviously don't understand the mechanics of computer programming.
A computer attempting to execute a backed-up program cannot possibly view a file in this manner and attempt to show you 'noise', because the computer is missing an instruction in its likely already compiled code, and without being the actual programmer or having access to the original source code, good luck knowing what instruction is actually missing.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324176</id>
	<title>Where's ISO</title>
	<author>GerryHattrick</author>
	<datestamp>1259944500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Come on, ISO, where are you?  We all need the best (or alternatively, least-worst) glidepath now.  When I retired, the argument was all about proprietary formats for formatted text, and this and that.  USians seemed to want to take the lead on everything and thereby 'offer' formal Secretariat (and steering).  Now there's something worth doing - fix it, folks - and non-proprietary, pretty-please.</htmltext>
<tokenext>Come on , ISO , where are you ?
We all need the best ( or alternatively , least-worst ) glidepath now .
When I retired , the argument was all about proprietary formats for formatted text , and this and that .
USians seemed to want to take the lead on everything and thereby 'offer ' formal Secretariat ( and steering ) .
Now there 's something worth doing - fix it , folks - and non-proprietary , pretty-please .</tokentext>
<sentencetext>Come on, ISO, where are you?
We all need the best (or alternatively, least-worst) glidepath now.
When I retired, the argument was all about proprietary formats for formatted text, and this and that.
USians seemed to want to take the lead on everything and thereby 'offer' formal Secretariat (and steering).
Now there's something worth doing - fix it, folks - and non-proprietary, pretty-please.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323188</id>
	<title>Re:To much reinvention</title>
	<author>Anonymous</author>
	<datestamp>1259939160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>If this type of thing is implemented at the file level every application is going to have to do its own thing.  That means too many implementations, most of which won't be very good or well tested.  It also means application developers will have to be busy slogging through error correction data in their files rather than the data they actually wanted to persist for their application.  I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer.</p><p>
&nbsp; &nbsp; Also if we can present the same logical file when read to the application, even if every 9th byte is parity on the disk, that is a plus, because it means legacy apps can get the enhanced protection as well.</p></div><p>Precisely. This is what things like torrents, RAR files with recovery blocks, and filesystems like ZFS are for: so every app developer doesn't have to roll their own, badly.</p>
	</htmltext>
<tokenext>If this type of thing is implemented at the file level every application is going to have to do its own thing .
That means too many implementations , most of which wo n't be very good or well tested .
It also means application developers will have to be busy slogging through error correction data in their files rather than the data they actually wanted to persist for their application .
I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer .
    Also if we can present the same logical file when read to the application , even if every 9th byte is parity on the disk , that is a plus , because it means legacy apps can get the enhanced protection as well . Precisely .
This is what things like torrents , RAR files with recovery blocks , and filesystems like ZFS are for : so every app developer does n't have to roll their own , badly .</tokentext>
<sentencetext>If this type of thing is implemented at the file level every application is going to have to do its own thing.
That means too many implementations, most of which won't be very good or well tested.
It also means application developers will have to be busy slogging through error correction data in their files rather than the data they actually wanted to persist for their application.
I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer.
    Also if we can present the same logical file when read to the application, even if every 9th byte is parity on the disk, that is a plus, because it means legacy apps can get the enhanced protection as well. Precisely.
This is what things like torrents, RAR files with recovery blocks, and filesystems like ZFS are for: so every app developer doesn't have to roll their own, badly.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322756</id>
	<title>clueless</title>
	<author>Anonymous</author>
	<datestamp>1259935440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Stupid idea. Nowadays digital preservation is more about file format conversion than about bit rot.</htmltext>
<tokenext>Stupid idea .
Nowadays digital preservation is more about file format conversion than about bit rot .</tokentext>
<sentencetext>Stupid idea.
Nowadays digital preservation is more about file format conversion than about bit rot.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323526</id>
	<title>Re:Sun Microsystems..... zfs.....</title>
	<author>xZgf6xHx2uhoAj9D</author>
	<datestamp>1259941200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why? There's no reason a filesystem like ZFS can't be used on CD or tape and a lot of people do use them.
</p><p>Even if you didn't want to do that, ISO 9660, the filesystem used by default on data CDs, contains its own error correction scheme (288 bytes of redundancy for every 2048 byte block).</p></htmltext>
<tokenext>Why ?
There 's no reason a filesystem like ZFS ca n't be used on CD or tape and a lot of people do use them .
Even if you did n't want to do that , ISO 9660 , the filesystem used by default on data CDs , contains its own error correction scheme ( 288 bytes of redundancy for every 2048 byte block ) .</tokentext>
<sentencetext>Why?
There's no reason a filesystem like ZFS can't be used on CD or tape and a lot of people do use them.
Even if you didn't want to do that, ISO 9660, the filesystem used by default on data CDs, contains its own error correction scheme (288 bytes of redundancy for every 2048 byte block).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323104</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323666</id>
	<title>Re:It's that computer called the brain.</title>
	<author>stevew</author>
	<datestamp>1259941920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You're right about CDs always having contained this type of ECC mechanism.  For that matter, you will see this type of ECC in radio-based communications infrastructure and data that gets written to hard disks too!  In other words, all modern data storage devices (except maybe Flash) contain ECC mechanisms that allow burst error detection and correction.</p><p>So now we're talking about being doubly redundant.  Put ECC on the ECC?  I'm not sure that helps.</p><p>Consider a bit that twiddles on a magnetic domain.  It will be found and corrected by the codes that are already protecting the bit when the bit is next accessed. This is hardware functionality. What good does expanding the size of your data do by being redundant with the ECC codes?  It makes more sense to actually increase the size of your ECC fields within the hardware mechanism to handle larger burst errors. It certainly is more efficient with storage anyway.</p><p>There IS a point where the media can be so damaged that the data can't be retrieved.  Your errors exceed the number of bits you are capable of correcting.  Why does anyone believe the doubly redundant data is going to be any better? It has the same correction limits as the original algorithms... and likely got compromised along with the hardware ECC.</p><p>I don't see the point.</p></htmltext>
<tokenext>You 're right about CDs always having contained this type of ECC mechanism .
For that matter , you will see this type of ECC in radio-based communications infrastructure and data that gets written to hard disks too !
In other words , all modern data storage devices ( except maybe Flash ) contain ECC mechanisms that allow burst error detection and correction . So now we 're talking about being doubly redundant .
Put ECC on the ECC ?
I 'm not sure that helps . Consider a bit that twiddles on a magnetic domain .
It will be found and corrected by the codes that are already protecting the bit when the bit is next accessed .
This is hardware functionality .
What good does expanding the size of your data do by being redundant with the ECC codes ?
It makes more sense to actually increase the size of your ECC fields within the hardware mechanism to handle larger burst errors .
It certainly is more efficient with storage anyway.There IS a point where the media can be so damaged that the data ca n't be retrieved .
Your errors exceed the number of bits you are capable of correcting .
Why does anyone believe the doubly redundant data is going to be any better ?
It has the same correction limits as the original algorithms...and likely got compromised along with the hardware ECC.I do n't see the point .</tokentext>
<sentencetext>You're right about CD's always having contained this type of ECC mechanism.
For that matter you will see this type of ECC in Radio based communications infrastructure and data that gets written to Hard disks too!
In other words - all modern data storage devices (except maybe Flash..) contain ECC mechanisms that allow burst error detection and correction.So now we're talking about being doubly redundant.
Put ECC on the ECC?
I'm not sure that helps.Consider - if a bit twiddles on a magnetic domain.
It will be found and corrected by the codes that are already protecting the bit when the bit is next accessed.
This is hardware functionality.
What good does expanding the size of your data do by being redundant with the ECC codes?
It makes more sense to actually increase the size of your ECC fields within the hardware mechanism to handle larger burst errors.
It certainly is more efficient with storage anyway.There IS a point where the media can be so damaged that the data can't be retrieved.
Your errors exceed the number of bits you are capable of correcting.
Why does anyone believe the doubly redundant data is going to be any better?
It has the same correction limits as the original algorithms...and likely got compromised along with the hardware ECC.I don't see the point.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324238</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Anonymous</author>
	<datestamp>1259944800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>The ear-eye-brain connection has ~500 million years of development.</p></div><p>What are you talking about? The ear-eye-brain connection had 1 day (of 6) of development time, and around 4000 years of god-given evolutionary history...so let's get our facts straight<nobr> <wbr></nobr>:)</p></p>
	</htmltext>
<tokenext>The ear-eye-brain connection has ~ 500 million years of development.What are you talking about ?
The ear-eye-brain connection had 1 day ( of 6 ) of development time , and around 4000 years of god-given evolutionary history...so let 's get our facts straight : )</tokentext>
<sentencetext>The ear-eye-brain connection has ~500 million years of development.What are you talking about?
The ear-eye-brain connection had 1 day (of 6) of development time, and around 4000 years of god-given evolutionary history...so let's get our facts straight :)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322872</id>
	<title>Re:What files does a single bit error destroy?</title>
	<author>Rockoon</author>
	<datestamp>1259936460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>Most modern compression formats will not tolerate any errors. With LZ a single bit error could propagate over a long expanse of the uncompressed output, while with Arithmetic encoding the remainder of the file following the single bit error will be completely unrecoverable.<br>
<br>
Pretty much only the prefix-code style compression schemes (Huffman for one) will isolate errors to short segments, and then only if the compressor is not of the adaptive variety.</htmltext>
<tokenext>Most modern compression formats will not tolerate any errors .
With LZ a single bit error could propagate over a long expanse of the uncompressed output , while with Arithmetic encoding the remainder of the file following the single bit error will be completely unrecoverable .
Pretty much only the prefix-code style compression schemes ( Huffman for one ) will isolate errors to short segments , and then only if the compressor is not of the adaptive variety .</tokentext>
<sentencetext>Most modern compression formats will not tolerate any errors.
With LZ a single bit error could propagate over a long expanse of the uncompressed output, while with Arithmetic encoding the remainder of the file following the single bit error will be completely unrecoverable.
Pretty much only the prefix-code style compression schemes (Huffman for one) will isolate errors to short segments, and then only if the compressor is not of the adaptive variety.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325160</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Anonymous</author>
	<datestamp>1259948700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>OT but I stopped going to movies because I was so pissed with the bad audio and convergence issues when I went to see "Watchmen".  I am so pleased with my 67" LED DLP  + BluRay + 5.1 surround</p></htmltext>
<tokenext>OT but I stopped going to movies because I was so pissed with the bad audio and convergence issues when I went to see " Watchmen " .
I am so pleased with my 67 " LED DLP + BluRay + 5.1 surround</tokentext>
<sentencetext>OT but I stopped going to movies because I was so pissed with the bad audio and convergence issues when I went to see "Watchmen".
I am so pleased with my 67" LED DLP  + BluRay + 5.1 surround</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325604</id>
	<title>Re:To much reinvention</title>
	<author>Danga</author>
	<datestamp>1259950560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Not to nitpick, but I'm going to nitpick...</i></p><p><i>Parity bits don't allow you to correct an error, only check that an error is present.</i></p><p>Next time, nitpick in an area you know a lot about, which is definitely NOT error correction or detection, since you messed up the basic acronyms (not a big deal); but to say parity cannot be used for correction is flat out WRONG.  The whole point of parity is to use it for error CORRECTION; here is a simple example:</p><p>Data byte 1: 0xF<br>Data byte 2: 0xA<br>Data byte 3: 0x3</p><p>Generate parity data by simply XORing each byte: 0xF XOR 0xA XOR 0x3 = 0x6</p><p>Now, if byte 2 is lost/corrupted then it can be regenerated by XORing the remaining bytes including the parity byte 0x6:</p><p>Data byte 2 = 0xF XOR 0x3 XOR 0x6 = 0xA</p><p>Voila!  Data byte 2 has been 100% fully corrected using parity, so your nitpicking quote "Parity bits don't allow you to correct an error" is 100% wrong.</p>
	</htmltext>
<tokenext>Not to nitpick , but I 'm going to nitpick...Parity bits do n't allow you to correct an error , only check that an error is present.Next time nitpick in an area you know a lot about which is definitely NOT error correction or detection since you messed up the basic acronyms ( not a big deal ) but to say parity can not be used for correction is flat out WRONG .
The whole point of parity is to use it for error CORRECTION , here is a simple example : Data byte 1 : 0xFData byte 2 : 0xAData byte 3 : 0x3Generate parity data by simply XORing each byte : 0xF XOR 0xA XOR 0x3 = 0x6Now , if byte 2 is lost/corrupted then it can be regenerated by XORing the remaining bytes including the parity byte 0x6 : Data byte 2 = 0xF XOR 0x3 XOR 0x6 = 0xAVoila !
Data byte 2 has been 100 % fully corrected using parity , so your nitpicking quote " Parity bits do n't allow you to correct an error " is 100 % wrong .</tokentext>
<sentencetext>Not to nitpick, but I'm going to nitpick...Parity bits don't allow you to correct an error, only check that an error is present.Next time nitpick in an area you know a lot about which is definitely NOT error correction or detection since you messed up the basic acronyms (not a big deal) but to say parity cannot be used for correction is flat out WRONG.
The whole point of parity is to use it for error CORRECTION, here is a simple example:Data byte 1: 0xFData byte 2: 0xAData byte 3: 0x3Generate parity data by simply XORing each byte: 0xF XOR 0xA XOR 0x3 = 0x6Now, if byte 2 is lost/corrupted then it can be regenerated by XORing the remaining bytes including the parity byte 0x6:Data byte 2 = 0xF XOR 0x3 XOR 0x6 = 0xAVoila!
Data byte 2 has been 100% fully corrected using parity, so your nitpicking quote "Parity bits don't allow you to correct an error" is 100% wrong.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324280</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325904</id>
	<title>Easier solution</title>
	<author>Locke2005</author>
	<datestamp>1259951940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Create massive redundant copies of each work (with MD5 checksums), and keep copying them to new media on a staggered basis. Whenever one copy fails a checksum check, replace it with a good copy. Memory is cheap; why keep just one or a few copies of anything that important? To the RIAA, let me say this: we are just trying to insure that none of your valuable intellectual property becomes lost due to data corruption. You release it, we'll archive it for you!</htmltext>
<tokenext>Create massive redundant copies of each work ( with MD5 checksums ) , and keep copying them to new media on a staggered basis .
Whenever one copy fails a checksum check , replace it with a good copy .
Memory is cheap ; why keep just one or a few copies of anything that important ?
To the RIAA , let me say this : we are just trying to ensure that none of your valuable intellectual property becomes lost due to data corruption .
You release it , we 'll archive it for you !</tokentext>
<sentencetext>Create massive redundant copies of each work (with MD5 checksums), and keep copying them to new media on a staggered basis.
Whenever one copy fails a checksum check, replace it with a good copy.
Memory is cheap; why keep just one or a few copies of anything that important?
To the RIAA, let me say this: we are just trying to ensure that none of your valuable intellectual property becomes lost due to data corruption.
You release it, we'll archive it for you!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327744</id>
	<title>HDDs do a LOT of this</title>
	<author>Anonymous</author>
	<datestamp>1259959440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I can say that HDDs make a lot of use of ECC. They misread bits all the time, but only seldom do these misreads require a re-read, much less cause actual corruption.</p><p>I assume that if an OEM requested stronger ECC (at the cost of some data capacity) they could do so.</p></htmltext>
<tokenext>I can say that HDDs make a lot of use of ECC .
They misread bits all the time , but only seldom do these misreads require a re-read , much less cause actual corruption.I assume that if an OEM requested higher ECC ( at the loss of data capacity ) they could do so .</tokentext>
<sentencetext>I can say that HDDs make a lot of use of ECC.
They misread bits all the time, but only seldom do these misreads require a re-read, much less cause actual corruption.I assume that if an OEM requested higher ECC (at the loss of data capacity) they could do so.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322924</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323014</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Fackamato</author>
	<datestamp>1259937900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Like a corrupted MPEG stream played in most video players?</p></htmltext>
<tokenext>Like a corrupted MPEG stream played in most video players ?</tokentext>
<sentencetext>Like a corrupted MPEG stream played in most video players?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322740</id>
	<title>Re:To much reinvention</title>
	<author>Anonymous</author>
	<datestamp>1259935320000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>At least they didn't suggest just sticking the files "in the cloud".</p></htmltext>
<tokenext>At least they did n't suggest just sticking the files " in the cloud " .</tokentext>
<sentencetext>At least they didn't suggest just sticking the files "in the cloud".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323368</id>
	<title>Re:par files</title>
	<author>drooling-dog</author>
	<datestamp>1259940180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm glad I didn't have to scroll down too far to see this. They're practically magic.</p></htmltext>
<tokenext>I 'm glad I did n't have to scroll down too far to see this .
They 're practically magic .</tokentext>
<sentencetext>I'm glad I didn't have to scroll down too far to see this.
They're practically magic.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322704</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323186</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Anonymous</author>
	<datestamp>1259939100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's done when it's possible to do so. Good luck trying it on something like a zip file.</p><p>Don't confuse background noise with a gap in the signal. Your speakers don't hiss when a CD is scratched, they pop, which is why in some cases it's better to just wait until the stream looks good again.</p></htmltext>
<tokenext>That 's done when it 's possible to do so .
Good luck trying it on something like a zip file.Do n't confuse background noise with a gap in the signal .
Your speakers do n't hiss when a CD is scratched , they pop , which is why in some cases it 's better to just wait until the stream looks good again .</tokentext>
<sentencetext>That's done when it's possible to do so.
Good luck trying it on something like a zip file.Don't confuse background noise with a gap in the signal.
Your speakers don't hiss when a CD is scratched, they pop, which is why in some cases it's better to just wait until the stream looks good again.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325004</id>
	<title>Re:It's that computer called the brain.</title>
	<author>petermgreen</author>
	<datestamp>1259948160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>You're right about CD's always having contained this type of ECC mechanism. For that matter you will see this type of ECC in Radio based communications infrastructure and data that gets written to Hard disks too! In other words - all modern data storage devices (except maybe Flash..) contain ECC mechanisms that allow burst error detection and correction.</i><br>There are a few issues:</p><p>1: the mechanisms aren't end to end. The data is protected by one system when on disk, another when in memory (and that is assuming you bought ECC RAM; builders of lower-end machines generally don't bother), another when on the network, and I don't think it's protected at all when flowing across the computer's buses.<br>2: the mechanisms are often pretty weak and the user generally has no control over their strength.<br>3: the mechanisms often work on pretty small blocks; if something obliterates a whole block they probably won't help you.</p><p>An end-to-end system where the amount of redundancy is controlled by the archivist, not by a penny-pinching hardware manufacturer (who will be trying to put in just enough ECC that their results don't come out as appalling), sounds far more attractive.</p><p>Though I'm not sure complex ECC algorithms are the best solution; I think a better way is probably to store copies of the file, along with strong checksums of blocks of the file, in multiple separate locations. When a checksum check fails, the block can be retrieved from other copies.</p><p>The good thing about this system is that you get corruption protection for very little extra cost over the redundancy you need for disaster protection anyway.</p></htmltext>
<tokenext>You 're right about CD 's always having contained this type of ECC mechanism .
For that matter you will see this type of ECC in Radio based communications infrastructure and data that gets written to Hard disks too !
In other words - all modern data storage devices ( except maybe Flash.. ) contain ECC mechanisms that allow burst error detection and correction.There are a few issues1 : the mechanisms are n't end to end .
The data is protected by one system when on disk , another when in memory ( and that is assuming you bought ECC ram , builders of lower end machines generally do n't bother ) another when on the network and I do n't think it 's protected at all when flowing across the computers buses.2 : the mechanisms are often pretty weak and the user generally has no control over their strength.3 : the mechanisms often work on pretty small blocks , if something obliterates a whole block they probably wo n't help you.An end to end system where the amount of redundancy is controlled by the archivist not by a penny pinching hardware manufacturer ( who will be trying to put in just enough ecc that their results do n't come out as appalling ) sounds far more attractive.Though i 'm not sure complex ecc algorithms are the best solution , I think a better way is probably to store copies of the file along with strong checksums of blocks of the file in multiple separate locations .
When a checksum check fails the block can be retrieved from other copies.The good thing about this system is that you get corruption protection for very little extra cost over the redundancy you need for disaster protection anyway .</tokentext>
<sentencetext>You're right about CD's always having contained this type of ECC mechanism.
For that matter you will see this type of ECC in Radio based communications infrastructure and data that gets written to Hard disks too!
In other words - all modern data storage devices (except maybe Flash..) contain ECC mechanisms that allow burst error detection and correction.There are a few issues1: the mechanisms aren't end to end.
The data is protected by one system when on disk, another when in memory (and that is assuming you bought ECC ram, builders of lower end machines generally don't bother) another when on the network and I don't think it's protected at all when flowing across the computers buses.2: the mechanisms are often pretty weak and the user generally has no control over their strength.3: the mechanisms often work on pretty small blocks, if something obliterates a whole block they probably won't help you.An end to end system where the amount of redundancy is controlled by the archivist not by a penny pinching hardware manufacturer (who will be trying to put in just enough ecc that their results don't come out as appalling) sounds far more attractive.Though i'm not sure complex ecc algorithms are the best solution, I think a better way is probably to store copies of the file along with strong checksums of blocks of the file in multiple separate locations.
When a checksum check fails the block can be retrieved from other copies.The good thing about this system is that you get corruption protection for very little extra cost over the redundancy you need for disaster protection anyway.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323666</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323700</id>
	<title>Re:Solution:</title>
	<author>Anonymous</author>
	<datestamp>1259942160000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>II  ccaann ssuuggeesstt eevveenn  bbeetteerr iiddeeaa.<br>
II  ccaann ssuuggeesstt eevveenn  bbeetteerr iiddeeaa.</htmltext>
<tokenext>II ccaann ssuuggeesstt eevveenn bbeetteerr iiddeeaa .
II ccaann ssuuggeesstt eevveenn bbeetteerr iiddeeaa .</tokentext>
<sentencetext>II  ccaann ssuuggeesstt eevveenn  bbeetteerr iiddeeaa.
II  ccaann ssuuggeesstt eevveenn  bbeetteerr iiddeeaa.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960</id>
	<title>Solution:</title>
	<author>Lord Lode</author>
	<datestamp>1259937360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Just don't compress anything; if a bit corrupts in a non-compressed bitmap file or in a plain<nobr> <wbr></nobr>.txt file, no more than 1 pixel or letter is lost.</htmltext>
<tokenext>Just do n't compress anything , if a bit corrupts in a non compressed bitmap file or in a plain .txt file , no more than 1 pixel or letter is lost .</tokentext>
<sentencetext>Just don't compress anything, if a bit corrupts in a non compressed bitmap file or in a plain .txt file, no more than 1 pixel or letter is lost.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30328512</id>
	<title>Re:To much reinvention</title>
	<author>iamacat</author>
	<datestamp>1259919480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How is it going to solve the problem when the data is passed around on CD-ROMs and flash drives or downloaded from a web site? By designing a proper file format, protection doesn't depend on the medium on which the file is stored. Potentially this can be implemented in a single, well-tested library that would also support metadata on the exact file type and creator's version beyond what can be inferred from the extension or the data itself. Ideally, zip/zlib could be enhanced to provide a user-specified amount of redundancy. This way packaging, compression, encryption and error correction of archived data can all be handled with a familiar tool.</p><p>One interesting example of digital media that does no worse than analog when corrupted is mp3. Even files accidentally transferred in ASCII mode are somewhat playable.</p></htmltext>
<tokenext>How is it going to solve the problem when the data is passed around on CD-ROMs and flash drives or downloaded from a web site ?
By designing a proper file format , protection does n't depend on the medium on which the file is stored .
Potentially this can be implemented in a single , well-tested library that would also support metadata on exact file type and creator 's version beyond what can be inferred from extension or the data itself .
Ideally , zip/zlib could be enhanced to provide user-specified amount of redundancy .
This way packaging , compression , encryption and error correction of archived data can be all handled with a familiar tool.One interesting example of digital media that does no worse than analog when corrupted is mp3 .
Even files accidentally transferred in ASCII mode are somewhat playable .</tokentext>
<sentencetext>How is it going to solve the problem when the data is passed around on CD-ROMs and flash drives or downloaded from a web site?
By designing a proper file format, protection doesn't depend on the medium on which the file is stored.
Potentially this can be implemented in a single, well-tested library that would also support metadata on exact file type and creator's version beyond what can be inferred from extension or the data itself.
Ideally, zip/zlib could be enhanced to provide user-specified amount of redundancy.
This way packaging, compression, encryption and error correction of archived data can be all handled with a familiar tool.One interesting example of digital media that does no worse than analog when corrupted is mp3.
Even files accidentally transferred in ASCII mode are somewhat playable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324212</id>
	<title>Re:It's that computer called the brain.</title>
	<author>phantomcircuit</author>
	<datestamp>1259944680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>If the presentation isn't better than watching TV/DVD/BluRay at home, then why pay the $11?</p></div><p>To get away from the kids?  To make out in the back row?</p><p>Seriously, people mostly don't go to the movie theatre for the movies.</p><p>Also, VLC tends to make a best guess at the frame when there is corruption; it's usually a pretty bad guess but it still tries.</p></p>
	</htmltext>
<tokenext>If the presentation is n't better than watching TV/DVD/BluRay at home , then why pay the $ 11 ? To get away from the kids ?
To make out in the back row ? Seriously people dont go to the movie theatre for the movies mostly.Also VLC tends to do best guess of the frame when there is corruption , it 's usually a pretty bad guess but it still tries .</tokentext>
<sentencetext>If the presentation isn't better than watching TV/DVD/BluRay at home, then why pay the $11?To get away from the kids?
To make out in the back row?Seriously people dont go to the movie theatre for the movies mostly.Also VLC tends to do best guess of the frame when there is corruption, it's usually a pretty bad guess but it still tries.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323704</id>
	<title>I've lost jpgs that way</title>
	<author>damn\_registrars</author>
	<datestamp>1259942160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Many of the jpgs that I took with my first digital camera were damaged or destroyed due to bit corruption.  I doubt I'm the only person who fell victim to that problem; those pictures were taken back in the day when jpg was the only option available on cameras, and many of us didn't know well enough to save them in a different file format afterwards.  Now I have a collection of images where half the image is missing or disrupted - and many others that simply don't open at all anymore.</htmltext>
<tokenext>Many of the jpgs that I took with my first digital camera were damaged or destroyed to bit corruption .
I doubt I 'm the only person who fell to that problem ; those pictures were taken back in the day when jpg was the only option available on cameras and many of us did n't know well enough to save it under a different file format afterwards .
Now I have a collection of images where half the image is missing or disrupted - and many others that just simply do n't open at all anymore .</tokentext>
<sentencetext>Many of the jpgs that I took with my first digital camera were damaged or destroyed to bit corruption.
I doubt I'm the only person who fell to that problem; those pictures were taken back in the day when jpg was the only option available on cameras and many of us didn't know well enough to save it under a different file format afterwards.
Now I have a collection of images where half the image is missing or disrupted - and many others that just simply don't open at all anymore.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324280</id>
	<title>Re:To much reinvention</title>
	<author>Bakkster</author>
	<datestamp>1259945040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.</p></div><p>Not to nitpick, but I'm going to nitpick...
</p><p>Parity bits don't allow you to correct an error, only check that an error is present.  This is useful for transfer protocols, where the system knows to ask for the file again, but the final result on an archived file is the same: file corrupt.  With parity, though, you have an extra bit that can be corrupted.
</p><p>In other words, parity is an error check system; what we need is ECC (Error Check and <em>Correct</em>).  A single parity bit is a 1-bit check, 0-bit correct.  We need a system that provides a correction limit at least as large as the average corruption rate over our required time.  Others in the comments have suggested common algorithms to do this, but simple parity just won't cut it.</p></p>
	</htmltext>
<tokenext>Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.Not to nitpick , but I 'm going to nitpick.. . Parity bits do n't allow you to correct an error , only check that an error is present .
This is useful for transfer protocols , where the system knows to ask for the file again , but the final result on an archived file is the same : file corrupt .
With parity , though , you have an extra bit that can be corrupted .
In other words , parity is an error check system , what we need is ECC ( Error Check and Correct ) .
Single parity bit is a 1-bit check , 0-bit correct .
We need a system that provides a correction limit at least as much as the average corruption rate for our required time .
Others in the comments have suggested common algorithms to do this , but simple parity just wo n't cut it .</tokentext>
<sentencetext>Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.Not to nitpick, but I'm going to nitpick...
Parity bits don't allow you to correct an error, only check that an error is present.
This is useful for transfer protocols, where the system knows to ask for the file again, but the final result on an archived file is the same: file corrupt.
With parity, though, you have an extra bit that can be corrupted.
In other words, parity is an error check system, what we need is ECC (Error Check and Correct).
Single parity bit is a 1-bit check, 0-bit correct.
we need a system that provides a correction limit at least as much as the average corruption rate for our required time.
Others in the comments have suggested common algorithms to do this, but simple parity just won't cut it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
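The parity-versus-ECC distinction drawn in the comment above can be made concrete with a Hamming(7,4) code, the textbook minimal scheme that can correct (not merely detect) a single flipped bit. This is an illustrative editorial sketch, not code from any commenter:

```python
# Minimal Hamming(7,4) sketch: 4 data bits become a 7-bit codeword with
# 3 parity bits; any single bit flip can be located and corrected.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword with at most one flipped bit -> 4 recovered data bits."""
    c = list(c)
    # Re-check each parity group; the three syndrome bits spell out the
    # 1-based position of the flipped bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    if pos:
        c[pos - 1] ^= 1  # correct the single-bit error in place
    return [c[2], c[4], c[5], c[6]]
```

A single parity bit is the degenerate 1-bit-check, 0-bit-correct case the comment names; Hamming adds just enough redundancy to turn detection into correction.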
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323068</id>
	<title>Use TCP/IP</title>
	<author>spacemky</author>
	<datestamp>1259938200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'd recommend just NOT using X-MODEM or Z-MODEM! Bit errors everywhere. Especially when mom picks up the telephone! ggggrrrrrrrrrrrrrrrr</p></htmltext>
<tokenext>I 'd recommend just NOT using X-MODEM or Z-MODEM !
Bit errors everywhere .
Especially when mom picks up the telephone !
ggggrrrrrrrrrrrrrrrr</tokentext>
<sentencetext>I'd recommend just NOT using X-MODEM or Z-MODEM!
Bit errors everywhere.
Especially when mom picks up the telephone!
ggggrrrrrrrrrrrrrrrr</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324324</id>
	<title>Re:What files does a single bit error destroy?</title>
	<author>mlts</author>
	<datestamp>1259945220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>With modern compression and encryption algorithms, a single flipped bit can mean a *lot* of downstream corruption: in video that uses deltas, later frames depend on earlier ones, and in stream-based encryption every upstream bit affects the ciphertext that follows. A single flipped bit can render an encrypted file completely unusable.</p></htmltext>
<tokenext>With modern compression and encryption algorithms , a single bit flipped can mean a * lot * of downstream corruption , especially in video that uses deltas , or encryption algorithms that are stream based , so all bits upstream have some effect as the file gets encrypted .
A single bit flipped will render an encrypted file completely unusable .</tokentext>
<sentencetext>With modern compression and encryption algorithms, a single bit flipped can mean a *lot* of downstream corruption, especially in video that uses deltas, or encryption algorithms that are stream based, so all bits upstream have some effect as the file gets encrypted.
A single bit flipped will render an encrypted file completely unusable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325984</id>
	<title>Re:To much reinvention</title>
	<author>lurking_giant</author>
	<datestamp>1259952240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Perhaps the drive manufacturers could design a drive so that it writes the same data to multiple platters (giving up capacity for redundancy). If you are writing to a 1-terabyte platter stack and it writes identical data with the same signal through 3 write heads on one arm to 3 platters, you could store 333 GB of doubly redundant archive. The controller would compare the 3 bit reads and choose the bits that match. Of course the bearings, motors, and pickups could be made redundant, or at least of the highest MTBF. Not RAID... "RAIP" (Redundant Array of Individual Platters).</p></htmltext>
<tokenext>Perhaps the drive manufacturers could design a drive so that the it wrote the same data to multiple platters ( give up capacity for redundancy ) .
If you are writing to a 1 terabyte platter stack and it writes 3 platters of the identical data with the same signal to 3 write heads on one arm , you could store 333gb of double redundant archive .
The controller would compare the 3 bit reads and choose the bits that match .
Of course the bearings and motors and pickups could be made redundant or at least of the highest MTBF .
Not RAID... " RAIP " ( Redundant Array of Individual Platters ) .</tokentext>
<sentencetext>Perhaps the drive manufacturers could design a drive so that the it wrote the same data to multiple platters (give up capacity for redundancy).
If you are writing to a 1 terabyte platter stack and it writes 3 platters of the identical data with the same signal to 3 write heads on one arm, you could store 333gb of double redundant archive.
The controller would compare the 3 bit reads and choose the bits that match.
Of course the bearings and motors and pickups could be made redundant or at least of the highest MTBF.
Not RAID... "RAIP" (Redundant Array of Individual Platters).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
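The 2-of-3 vote the "RAIP" comment describes is easy to model. The function below is an illustrative sketch of the controller's bitwise majority vote across three replicas, not an actual drive-firmware interface:

```python
# Bitwise 2-of-3 majority vote over three equal-length replicas: each
# output bit is whichever value at least two copies agree on, so any
# single corrupted replica is outvoted.

def majority_vote(a: bytes, b: bytes, c: bytes) -> bytes:
    """Per-bit majority of three replicas of the same data."""
    # (x & y) | (x & z) | (y & z) is 1 exactly when at least two inputs are 1.
    return bytes((x & y) | (x & z) | (y & z) for x, y, z in zip(a, b, c))
```

Note this tolerates one bad value per bit position, so even two damaged replicas are recoverable as long as their errors never land on the same bit.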
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728</id>
	<title>Sun Microsystems..... zfs.....</title>
	<author>Anonymous</author>
	<datestamp>1259935260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>ZFS.</p><p>Next topic....</p></htmltext>
<tokenext>ZFS.Next topic... .</tokentext>
<sentencetext>ZFS.Next topic....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324202</id>
	<title>Re:It's that computer called the brain.</title>
	<author>TigerNut</author>
	<datestamp>1259944620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>One thing that helps is to look specifically at damage mechanisms and then come up with a strategy that offers the best workaround. As an example, in the first-generation TDMA digital cellular phone standard, it was recognized that channel noise was typically bursty. You could lose much of one frame of data, but adjacent frames would be OK. So they encoded important data with a 4:1 forward error correction scheme, and then interleaved the encoded data over two frames. If you lost a frame, the de-interleaving process followed by a maximum-likelihood data detector would still properly decode the data.<p>On a disk, a similar approach might be to use a 2:1 or 3:1 forward error correction and then interleave data over multiple sectors. If you wipe out a sector, you'd still have the data from the other sectors to recover from.
</p><p> This would, of course, be implemented best at a low level on the disk drive controller. At high throughput rates, the amount of computation required for this scheme is substantial. But you don't get something for nothing.</p></htmltext>
<tokenext>One thing that helps is to specifically look at damage mechanisms and then come up with a strategy that offers the best workaround .
As an example , in the first-generation TDMA digital cellular phone standard , it was recognized that channel noise was typically bursty .
You could lose much of one frame of data , but adjacent frames would be OK. So they encoded important data with a 4 : 1 forward error correction scheme , and then interleaved the encoded data over two frames .
If you lost a frame , the de-interleaving process followed by a maximum-likelyhood data detector would still properly decode the data.On a disk , a similar approach might be to use a 2 : 1 or 3 : 1 forward error correction and then interleave data over multiple sectors .
If you wipe out a sector , you 'd still have the data from the other sectors to recover from .
This would , of course , be implemented best at a low level on the disk drive controller .
At high throughput rates , the amount of computation required for this scheme is substantial .
But you do n't get something for nothing .</tokentext>
<sentencetext>One thing that helps is to specifically look at damage mechanisms and then come up with a strategy that offers the best workaround.
As an example, in the first-generation TDMA digital cellular phone standard, it was recognized that channel noise was typically bursty.
You could lose much of one frame of data, but adjacent frames would be OK. So they encoded important data with a 4:1 forward error correction scheme, and then interleaved the encoded data over two frames.
If you lost a frame, the de-interleaving process followed by a maximum-likelyhood data detector would still properly decode the data.On a disk, a similar approach might be to use a 2:1 or 3:1 forward error correction and then interleave data over multiple sectors.
If you wipe out a sector, you'd still have the data from the other sectors to recover from.
This would, of course, be implemented best at a low level on the disk drive controller.
At high throughput rates, the amount of computation required for this scheme is substantial.
But you don't get something for nothing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323666</parent>
</comment>
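The interleaving step described above can be sketched in a few lines: writing data round-robin across sectors turns the loss of one whole sector into evenly spaced small gaps, which a per-codeword error-correcting code could then repair. The sector depth here is a made-up illustrative parameter, not anything from a real controller:

```python
# Block interleaver sketch: spread bytes round-robin over `depth` sectors,
# so a burst error (one dead sector) becomes widely spaced single erasures.

def interleave(data: bytes, depth: int) -> list:
    """Distribute data round-robin over `depth` sectors."""
    return [data[i::depth] for i in range(depth)]

def deinterleave(sectors: list) -> bytes:
    """Reassemble the original stream; a zeroed sector leaves evenly
    spaced gaps rather than one contiguous hole."""
    total = sum(len(s) for s in sectors)
    depth = len(sectors)
    out = bytearray(total)
    for i, sector in enumerate(sectors):
        for j, byte in enumerate(sector):
            out[i + j * depth] = byte
    return bytes(out)
```

After losing one of three sectors, only every third byte is missing, which is exactly the error pattern a modest forward-error-correction code handles well.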
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323218</id>
	<title>Forward-error correction instead</title>
	<author>kriston</author>
	<datestamp>1259939340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I believe that forward error correction is an even better model.  It is already used for error-free transmission of data over error-prone radio links, and on Usenet via the PAR format; what better way to preserve data than with FEC?<br>Save your really precious files as Parchive files (PAR and PAR2).  You can spread them over several discs, or put several of the files on one disc.</p><p>It's one thing to detect errors, but it's a wholly different universe when you can also correct them.</p><p><a href="http://en.wikipedia.org/wiki/Parchive" title="wikipedia.org">http://en.wikipedia.org/wiki/Parchive</a> [wikipedia.org]</p></htmltext>
<tokenext>I believe that Forward-error correction is an even better model .
Already used for error-free transmission of data over error-prone links in radio , and USENET using the PAR format , what better way to preserve data than with FEC ? Save your really precious files as Parchive files ( PAR and PAR2 ) .
You can spread them over several discs or just one disc with several of the files on it.It 's one thing to detect errors , but it 's a wholly different universe when you can also correct them.http : //en.wikipedia.org/wiki/Parchive [ wikipedia.org ]</tokentext>
<sentencetext>I believe that Forward-error correction is an even better model.
Already used for error-free transmission of data over error-prone links in radio, and USENET using the PAR format, what better way to preserve data than with FEC?Save your really precious files as Parchive files (PAR and PAR2).
You can spread them over several discs or just one disc with several of the files on it.It's one thing to detect errors, but it's a wholly different universe when you can also correct them.http://en.wikipedia.org/wiki/Parchive [wikipedia.org]</sentencetext>
</comment>
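The erasure-coding idea behind Parchive can be illustrated with its simplest special case: a single XOR parity block, RAID-4 style. (PAR2 proper uses Reed-Solomon coding, which can rebuild multiple lost blocks; plain XOR rebuilds exactly one.) This is an editorial sketch, not the Parchive format:

```python
# Single-block XOR parity sketch: one extra block lets you rebuild any
# one missing data block, the degenerate case of Parchive-style recovery.

def xor_parity(blocks: list) -> bytes:
    """Parity block: byte-wise XOR of all equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def recover(blocks: list, parity: bytes) -> bytes:
    """Rebuild the single missing block (marked None) from the survivors:
    XOR of all surviving blocks plus the parity equals the lost block."""
    survivors = [b for b in blocks if b is not None]
    return xor_parity(survivors + [parity])
```

This is exactly why "it's a wholly different universe when you can also correct them": the parity data doesn't just flag the loss, it algebraically contains the missing block.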
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322972</id>
	<title>New versions of ISO, ZIP and Truecrypt for this?</title>
	<author>Brit_in_the_USA</author>
	<datestamp>1259937420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I would like this. Some options I could work with: extensions to the current CD/DVD/Blu-ray ISO formats, a new version of "ZIP" files, and a new version of TrueCrypt files.<br> If done in an open-standards way, I could be somewhat confident of support in many years' time, when I may need to read the archives. Obviously, backwards compatibility with earlier ISO/file formats would be a plus.</htmltext>
<tokenext>I would like this .
Some options I could work with : Extensions to current CD/DVD/Bluray ISO formats , new version of " ZIP " files and a new version of True Crypt files .
If done in an open standards way I could be somewhat confident of support in many years time when I may need to read the archives .
Obviously backwards compatibility with earlier iso/file formats would be a plus .</tokentext>
<sentencetext>I would like this.
Some options I could work with: Extensions to current CD/DVD/Bluray ISO formats, new version of "ZIP" files and a new version of True Crypt files.
If done in an open standards way I could be somewhat confident of support in many years time when I may need to read the archives.
Obviously backwards compatibility with earlier iso/file formats would be a plus.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322704</id>
	<title>par files</title>
	<author>Anonymous</author>
	<datestamp>1259935080000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>include par2 files</p></htmltext>
<tokenext>include par2 files</tokentext>
<sentencetext>include par2 files</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323380</id>
	<title>ZFS or Reed-Solomon</title>
	<author>ttsiod</author>
	<datestamp>1259940300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Use ZFS if you are in OpenSolaris/FreeBSD land, or use a Reed-Solomon tool - a GPL implementation of mine was Slashdotted 2 years ago, here: <a href="http://users.softlab.ntua.gr/~ttsiod/rsbep.html" title="softlab.ntua.gr" rel="nofollow">http://users.softlab.ntua.gr/~ttsiod/rsbep.html</a> [softlab.ntua.gr]</htmltext>
<tokenext>Use ZFS if you are in OpenSolaris/FreeBSD land , or use a Reed-Solomon tool - a GPL implementation of mine was Slashdotted 2 years ago , here : http : //users.softlab.ntua.gr/ ~ ttsiod/rsbep.html [ softlab.ntua.gr ]</tokentext>
<sentencetext>Use ZFS if you are in OpenSolaris/FreeBSD land, or use a Reed-Solomon tool - a GPL implementation of mine was Slashdotted 2 years ago, here: http://users.softlab.ntua.gr/~ttsiod/rsbep.html [softlab.ntua.gr]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327472</id>
	<title>ZFS must die...</title>
	<author>RedBear</author>
	<datestamp>1259958240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Everyone who is even considering spouting the letters "ZFS" in response to this article should really just STFU. Seriously.</p><p>Allow me to explain. Yes, ZFS is a very nice and very robust filesystem with great data protection and recovery features (although still subject to failure and data loss under some conditions, don't even try to deny it, it isn't perfect).</p><p>But all the ZFS zealots need to stop and think about all the other filesystems currently in use, and realize that ZFS will NEVER replace most of those filesystems in most situations. There needs to be a solution to bit rot that does not entail switching the entire world to a new filesystem. NTFS, FAT12/FAT16/FAT32, HFS+, Ext3/4, ReiserFS, UDF, all of these and more will continue to be in use in millions of computers and on billions of devices using removable or embedded media for many decades, and more filesystems will be invented in the future. You will never see a digital camera with built-in ZFS support, for instance. ZFS is totally unfeasible for that kind of application. It takes far too much processing power and memory to run ZFS for it to ever become anything resembling a universal filesystem. Filesystems like ZFS are not a panacea, there needs to be a solution (like PAR2) that is portable between ALL different filesystems that are now or ever will be in use.</p><p>Basically, things like the PAR2 parity archiving format already solve this type of problem, but in a way that is too limited. It needs to be better integrated into the filesystem or operating system level so that it works automatically on all kinds of different filesystems. Right now, the parity information is something that you have to manually create with a separate software tool like Parchive when you are interested in "archiving" something. 
This kind of functionality needs to be somehow tacked on to the file storage process so that the parity data is created, updated, and continuously checked by whatever is reading and writing the file, no matter where that file is stored. It needs to be part of the file itself, so that when a file is copied or moved, the parity data is not lost.</p><p>As usual, to any particular problem there is an answer that is straightforward, simple, and WRONG (I forget what smart person said that first). For this problem, ZFS is not the ultimate answer. It's great for specific situations like file servers, but that's about it. As soon as you remove a file from that file server, poof, you lose access to that parity information. That's just dumb. For important data that needs to be self-repairing, the only real solution is to include the parity information alongside the data, in a portable format.</p><p>Personally I've been quite surprised over the years that almost no modern filesystem in use anywhere has the kind of parity information built in that ZFS has. So much data could be easily recovered if filesystems were robust enough to handle simple things like bit errors or unreadable sectors. Why should my 2 GB file be ruined just because a single 512-byte sector became unreadable in a critical location in the file? It's idiotic to need multiple complete duplicate copies of every single type of data we ever store in order to be sure we can recover from simple forms of data degradation like bit rot.</p></htmltext>
<tokenext>Everyone who is even considering spouting the letters " ZFS " in response to this article should really just STFU .
Seriously.Allow me to explain .
Yes , ZFS is a very nice and very robust filesystem with great data protection and recovery features ( although still subject to failure and data loss under some conditions , do n't even try to deny it , it is n't perfect ) .But all the ZFS zealots need to stop and think about all the other filesystems currently in use , and realize that ZFS will NEVER replace most of those filesystems in most situations .
There needs to be a solution to bit rot that does not entail switching the entire world to a new filesystem .
NTFS , FAT12/FAT16/FAT32 , HFS + , Ext3/4 , ReiserFS , UDF , all of these and more will continue to be in use in millions of computers and on billions of devices using removable or embedded media for many decades , and more filesystems will be invented in the future .
You will never see a digital camera with built-in ZFS support , for instance .
ZFS is totally unfeasible for that kind of application .
It takes far too much processing power and memory to run ZFS for it to ever become anything resembling a universal filesystem .
Filesystems like ZFS are not a panacea , there needs to be a solution ( like PAR2 ) that is portable between ALL different filesystems that are now or ever will be in use.Basically , things like the PAR2 parity archiving format already solve this type of problem , but in a way that is too limited .
It needs to be better integrated into the filesystem or operating system level so that it works automatically on all kinds of different filesystems .
Right now , the parity information is something that you have to manually create with a separate software tool like Parchive when you are interested in " archiving " something .
This kind of functionality needs to be somehow tacked on to the file storage process so that the parity data is created , updated and continuously checked by whatever is reading and writing to the file , no matter where that file is stored .
It needs to be part of the file itself , so that when a file is copied or moved , the parity data is not lost.As usual , to any particular problem there is an answer that is straightforward , simple and WRONG ( I forget what smart person said that first ) .
For this problem , ZFS is not the ultimate answer .
It 's great for specific situations like file servers , but that 's about it .
As soon as you remove a file from that file server , poof , you lose access to that parity information .
That 's just dumb .
For important data that needs to be self-repairing , the only real solution is to include the parity information alongside the data , in a portable format.Personally I 've been quite surprised over the years that almost no modern filesystem in use anywhere has the kind of parity information built-in that ZFS has .
So much data could be easily recovered if filesystems were robust enough to handle simple things like bit errors or unreadable sectors .
Why should my 2GB file be ruined just because a single 512-bit sector became unreadable in a critical location in the file ?
It 's idiotic to need to have multiple complete duplicate copies of every single type of data we ever store in order to be sure we can recover from simple forms of data degredation like bit rot .</tokentext>
<sentencetext>Everyone who is even considering spouting the letters "ZFS" in response to this article should really just STFU.
Seriously.Allow me to explain.
Yes, ZFS is a very nice and very robust filesystem with great data protection and recovery features (although still subject to failure and data loss under some conditions, don't even try to deny it, it isn't perfect).But all the ZFS zealots need to stop and think about all the other filesystems currently in use, and realize that ZFS will NEVER replace most of those filesystems in most situations.
There needs to be a solution to bit rot that does not entail switching the entire world to a new filesystem.
NTFS, FAT12/FAT16/FAT32, HFS+, Ext3/4, ReiserFS, UDF, all of these and more will continue to be in use in millions of computers and on billions of devices using removable or embedded media for many decades, and more filesystems will be invented in the future.
You will never see a digital camera with built-in ZFS support, for instance.
ZFS is totally unfeasible for that kind of application.
It takes far too much processing power and memory to run ZFS for it to ever become anything resembling a universal filesystem.
Filesystems like ZFS are not a panacea, there needs to be a solution (like PAR2) that is portable between ALL different filesystems that are now or ever will be in use.Basically, things like the PAR2 parity archiving format already solve this type of problem, but in a way that is too limited.
It needs to be better integrated into the filesystem or operating system level so that it works automatically on all kinds of different filesystems.
Right now, the parity information is something that you have to manually create with a separate software tool like Parchive when you are interested in "archiving" something.
This kind of functionality needs to be somehow tacked on to the file storage process so that the parity data is created, updated and continuously checked by whatever is reading and writing to the file, no matter where that file is stored.
It needs to be part of the file itself, so that when a file is copied or moved, the parity data is not lost.As usual, to any particular problem there is an answer that is straightforward, simple and WRONG (I forget what smart person said that first).
For this problem, ZFS is not the ultimate answer.
It's great for specific situations like file servers, but that's about it.
As soon as you remove a file from that file server, poof, you lose access to that parity information.
That's just dumb.
For important data that needs to be self-repairing, the only real solution is to include the parity information alongside the data, in a portable format.Personally I've been quite surprised over the years that almost no modern filesystem in use anywhere has the kind of parity information built-in that ZFS has.
So much data could be easily recovered if filesystems were robust enough to handle simple things like bit errors or unreadable sectors.
Why should my 2GB file be ruined just because a single 512-bit sector became unreadable in a critical location in the file?
It's idiotic to need to have multiple complete duplicate copies of every single type of data we ever store in order to be sure we can recover from simple forms of data degredation like bit rot.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327426</id>
	<title>All or nothing?</title>
	<author>formfeed</author>
	<datestamp>1259958060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Most information we save is not about "all or nothing":
<p> A software program eventually crashes or stops because of a wrong bit. But a book is still readable if some letters are unclear or a word is missing. - We shouldn't store information that isn't "all or nothing" in an all-or-nothing format.
</p><p>
I remember a storage example from doing neural networks: <br>
An algorithm writes numbers to a matrix. A vector holds "1.2;4.8;0.9". If you add "5", the same memory now holds "1;5;1;5".
It could, of course, be used the other way around as well: the loss of one memory cell degrades the quality of the whole, but the whole information is still accessible. </p><p>
By now, computers would be fast enough to implement a kind of "lossy digital" holographic archival file format. For scans or picture archives this would be great. And it doesn't prevent you from adding additional checksums or correction blocks.</p></htmltext>
<tokenext>Most information we save is not about " all or nothing " : A software program eventually crashes or stops because of a wrong bit .
But a book is still readable if some letters are unclear or a word is missing .
- We should n't store information that is n't " all or nothing " in an all-or-nothing format .
I remember a storage example from doing neural networks : An algorithm writes numbers to a matrix .
A vector holds " 1.2 ; 4.8 ; 0.9 " .
If you add " 5 " , the same memory now holds " 1 ; 5 ; 1 ; 5 " It of course could be used the other way around as well : Loss of one memory cell degrades quality of the whole , but the whole information is still accessible .
By now , computer would be fast enough to implement a kind of " lossy digital " holographic archival file format .
For scans or picture archives this would be great .
And it does n't prevent you from adding additional checksums or correction blocks .</tokentext>
<sentencetext>Most information we save is not about "all or nothing":
 A software program eventually crashes or stops because of a wrong bit.
But a book is still readable if some letters are unclear or a word is missing.
- We shouldn't store information that isn't "all or nothing" in an all-or-nothing format.
I remember a storage example from doing neural networks: 
An algorithm writes numbers to a matrix.
A vector holds "1.2;4.8;0.9".
If you add "5", the same memory now holds "1;5;1;5"
It of course could be used the other way around as well: Loss of one memory cell degrades quality of the whole, but the whole information is still accessible.
By now, computer would be fast enough to implement a kind of "lossy digital" holographic archival file format.
For scans or picture archives this would be great.
And it doesn't prevent you from adding additional checksums or correction blocks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</id>
	<title>To much reinvention</title>
	<author>DarkOx</author>
	<datestamp>1259934960000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>If this type of thing is implemented at the file level, every application is going to have to do its own thing.  That means too many implementations, most of which won't be very good or well tested.  It also means application developers will be busy slogging through error-correction data in their files rather than the data they actually wanted to persist.  I think the article offers a number of good ideas, but it would be better to do most of them at the filesystem layer and perhaps some at the storage layer.<br>
&nbsp; &nbsp; Also, if we can present the same logical file to the application on read even if every 9th byte on disk is parity, that is a plus, because it means legacy apps can get the enhanced protection as well.</p></htmltext>
<tokenext>If this type of thing is implemented at the file level every application is going to have to do its own thing .
That means to many implementations most of which wont be very good or well tested .
It also means applications developers will have to be busy slogging though error correction data in their files rather than the data they actually wanted to persist for their application .
I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer .
    Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well .</tokentext>
<sentencetext>If this type of thing is implemented at the file level every application is going to have to do its own thing.
That means to many implementations most of which wont be very good or well tested.
It also means applications developers will have to be busy slogging though error correction data in their files rather than the data they actually wanted to persist for their application.
I think the article offers a number of good ideas but it would be better to do most of them at the filesystem and perhaps some at the storage layer.
    Also if we can present the same logical file when read to the application even if every 9th byte is parity on the disk that is a plus because it means legacy apps can get the enhanced protection as well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323446</id>
	<title>how many levels do you need?</title>
	<author>pydev</author>
	<datestamp>1259940660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There are several levels of error correction at the disk level, plus at the RAID level, plus possibly at the file system level.  And the whole thing has been wrapped up so well that users don't have to worry about it.  If users are still getting bit errors, someone hasn't been paying attention to their SMART, RAID, and file system logs.</p><p>No amount of error correction will protect you from everything; sooner or later, disks go bad, and you have to replace them before there are too many errors for the system to recover.</p></htmltext>
<tokenext>There are several levels of error correction at the disk level , plus at the RAID level , plus possibly at the file system level .
And the whole thing has been wrapped up so well that users do n't have to worry about .
If users are still getting bit errors , someone has n't been paying attention to their SMART , RAID , and file system logs.No amount of error correction will protect you from that ; sooner or later , disks go bad , and you have to replace them before there are too many errors for the system to recover .</tokentext>
<sentencetext>There are several levels of error correction at the disk level, plus at the RAID level, plus possibly at the file system level.
And the whole thing has been wrapped up so well that users don't have to worry about it.
If users are still getting bit errors, someone hasn't been paying attention to their SMART, RAID, and file system logs. No amount of error correction will protect you from that; sooner or later, disks go bad, and you have to replace them before there are too many errors for the system to recover.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322924</id>
	<title>Re:To much reinvention</title>
	<author>Rockoon</author>
	<datestamp>1259937000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What we are talking about here is how to add more redundancy on the software level.. but honestly..<br>
<br><nobr> <wbr></nobr>...why not do it at the hardware level where there is already redundancy, and can't be fucked up by an additional error vector?</htmltext>
<tokentext>What we are talking about here is how to add more redundancy on the software level.. but honestly. . ...why not do it at the hardware level where there is already redundancy , and ca n't be fucked up by an additional error vector ?</tokentext>
<sentencetext>What we are talking about here is how to add more redundancy on the software level.. but honestly..
 ...why not do it at the hardware level where there is already redundancy, and can't be fucked up by an additional error vector?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322938</id>
	<title>Parchive: Parity Archive Volume Set</title>
	<author>khundeck</author>
	<datestamp>1259937120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext>Parchive: Parity Archive Volume Set<br><br>It basically allows you to create an archive that's selectively larger, but contains an amount of parity such that you can have XX% corruption and still 'unzip.'<br><br>"The original idea behind this project was to provide a tool to apply the data-recovery capability concepts of RAID-like systems to the posting and recovery of multi-part archives on Usenet. We accomplished that goal." [http://parchive.sourceforge.net/]<br><br>KPH</htmltext>
<tokentext>Parchive : Parity Archive Volume Set . It basically allows you to create an archive that 's selectively larger , but contains an amount of parity such that you can have XX % corruption and still 'unzip .
' " The original idea behind this project was to provide a tool to apply the data-recovery capability concepts of RAID-like systems to the posting and recovery of multi-part archives on Usenet .
We accomplished that goal .
" [ http : //parchive.sourceforge.net/ ] KPH</tokentext>
<sentencetext>Parchive: Parity Archive Volume Set. It basically allows you to create an archive that's selectively larger, but contains an amount of parity such that you can have XX% corruption and still 'unzip.
'"The original idea behind this project was to provide a tool to apply the data-recovery capability concepts of RAID-like systems to the posting and recovery of multi-part archives on Usenet.
We accomplished that goal.
" [http://parchive.sourceforge.net/]KPH</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324248</id>
	<title>Re:It's that computer called the brain.</title>
	<author>Anonymous</author>
	<datestamp>1259944800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>Mini DV video tape, when run in SD, uses no compression on the audio, and the video is only lightly compressed, using a DCT-based codec, with no delta coding. In practical terms, what this means is that one corrupted frame of video doesn't cascade into future frames. If my camcorder gets a wrinkle in the tape, it will affect the frames recorded on the wrinkle, and no others. It also makes a best-guess effort to reconstruct the frame. This task may not be impossible with more dense codecs that do use delta coding and motion compensation (MPEG, DiVX, etc), but it is certainly made far more difficult.</i></p><p>Actually all you need to do in a delta-compressed case is have some sort of full-frame resync at reasonable intervals (say 1 second of video) - this gives nearly the full benefits of the compression and means that no glitch can cascade more than 1 second past the end of the glitch.</p></htmltext>
<tokentext>Mini DV video tape , when run in SD , uses no compression on the audio , and the video is only lightly compressed , using a DCT-based codec , with no delta coding .
In practical terms , what this means is that one corrupted frame of video does n't cascade into future frames .
If my camcorder gets a wrinkle in the tape , it will affect the frames recorded on the wrinkle , and no others .
It also makes a best-guess effort to reconstruct the frame .
This task may not be impossible with more dense codecs that do use delta coding and motion compensation ( MPEG , DiVX , etc ) , but it is certainly made far more difficult . Actually all you need to do in a delta-compressed case is have some sort of full-frame resync at reasonable intervals ( say 1 second of video ) - this gives nearly the full benefits of the compression and means that no glitch can cascade more than 1 second past the end of the glitch .</tokentext>
<sentencetext>Mini DV video tape, when run in SD, uses no compression on the audio, and the video is only lightly compressed, using a DCT-based codec, with no delta coding.
In practical terms, what this means is that one corrupted frame of video doesn't cascade into future frames.
If my camcorder gets a wrinkle in the tape, it will affect the frames recorded on the wrinkle, and no others.
It also makes a best-guess effort to reconstruct the frame.
This task may not be impossible with more dense codecs that do use delta coding and motion compensation (MPEG, DiVX, etc), but it is certainly made far more difficult. Actually all you need to do in a delta-compressed case is have some sort of full-frame resync at reasonable intervals (say 1 second of video) - this gives nearly the full benefits of the compression and means that no glitch can cascade more than 1 second past the end of the glitch.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323076</id>
	<title>Re:To much reinvention</title>
	<author>Interoperable</author>
	<datestamp>1259938320000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I agree that filesystem level error correction is good idea. Having the option to specify ECC options for a given file or folder would be great functionality to have. The idea presented in this article, however, is that certain compressed formats don't need ECC for the entire file. Instead, as long as the headers are intact, a few bits here or there will result in only some distortion; not a big deal if it's just vacation photos/movies.</p><p>By only having ECC in the headers, you would save a good deal of storage space and processing time. It wouldn't need to be supported in every application either, just the codecs. Individual codecs could include it fairly easily as they release new versions, which wouldn't be backward compatible anyway so you don't introduce a new problem. I think it's a good idea, it would keep media readable with very little overhead, just a few odd pixels during playback even in a corrupted file.</p></htmltext>
<tokentext>I agree that filesystem level error correction is a good idea .
Having the option to specify ECC options for a given file or folder would be great functionality to have .
The idea presented in this article , however , is that certain compressed formats do n't need ECC for the entire file .
Instead , as long as the headers are intact , a few bits here or there will result in only some distortion ; not a big deal if it 's just vacation photos/movies . By only having ECC in the headers , you would save a good deal of storage space and processing time .
It would n't need to be supported in every application either , just the codecs .
Individual codecs could include it fairly easily as they release new versions , which would n't be backward compatible anyway so you do n't introduce a new problem .
I think it 's a good idea , it would keep media readable with very little overhead , just a few odd pixels during playback even in a corrupted file .</tokentext>
<sentencetext>I agree that filesystem level error correction is a good idea.
Having the option to specify ECC options for a given file or folder would be great functionality to have.
The idea presented in this article, however, is that certain compressed formats don't need ECC for the entire file.
Instead, as long as the headers are intact, a few bits here or there will result in only some distortion; not a big deal if it's just vacation photos/movies. By only having ECC in the headers, you would save a good deal of storage space and processing time.
It wouldn't need to be supported in every application either, just the codecs.
Individual codecs could include it fairly easily as they release new versions, which wouldn't be backward compatible anyway so you don't introduce a new problem.
I think it's a good idea, it would keep media readable with very little overhead, just a few odd pixels during playback even in a corrupted file.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324936</id>
	<title>Re:What about the "block errors"?</title>
	<author>glennpratt</author>
	<datestamp>1259947860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>RAID 0 does not offer "block redundancy".</p></div><p>Yes, that's why it's called RAID 0; it isn't true RAID, but depends on similar logic.  I don't know why you brought it up since the GP didn't mention it.</p>
	</htmltext>
<tokentext>RAID 0 does not offer " block redundancy " . Yes , that 's why it 's called RAID 0 ; it is n't true RAID , but depends on similar logic .
I do n't know why you brought it up since the GP did n't mention it .</tokentext>
<sentencetext>RAID 0 does not offer "block redundancy". Yes, that's why it's called RAID 0; it isn't true RAID, but depends on similar logic.
I don't know why you brought it up since the GP didn't mention it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322986</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325128</id>
	<title>Re:It's that computer called the brain.</title>
	<author>selven</author>
	<datestamp>1259948580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That is feasible for simple formats like BMP and ASCII, but the problem is that more efficient formats with data compression are common, and in these formats a change of one bit can change much more than one pixel for 33 milliseconds.</p></htmltext>
<tokentext>That is feasible for simple formats like BMP and ASCII , but the problem is that more efficient formats with data compression are common , and in these formats a change of one bit can change much more than one pixel for 33 milliseconds .</tokentext>
<sentencetext>That is feasible for simple formats like BMP and ASCII, but the problem is that more efficient formats with data compression are common, and in these formats a change of one bit can change much more than one pixel for 33 milliseconds.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30329320</id>
	<title>Re:To much reinvention</title>
	<author>hazem</author>
	<datestamp>1259923140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I can see how you're restoring byte 2 when you know it's invalid.  But how do you know that in the first place.  You XOR your new stuff and get a different parity byte and that tells you that you have a problem.  But can it identify which one is the problem?  Or is there a risk of restoring the wrong byte to achieve a correct parity byte and ending up with 2 incorrect bytes?</p></htmltext>
<tokentext>I can see how you 're restoring byte 2 when you know it 's invalid .
But how do you know that in the first place ?
You XOR your new stuff and get a different parity byte and that tells you that you have a problem .
But can it identify which one is the problem ?
Or is there a risk of restoring the wrong byte to achieve a correct parity byte and ending up with 2 incorrect bytes ?</tokentext>
<sentencetext>I can see how you're restoring byte 2 when you know it's invalid.
But how do you know that in the first place?
You XOR your new stuff and get a different parity byte and that tells you that you have a problem.
But can it identify which one is the problem?
Or is there a risk of restoring the wrong byte to achieve a correct parity byte and ending up with 2 incorrect bytes?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325604</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323608</id>
	<title>Copy and Distribute, don't "preserve" like analog</title>
	<author>Anonymous</author>
	<datestamp>1259941680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Injecting ECC into the stored streams may help a bit, but please stop thinking about methods that preserve the content of single streams.</p><p>Just duplicate the streams to multiple physical media, it's cheap, easy and can remedy many more situations.</p></htmltext>
<tokentext>Injecting ECC into the stored streams may help a bit , but please stop thinking about methods that preserve the content of single streams . Just duplicate the streams to multiple physical media , it 's cheap , easy and can remedy many more situations .</tokentext>
<sentencetext>Injecting ECC into the stored streams may help a bit, but please stop thinking about methods that preserve the content of single streams. Just duplicate the streams to multiple physical media, it's cheap, easy and can remedy many more situations.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323808</id>
	<title>Re:Solution:</title>
	<author>Anonymous</author>
	<datestamp>1259942640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Compression + parity bits can probably survive more data corruption than your retarded suggestion (check the size of uncompressed video files) and do it using less storage.</p></htmltext>
<tokentext>Compression + parity bits can probably survive more data corruption than your retarded suggestion ( check the size of uncompressed video files ) and do it using less storage .</tokentext>
<sentencetext>Compression + parity bits can probably survive more data corruption than your retarded suggestion (check the size of uncompressed video files) and do it using less storage.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322716</id>
	<title>Or just use PAR for your archives</title>
	<author>syntap</author>
	<datestamp>1259935140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Done.  +1 to the poster who said there is some round transportation implement being reinvented here.</p></htmltext>
<tokentext>Done .
+ 1 to the poster who said there is some round transportation implement being reinvented here .</tokentext>
<sentencetext>Done.
+1 to the poster who said there is some round transportation implement being reinvented here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323880</id>
	<title>Re:To much reinvention</title>
	<author>foniksonik</author>
	<datestamp>1259943000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How about an archive format, essentially a zip file, which contains additional headers for all of its contents? Something like a manifest would work well. It could easily be an XML file with the header information and other meta-data about the contents of the archive. This way you get a good compromise between having to store an entire filesystem and the overhead of putting this information in each file.</p><p>Add another layer by using a striped data format with parity and you have the ability to reconstruct any of the data as well.</p><p>So your archive would grow in size maybe 20% but would have a vastly improved life expectancy.</p></htmltext>
<tokentext>How about an archive format , essentially a zip file , which contains additional headers for all of its contents ?
Something like a manifest would work well .
It could easily be an XML file with the header information and other meta-data about the contents of the archive .
This way you get a good compromise between having to store an entire filesystem and the overhead of putting this information in each file . Add another layer by using a striped data format with parity and you have the ability to reconstruct any of the data as well . So your archive would grow in size maybe 20 % but would have a vastly improved life expectancy .</tokentext>
<sentencetext>How about an archive format, essentially a zip file, which contains additional headers for all of its contents?
Something like a manifest would work well.
It could easily be an XML file with the header information and other meta-data about the contents of the archive.
This way you get a good compromise between having to store an entire filesystem and the overhead of putting this information in each file. Add another layer by using a striped data format with parity and you have the ability to reconstruct any of the data as well. So your archive would grow in size maybe 20% but would have a vastly improved life expectancy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323552</id>
	<title>Re:To much reinvention</title>
	<author>PJ6</author>
	<datestamp>1259941380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>ECC would just be metadata in its own section that applications would be free to ignore, and it wouldn't be terribly difficult to implement. Allowing multiple implementations in one file for cases when one does not recognize another would be trivial. If you look at it from the point of view of security, it's a good idea: whatever the FS does to keep the data from corrupting may fail. Security through layers.</htmltext>
<tokentext>ECC would just be metadata in its own section that applications would be free to ignore , and it would n't be terribly difficult to implement .
Allowing multiple implementations in one file for cases when one does not recognize another would be trivial .
If you look at it from the point of view of security , it 's a good idea : whatever the FS does to keep the data from corrupting may fail .
Security through layers .</tokentext>
<sentencetext>ECC would just be metadata in its own section that applications would be free to ignore, and it wouldn't be terribly difficult to implement.
Allowing multiple implementations in one file for cases when one does not recognize another would be trivial.
If you look at it from the point of view of security, it's a good idea: whatever the FS does to keep the data from corrupting may fail.
Security through layers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324548
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323670
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324324
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323368
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322704
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323680
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325004
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323526
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323104
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323014
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323792
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30328482
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323068
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323362
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325374
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322734
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30329320
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325604
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324280
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30328512
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324202
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327124
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322792
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325984
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30358254
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30333382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325346
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323118
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30335974
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325160
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322740
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323276
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323758
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323186
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323932
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30332080
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325622
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323442
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324936
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322986
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322792
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324238
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323700
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327472
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30326028
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323326
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324248
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_04_0329231_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327744
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322924
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323026
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322770
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322976
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323068
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30328482
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322756
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322792
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322986
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324936
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323082
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322704
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323368
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322972
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323118
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325346
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324066
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323238
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323588
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322734
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325374
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322960
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323808
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323700
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30332080
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324078
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323442
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325622
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322938
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322692
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323076
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30326028
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322924
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327744
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30328512
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322740
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323680
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324280
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325604
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30329320
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324548
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323880
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325984
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323552
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324216
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322724
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30333382
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323014
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323388
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30358254
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324212
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324248
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323666
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325004
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324202
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30325160
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323932
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324238
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323670
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323276
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323186
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323326
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323792
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322742
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322968
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30324324
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322872
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323758
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323362
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30335974
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323704
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_04_0329231.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30322728
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327472
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30327124
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323104
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_04_0329231.30323526
</commentlist>
</conversation>
