<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_02_26_1943241</id>
	<title>Exploring Advanced Format Hard Drive Technology</title>
	<author>ScuttleMonkey</author>
	<datestamp>1267177740000</datestamp>
	<htmltext>MojoKid writes <i>"Hard drive capacities are sometimes broken down by the number of platters and the size of each. The first 1TB drives, for example, used five 200GB platters; current-generation 1TB drives use two 500GB platters. These values, however, only refer to the accessible storage capacity, not the total size of the platter itself. Invisible to the end-user, additional capacity is used to store positional information and for ECC.  The latest <a href="http://hothardware.com/Articles/WDs-1TB-Caviar-Green-w-Advanced-Format-Windows-XP-Users-Pay-Attention/">Advanced Format hard drive technology changes a hard drive's sector size</a> from 512 bytes to 4096 bytes. This allows the ECC data to be stored more efficiently. Advanced Format drives emulate a 512 byte sector size, to keep backwards compatibility intact, by mapping eight logical 512 byte sectors to a single physical sector. Unfortunately, this creates a problem for Windows XP users.  The good news is, Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs."</i></htmltext>
<tokentext>MojoKid writes " Hard drive capacities are sometimes broken down by the number of platters and the size of each .
The first 1TB drives , for example , used five 200GB platters ; current-generation 1TB drives use two 500GB platters .
These values , however , only refer to the accessible storage capacity , not the total size of the platter itself .
Invisible to the end-user , additional capacity is used to store positional information and for ECC .
The latest Advanced Format hard drive technology changes a hard drive 's sector size from 512 bytes to 4096 bytes .
This allows the ECC data to be stored more efficiently .
Advanced Format drives emulate a 512 byte sector size , to keep backwards compatibility intact , by mapping eight logical 512 byte sectors to a single physical sector .
Unfortunately , this creates a problem for Windows XP users .
The good news is , Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs .
"</tokentext>
<sentencetext>MojoKid writes "Hard drive capacities are sometimes broken down by the number of platters and the size of each.
The first 1TB drives, for example, used five 200GB platters; current-generation 1TB drives use two 500GB platters.
These values, however, only refer to the accessible storage capacity, not the total size of the platter itself.
Invisible to the end-user, additional capacity is used to store positional information and for ECC.
The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096 bytes.
This allows the ECC data to be stored more efficiently.
Advanced Format drives emulate a 512 byte sector size, to keep backwards compatibility intact, by mapping eight logical 512 byte sectors to a single physical sector.
Unfortunately, this creates a problem for Windows XP users.
The good news is, Western Digital has already solved the problem and HotHardware offers some insight into the technology and how it performs.
"</sentencetext>
</article>
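<!-- A minimal Python sketch (illustrative, not from the article) of the 512e emulation the summary describes: eight logical 512-byte sectors map onto each 4096-byte physical sector, assuming zero alignment offset.

LOGICAL, PHYSICAL = 512, 4096
RATIO = PHYSICAL // LOGICAL  # 8 logical sectors per physical sector

def physical_location(lba):
    """Return (physical sector index, byte offset) for a logical 512-byte LBA."""
    return lba // RATIO, (lba % RATIO) * LOGICAL

for lba in (0, 7, 8, 63):
    print(lba, physical_location(lba))
# A write covering fewer than all 8 logical sectors of a physical sector
# forces the drive to read, modify, and rewrite the whole 4 KB sector.
-->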
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291628</id>
	<title>Re:Large sector size good?</title>
	<author>WrongSizeGlass</author>
	<datestamp>1267184400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>If you read the article carefully<nobr> <wbr></nobr>...</p></div><p>Well, if you read the article <i>very</i> carefully you'll note that it lists the WD AF drive as 5400 RPM. If true then they'll really see some performance gains from a 7200 RPM version. If it's just <i>another</i> typo/mistake/ooopsy then we should tag this article as "needs editor".</p></div>
	</htmltext>
<tokentext>If you read the article carefully ...
Well , if you read the article very carefully you 'll note that it lists the WD AF drive as 5400 RPM .
If true then they 'll really see some performance gains from a 7200 RPM version .
If it 's just another typo/mistake/ooopsy then we should tag this article as " needs editor " .</tokentext>
<sentencetext>If you read the article carefully ...
Well, if you read the article very carefully you'll note that it lists the WD AF drive as 5400 RPM.
If true then they'll really see some performance gains from a 7200 RPM version.
If it's just another typo/mistake/ooopsy then we should tag this article as "needs editor".
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291286</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292660</id>
	<title>Re:Large sector size good?</title>
	<author>Darinbob</author>
	<datestamp>1267190100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You can have file systems that don't use up a full sector for small files. Or you do what the article mentioned and have 8 effective blocks within one physical block.<br><br>On the other hand, with your logic, 512 byte sectors are too big too, because I have lots of files that are much smaller than that...</htmltext>
<tokentext>You can have file systems that do n't use up a full sector for small files .
Or you do what the article mentioned and have 8 effective blocks within one physical block .
On the other hand , with your logic , 512 byte sectors are too big too , because I have lots of files that are much smaller than that ...</tokentext>
<sentencetext>You can have file systems that don't use up a full sector for small files.
Or you do what the article mentioned and have 8 effective blocks within one physical block.
On the other hand, with your logic, 512 byte sectors are too big too, because I have lots of files that are much smaller than that...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291256</id>
	<title>Defrag</title>
	<author>Anonymous</author>
	<datestamp>1267182060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Wait! Will this shorten or lengthen defrag times?

Do file sizes under 4096K still exist?</htmltext>
<tokentext>Wait !
Will this shorten or lengthen defrag times ?
Do file sizes under 4096K still exist ?</tokentext>
<sentencetext>Wait!
Will this shorten or lengthen defrag times?
Do file sizes under 4096K still exist?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291462</id>
	<title>Re:1 byte = 10 bits?</title>
	<author>noidentity</author>
	<datestamp>1267183320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>1 byte = <a href="http://en.wikipedia.org/wiki/Byte#Size" title="wikipedia.org">N bits</a> [wikipedia.org], not necessarily 8. You're probably thinking of an <a href="http://en.wikipedia.org/wiki/Octet" title="wikipedia.org">octet</a> [wikipedia.org], which is always 8 bits (except in universes where the <i>oct</i> prefix doesn't mean 8).</htmltext>
<tokentext>1 byte = N bits [ wikipedia.org ] , not necessarily 8 .
You 're probably thinking of an octet [ wikipedia.org ] , which is always 8 bits ( except in universes where the oct prefix does n't mean 8 ) .</tokentext>
<sentencetext>1 byte = N bits [wikipedia.org], not necessarily 8.
You're probably thinking of an octet [wikipedia.org], which is always 8 bits (except in universes where the oct prefix doesn't mean 8).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291232</id>
	<title>512x4=4MB??</title>
	<author>Anonymous</author>
	<datestamp>1267181940000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>You mean  4096 bytes, not 4096k, right? Last time I checked, eight 512 byte sectors is considerably smaller than 4MB.</p></htmltext>
<tokentext>You mean 4096 bytes , not 4096k , right ?
Last time I checked , eight 512 byte sectors is considerably smaller than 4MB .</tokentext>
<sentencetext>You mean  4096 bytes, not 4096k, right?
Last time I checked, eight 512 byte sectors is considerably smaller than 4MB.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291608</id>
	<title>Re:Large sector size good?</title>
	<author>Avtuunaaja</author>
	<datestamp>1267184220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>You can fix this on the filesystem level by using packed files. For the actual disk, tracking 512-byte sectors when most operating systems actually always read them in groups of 8 is just insane. (If you wish to access files by mapping them to memory, and you do, you must do so at the granularity of the virtual memory page size. Which, on all architectures worth talking about, is 4K.)</htmltext>
<tokentext>You can fix this on the filesystem level by using packed files .
For the actual disk , tracking 512-byte sectors when most operating systems actually always read them in groups of 8 is just insane .
( If you wish to access files by mapping them to memory , and you do , you must do so at the granularity of the virtual memory page size .
Which , on all architectures worth talking about , is 4K .
)</tokentext>
<sentencetext>You can fix this on the filesystem level by using packed files.
For the actual disk, tracking 512-byte sectors when most operating systems actually always read them in groups of 8 is just insane.
(If you wish to access files by mapping them to memory, and you do, you must do so at the granularity of the virtual memory page size.
Which, on all architectures worth talking about, is 4K.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
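<!-- A small Python illustration of the page-granularity point above: mmap offsets must be multiples of the allocation granularity (the 4 KB page size on typical x86 Linux). The file name is a hypothetical scratch file.

import mmap, os

print(mmap.ALLOCATIONGRANULARITY)      # typically 4096 on x86 Linux
print(os.sysconf("SC_PAGE_SIZE"))      # the virtual memory page size

with open("example.bin", "wb") as f:   # hypothetical scratch file
    f.write(b"\0" * 8192)
with open("example.bin", "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096, offset=4096)  # offset must be page-aligned
    m.close()
-->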
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340</id>
	<title>1 byte = 10 bits?</title>
	<author>djlemma</author>
	<datestamp>1267182600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Stupid question- are bytes really 10 bits when talking hard drive capacity?<br> <br>

Is that some sort of checksum going on, or did the way computers store numbers change while I wasn't looking?</htmltext>
<tokentext>Stupid question- are bytes really 10 bits when talking hard drive capacity ?
Is that some sort of checksum going on , or did the way computers store numbers change while I was n't looking ?</tokentext>
<sentencetext>Stupid question- are bytes really 10 bits when talking hard drive capacity?
Is that some sort of checksum going on, or did the way computers store numbers change while I wasn't looking?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295780</id>
	<title>XFS</title>
	<author>krischik</author>
	<datestamp>1267268640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually it makes me wonder if the virtual 512 sector stuff can be switched off. XFS, for example, handles larger sector sizes gracefully.</p></htmltext>
<tokentext>Actually it makes me wonder if the virtual 512 sector stuff can be switched off .
XFS , for example , handles larger sector sizes gracefully .</tokentext>
<sentencetext>Actually it makes me wonder if the virtual 512 sector stuff can be switched off.
XFS, for example, handles larger sector sizes gracefully.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304</parent>
</comment>
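<!-- On the XFS question above: mkfs.xfs accepts an explicit sector size, so a filesystem can be built with native 4096-byte sectors. A hedged Python sketch; the device name is hypothetical and the command is only printed here.

import subprocess

cmd = ["mkfs.xfs", "-s", "size=4096", "/dev/sdX1"]  # hypothetical device
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment only on a real, empty device
-->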
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295196</id>
	<title>Re:Speed is irrelevant</title>
	<author>blahplusplus</author>
	<datestamp>1267213860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is what raid, mirroring and script backups are for.  If you can't write a batch file to copy shit to a USB/Firewire drive, or simply have another cheap blank 2TB disk in the same PC to copy to, you are failing at backup.</p><p>Hard drives are so cheap now that you should merely have massive redundancy, also flash USB sticks are good for one time files like documents and smaller stuff you want to keep.</p></htmltext>
<tokentext>This is what raid , mirroring and script backups are for .
If you ca n't write a batch file to copy shit to a USB/Firewire drive , or simply have another cheap blank 2TB disk in the same PC to copy to , you are failing at backup .
Hard drives are so cheap now that you should merely have massive redundancy , also flash USB sticks are good for one time files like documents and smaller stuff you want to keep .</tokentext>
<sentencetext>This is what raid, mirroring and script backups are for.
If you can't write a batch file to copy shit to a USB/Firewire drive, or simply have another cheap blank 2TB disk in the same PC to copy to, you are failing at backup.
Hard drives are so cheap now that you should merely have massive redundancy, also flash USB sticks are good for one time files like documents and smaller stuff you want to keep.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292084</id>
	<title>Re:What About Linux Systems?</title>
	<author>hawkingradiation</author>
	<datestamp>1267186920000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>Linux has had 4096 block size in the kernel for ages. See this <a href="http://idevelopment.info/data/Unix/Linux/LINUX_PartitioningandFormattingSecondHardDrive_ext3.shtml" title="idevelopment.info" rel="nofollow">article</a> [idevelopment.info]. The issue being, as I recall somebody saying, is that fdisk cannot properly do this. So use parted and you will be ok. ext3 and jfs and I suppose xfs and a whole bunch of others support the 4096 block size as well. BTW, who "tackled the XP issue pretty quick"? Was it Microsoft, or was it the hard drive makers? AFAIK a few hard drive manufacturers are emulating a 512 block size so it is not a complete fix.</htmltext>
<tokentext>Linux has had 4096 block size in the kernel for ages .
See this article [ idevelopment.info ] .
The issue being , as I recall somebody saying , is that fdisk can not properly do this .
So use parted and you will be ok. ext3 and jfs and I suppose xfs and a whole bunch of others support the 4096 block size as well .
BTW , who " tackled the XP issue pretty quick " ?
Was it Microsoft , or was it the hard drive makers ?
AFAIK a few hard drive manufacturers are emulating a 512 block size so it is not a complete fix .</tokentext>
<sentencetext>Linux has had 4096 block size in the kernel for ages.
See this article [idevelopment.info].
The issue being, as I recall somebody saying, is that fdisk cannot properly do this.
So use parted and you will be ok. ext3 and jfs and I suppose xfs and a whole bunch of others support the 4096 block size as well.
BTW, who "tackled the XP issue pretty quick"?
Was it Microsoft, or was it the hard drive makers?
AFAIK a few hard drive manufacturers are emulating a 512 block size so it is not a complete fix.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304</parent>
</comment>
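<!-- A Python sketch of the alignment point above: on Linux, sysfs reports each partition's starting offset in 512-byte units, so 4 KB alignment can be checked directly. Device and partition names are examples.

def aligned_4k(disk="sda", part="sda1"):
    with open(f"/sys/block/{disk}/{part}/start") as f:
        start_512 = int(f.read())
    return (start_512 * 512) % 4096 == 0

print(aligned_4k())  # True for the common 1 MiB-aligned start at sector 2048
-->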
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292534</id>
	<title>Re:Speed is irrelevant</title>
	<author>russotto</author>
	<datestamp>1267189080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?</p></div></blockquote><p>Yes.  By packing the bits more efficiently, each cylinder will have more capacity, thus requiring fewer cylinders and fewer head movements for any given disk capacity.</p><blockquote><div><p>2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?</p></div></blockquote><p>Probably slightly but not significantly.</p><blockquote><div><p>3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?</p></div></blockquote><p>There should be.</p>
	</htmltext>
<tokentext>1 . Number of Read/Write operations per task : Does the new format result in fewer head movements , therefore less wear on the hardware , thus increasing HD 's life expectancy and MTBF ?
Yes . By packing the bits more efficiently , each cylinder will have more capacity , thus requiring fewer cylinders and fewer head movements for any given disk capacity .
2 . Energy efficiency : Does the new format have lower power consumption , leading to lower operating temperature and better laptop/netbook battery autonomy ?
Probably slightly but not significantly .
3 . Are there differences in sustained read/write performance ?
E.g. is the new format more suitable for video editing than the old one ?
There should be .</tokentext>
<sentencetext>1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?
Yes. By packing the bits more efficiently, each cylinder will have more capacity, thus requiring fewer cylinders and fewer head movements for any given disk capacity.
2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?
Probably slightly but not significantly.
3. Are there differences in sustained read/write performance?
E.g. is the new format more suitable for video editing than the old one?
There should be.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292474</id>
	<title>Re:Speed is irrelevant</title>
	<author>Surt</author>
	<datestamp>1267188780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think the answer is that:</p><p>#1: only an idiot relies on the MTBF statistic as their backup strategy, so speed matters more (and helps you perform your routine backups faster).</p><p>#2: for energy efficiency, you don't buy a big spinning disk for your laptop, you use a solid state device.</p><p>#3: wait, I thought you didn't want them to talk about performance?  This format should indeed be better performing for video editing, however, since you asked.</p></htmltext>
<tokentext>I think the answer is that :
# 1 : only an idiot relies on the MTBF statistic as their backup strategy , so speed matters more ( and helps you perform your routine backups faster ) .
# 2 : for energy efficiency , you do n't buy a big spinning disk for your laptop , you use a solid state device .
# 3 : wait , I thought you did n't want them to talk about performance ?
This format should indeed be better performing for video editing , however , since you asked .</tokentext>
<sentencetext>I think the answer is that:
#1: only an idiot relies on the MTBF statistic as their backup strategy, so speed matters more (and helps you perform your routine backups faster).
#2: for energy efficiency, you don't buy a big spinning disk for your laptop, you use a solid state device.
#3: wait, I thought you didn't want them to talk about performance?
This format should indeed be better performing for video editing, however, since you asked.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295304</id>
	<title>It's not just partitioning</title>
	<author>ICantFindADecentNick</author>
	<datestamp>1267302000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>After the "linux doesn't handle it story" a couple of weeks ago <a href="http://hardware.slashdot.org/story/10/02/14/1541244/Linux-Not-Quite-Ready-For-New-4K-Sector-Drives" title="slashdot.org" rel="nofollow">http://hardware.slashdot.org/story/10/02/14/1541244/Linux-Not-Quite-Ready-For-New-4K-Sector-Drives</a> [slashdot.org] I wondered if the mis-alignment was what was causing my poor postgres performance on the WD Caviar Green. After quite a lot of effot moving things around I didn't actually see any noticeable difference. Now I'm left wondering whether the mis-alignment effect is dwarfed by the effort of reading 3.5K of a 4K block for every random 0.5K block write.

The fact that the disk is lying to the driver is the big deal here.

Does anybody know how to force the linux sd driver to use 4k blocks regardless of what the disk tells it about blocksize?</htmltext>
<tokentext>After the " linux does n't handle it story " a couple of weeks ago http : //hardware.slashdot.org/story/10/02/14/1541244/Linux-Not-Quite-Ready-For-New-4K-Sector-Drives [ slashdot.org ] I wondered if the mis-alignment was what was causing my poor postgres performance on the WD Caviar Green .
After quite a lot of effort moving things around I did n't actually see any noticeable difference .
Now I 'm left wondering whether the mis-alignment effect is dwarfed by the effort of reading 3.5K of a 4K block for every random 0.5K block write .
The fact that the disk is lying to the driver is the big deal here .
Does anybody know how to force the linux sd driver to use 4k blocks regardless of what the disk tells it about blocksize ?</tokentext>
<sentencetext>After the "linux doesn't handle it story" a couple of weeks ago http://hardware.slashdot.org/story/10/02/14/1541244/Linux-Not-Quite-Ready-For-New-4K-Sector-Drives [slashdot.org] I wondered if the mis-alignment was what was causing my poor postgres performance on the WD Caviar Green.
After quite a lot of effort moving things around I didn't actually see any noticeable difference.
Now I'm left wondering whether the mis-alignment effect is dwarfed by the effort of reading 3.5K of a 4K block for every random 0.5K block write.
The fact that the disk is lying to the driver is the big deal here.
Does anybody know how to force the linux sd driver to use 4k blocks regardless of what the disk tells it about blocksize?</sentencetext>
</comment>
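<!-- The read-modify-write cost mentioned above, sketched in Python: a sub-sector or misaligned logical write makes the drive read back the rest of each touched 4 KB physical sector. Numbers are illustrative.

PHYS = 4096

def bytes_read_for_write(offset, length, phys=PHYS):
    """Bytes the drive must read back to service a write of `length` at `offset`."""
    start = (offset // phys) * phys
    end = ((offset + length + phys - 1) // phys) * phys  # round up to a boundary
    return (end - start) - length

print(bytes_read_for_write(0, 512))    # 3584: the 3.5K of a 4K block noted above
print(bytes_read_for_write(0, 4096))   # 0 for an aligned, full-sector write
-->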
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291372</id>
	<title>Re:Large sector size good?</title>
	<author>NFN_NLN</author>
	<datestamp>1267182780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I thought the point was to have a small sector size.  With large sectors, say 4096K, a 1K file will actually take up the full 4096K.  A 4097K file will take up 8194K.  A thousand 1K files will end up taking up 4096000K.  I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.</p></div><p>You had me worried for a while there so I did a quick check.  Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there.  However, when Hollywood perfects it's movie industry down to 512 different possible re-hashes of the same plot they might be able to store a movie with better space efficiency on a 512 byte/sector drive again.</p></div>
	</htmltext>
<tokentext>I thought the point was to have a small sector size .
With large sectors , say 4096K , a 1K file will actually take up the full 4096K .
A 4097K file will take up 8192K .
A thousand 1K files will end up taking up 4096000K .
I understand that with larger HDD 's that this becomes less of an issue , but unless you are dealing with a fewer number of large files , I do n't see how this can be more efficient when the size of every file is rounded up to the next 4096K .
You had me worried for a while there so I did a quick check .
Turns out NONE of my movies or MP3 's are less than 4096 bytes so it looks like I dodged a bullet there .
However , when Hollywood perfects its movie industry down to 512 different possible re-hashes of the same plot they might be able to store a movie with better space efficiency on a 512 byte/sector drive again .</tokentext>
<sentencetext>I thought the point was to have a small sector size.
With large sectors, say 4096K, a 1K file will actually take up the full 4096K.
A 4097K file will take up 8192K.
A thousand 1K files will end up taking up 4096000K.
I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.
You had me worried for a while there so I did a quick check.
Turns out NONE of my movies or MP3's are less than 4096 bytes so it looks like I dodged a bullet there.
However, when Hollywood perfects its movie industry down to 512 different possible re-hashes of the same plot they might be able to store a movie with better space efficiency on a 512 byte/sector drive again.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304</id>
	<title>What About Linux Systems?</title>
	<author>Anonymous</author>
	<datestamp>1267182360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>When this issue came up a few weeks ago there was a problem with XP and with Linux. I see they tackled the XP issue pretty quick but what about Linux?<br> <br>
<a href="http://hardware.slashdot.org/story/10/02/14/1541244/Linux-Not-Quite-Ready-For-New-4K-Sector-Drives" title="slashdot.org">This place</a> [slashdot.org] had something about it.</htmltext>
<tokentext>When this issue came up a few weeks ago there was a problem with XP and with Linux .
I see they tackled the XP issue pretty quick but what about Linux ?
This place [ slashdot.org ] had something about it .</tokentext>
<sentencetext>When this issue came up a few weeks ago there was a problem with XP and with Linux.
I see they tackled the XP issue pretty quick but what about Linux?
This place [slashdot.org] had something about it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291386</id>
	<title>Re:Large sector size good?</title>
	<author>StikyPad</author>
	<datestamp>1267182840000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p><i>If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common <b>files ystems</b>.</i></p><p>Looks like they're not the only ones who miscalculated their block boundary.</p></htmltext>
<tokentext>If you read the article carefully , the new size is only 4K , not 4096K .
The 4K size actually matches very well with most common files ystems .
Looks like they 're not the only ones who miscalculated their block boundary .</tokentext>
<sentencetext>If you read the article carefully, the new size is only 4K, not 4096K.
The 4K size actually matches very well with most common files ystems.
Looks like they're not the only ones who miscalculated their block boundary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291286</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291326</id>
	<title>Re:Large sector size good?</title>
	<author>Joce640k</author>
	<datestamp>1267182480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Most file systems work by clusters, not sectors.</p><p>NTFS partitions use 4k clusters by default so you already have this problem.</p></htmltext>
<tokentext>Most file systems work by clusters , not sectors .
NTFS partitions use 4k clusters by default so you already have this problem .</tokentext>
<sentencetext>Most file systems work by clusters, not sectors.
NTFS partitions use 4k clusters by default so you already have this problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291484</id>
	<title>Re:1 byte = 10 bits?</title>
	<author>dfsmith</author>
	<datestamp>1267183500000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>Depends on the drive.  In recent electrical signalling (Gb ethernet, SATA/SAS, etc.) the 8b10b encoding scheme has been very popular; and is 10 bits to a byte.  The extra bits are for recovering the clock signal.  The HDD has to do the same, but the manufacturers don't have to adhere to any standards inside their case.</p><p>Now, if you're asking the question "how many bytes in a MB?" there is great debate.  (The answer is, and has been from the first RAMAC*, 1,000,000.  However, the binary bus people like to argue otherwise; and Microsoft Windows is one of the protagonists.)</p><p>* Okay, so technically the RAMAC was 5,000,000 words, where a word was 7 bits.</p></htmltext>
<tokentext>Depends on the drive .
In recent electrical signalling ( Gb ethernet , SATA/SAS , etc . ) the 8b10b encoding scheme has been very popular ; and is 10 bits to a byte .
The extra bits are for recovering the clock signal .
The HDD has to do the same , but the manufacturers do n't have to adhere to any standards inside their case .
Now , if you 're asking the question " how many bytes in a MB ? " there is great debate .
( The answer is , and has been from the first RAMAC * , 1,000,000 .
However , the binary bus people like to argue otherwise ; and Microsoft Windows is one of the protagonists . )
* Okay , so technically the RAMAC was 5,000,000 words , where a word was 7 bits .</tokentext>
<sentencetext>Depends on the drive.
In recent electrical signalling (Gb ethernet, SATA/SAS, etc.) the 8b10b encoding scheme has been very popular; and is 10 bits to a byte.
The extra bits are for recovering the clock signal.
The HDD has to do the same, but the manufacturers don't have to adhere to any standards inside their case.
Now, if you're asking the question "how many bytes in a MB?" there is great debate.
(The answer is, and has been from the first RAMAC*, 1,000,000.
However, the binary bus people like to argue otherwise; and Microsoft Windows is one of the protagonists.)
* Okay, so technically the RAMAC was 5,000,000 words, where a word was 7 bits.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340</parent>
</comment>
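<!-- The 8b10b overhead above, in numbers (Python; the SATA II figure is a standard example, not from the comment): each byte travels as a 10-bit symbol, so the payload rate is 8/10 of the line rate.

line_rate_bps = 3_000_000_000          # SATA II signalling: 3 Gbit/s
payload_bps = line_rate_bps * 8 // 10  # 2.4 Gbit/s after 8b10b
print(payload_bps // 8 // 1_000_000)   # 300 MB/s usable, the familiar figure
-->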
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291406</id>
	<title>Re:What About Linux Systems?</title>
	<author>Wesley Felter</author>
	<datestamp>1267183020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Some distro installers do it right and some do it wrong. Give it a few years and I'm sure it will all be sorted out.</p></htmltext>
<tokentext>Some distro installers do it right and some do it wrong .
Give it a few years and I 'm sure it will all be sorted out .</tokentext>
<sentencetext>Some distro installers do it right and some do it wrong.
Give it a few years and I'm sure it will all be sorted out.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294490</id>
	<title>Re:XP users</title>
	<author>MobileTatsu-NJG</author>
	<datestamp>1267204260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>XP users do not need big hard drives to have problems.</p></div><p>Tee hee giggle snort!  So, besides porn, what do Linux and Mac users fill their hard drives with?  Games?</p></div>
	</htmltext>
<tokentext>XP users do not need big hard drives to have problems .
Tee hee giggle snort !
So , besides porn , what do Linux and Mac users fill their hard drives with ?
Games ?</tokentext>
<sentencetext>XP users do not need big hard drives to have problems.
Tee hee giggle snort!
So, besides porn, what do Linux and Mac users fill their hard drives with?
Games?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291262</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292964</id>
	<title>Re:Speed is irrelevant</title>
	<author>jedidiah</author>
	<datestamp>1267191660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed. Regarding HDs, I'd like to see results relevant to:</p><p>You really want to be able to copy your stuff. If your stuff is 2TB, then it makes sense that you would want to copy that 2TB in a timely manner.</p><p>So yeah... speed does matter. Sooner or later you will want that drive to be able to keep up with how big it is.</p></htmltext>
<tokentext>&gt; I ca n't grasp why all ( these specific and most ) benchmarks are so much obsessed with speed .
Regarding HDs , I 'd like to see results relevant to :
You really want to be able to copy your stuff .
If your stuff is 2TB , then it makes sense that you would want to copy that 2TB in a timely manner .
So yeah ... speed does matter .
Sooner or later you will want that drive to be able to keep up with how big it is .</tokentext>
<sentencetext>&gt; I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed.
Regarding HDs, I'd like to see results relevant to:
You really want to be able to copy your stuff.
If your stuff is 2TB, then it makes sense that you would want to copy that 2TB in a timely manner.
So yeah... speed does matter.
Sooner or later you will want that drive to be able to keep up with how big it is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492</parent>
</comment>
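<!-- Rough arithmetic for the point above (Python; the sustained rate is a hypothetical round figure): copying a full 2 TB drive takes hours even at a healthy sequential speed.

capacity = 2 * 10**12          # 2 TB in bytes
rate = 100 * 10**6             # assumed 100 MB/s sustained
print(capacity / rate / 3600)  # about 5.6 hours
-->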
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31293624</id>
	<title>Re:Large sector size good?</title>
	<author>Hurricane78</author>
	<datestamp>1267196040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Also, we are actually talking about 4 kilobyte sectors. TFS refers to it as 4096k, which would be a 4 megabyte sector. (Which is wildly wrong.)</p></div><p>Wanna bet TFS was written by a Verizon employee?<nobr> <wbr></nobr>;)</p></div>
	</htmltext>
<tokentext>Also , we are actually talking about 4 kilobyte sectors .
TFS refers to it as 4096k , which would be a 4 megabyte sector .
( Which is wildly wrong . )
Wan na bet TFS was written by a Verizon employee ? ; )</tokentext>
<sentencetext>Also, we are actually talking about 4 kilobyte sectors.
TFS refers to it as 4096k, which would be a 4 megabyte sector.
(Which is wildly wrong.)
Wanna bet TFS was written by a Verizon employee?
;)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291338</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31296552</id>
	<title>Re:Large sector size good?</title>
	<author>Ant P.</author>
	<datestamp>1267285020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>4KB is the minimum size of a memory page under x86. So yes this is a good thing, because things like disk caching will no longer need to care about fractions of pages being filled.</p><p>And if you're worrying about running out of space for 1KB files on a hard disk that has 250 billion 4K sectors then you've got bigger problems.</p></htmltext>
<tokentext>4KB is the minimum size of a memory page under x86 .
So yes this is a good thing , because things like disk caching will no longer need to care about fractions of pages being filled .
And if you 're worrying about running out of space for 1KB files on a hard disk that has 250 billion 4K sectors then you 've got bigger problems .</tokentext>
<sentencetext>4KB is the minimum size of a memory page under x86.
So yes this is a good thing, because things like disk caching will no longer need to care about fractions of pages being filled.
And if you're worrying about running out of space for 1KB files on a hard disk that has 250 billion 4K sectors then you've got bigger problems.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291226</id>
	<title>640 terabytes</title>
	<author>Anonymous</author>
	<datestamp>1267181940000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>ought to be enough for anyone</p></htmltext>
<tokentext>ought to be enough for anyone</tokentext>
<sentencetext>ought to be enough for anyone</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291262</id>
	<title>XP users</title>
	<author>spaceyhackerlady</author>
	<datestamp>1267182120000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>XP users do not need big hard drives to have problems.

</p><p>...laura</p></htmltext>
<tokentext>XP users do not need big hard drives to have problems .
...laura</tokentext>
<sentencetext>XP users do not need big hard drives to have problems.
...laura</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292434</id>
	<title>Oh noez!</title>
	<author>Kral_Blbec</author>
	<datestamp>1267188540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Unfortunately, this creates a problem for Windows XP users. The good news is, Western Digital has already solved the problem</p></div><p>Is there a particular reason that we should care that a new technology isn't backwards compatible with an obsolete technology? Especially in light that it actually is compatible?</p></div>
	</htmltext>
<tokentext>Unfortunately , this creates a problem for Windows XP users .
The good news is , Western Digital has already solved the problem .
Is there a particular reason that we should care that a new technology is n't backwards compatible with an obsolete technology ?
Especially in light that it actually is compatible ?</tokentext>
<sentencetext>Unfortunately, this creates a problem for Windows XP users.
The good news is, Western Digital has already solved the problem.
Is there a particular reason that we should care that a new technology isn't backwards compatible with an obsolete technology?
Especially in light that it actually is compatible?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</id>
	<title>Large sector size good?</title>
	<author>ArcherB</author>
	<datestamp>1267181880000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>I thought the point was to have a small sector size.  With large sectors, say 4096K, a 1K file will actually take up the full 4096K.  A 4097K file will take up 8192K.  A thousand 1K files will end up taking up 4096000K.  I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.</p></htmltext>
<tokentext>I thought the point was to have a small sector size .
With large sectors , say 4096K , a 1K file will actually take up the full 4096K .
A 4097K file will take up 8192K .
A thousand 1K files will end up taking up 4096000K .
I understand that with larger HDD 's that this becomes less of an issue , but unless you are dealing with a fewer number of large files , I do n't see how this can be more efficient when the size of every file is rounded up to the next 4096K .</tokentext>
<sentencetext>I thought the point was to have a small sector size.
With large sectors, say 4096K, a 1K file will actually take up the full 4096K.
A 4097K file will take up 8192K.
A thousand 1K files will end up taking up 4096000K.
I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.</sentencetext>
</comment>
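<!-- The slack-space concern above, quantified in Python with the corrected unit (4096 bytes, i.e. 4K, not 4096K): each file occupies a whole number of allocation units, so small files round up.

def on_disk_size(file_bytes, unit=4096):
    return ((file_bytes + unit - 1) // unit) * unit  # round up to next unit

print(on_disk_size(1024))         # 4096: a 1K file occupies one 4K unit
print(on_disk_size(4097))         # 8192
print(1000 * on_disk_size(1024))  # 4,096,000 bytes for a thousand 1K files
-->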
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291324</id>
	<title>Typo in Article</title>
	<author>HaeMaker</author>
	<datestamp>1267182480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It says 4096K, they mean 4096 bytes (4K).  Error is in the original.</p></htmltext>
<tokentext>It says 4096K , they mean 4096 bytes ( 4K ) .
Error is in the original .</tokentext>
<sentencetext>It says 4096K, they mean 4096 bytes (4K).
Error is in the original.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31311738</id>
	<title>disk block losses</title>
	<author>multicsfan</author>
	<datestamp>1267375860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What the original post seems to be ignoring is the amount of 'data' stored with the block of data seen by the customer.  It has been many years since I last looked into this so there may well be changes but:</p><p>A block of data consists of:</p><p>header/leader:  this is alignment, block id, and other control information.  at least 128 bytes</p><p>block of data:  the data actually seen by the user</p><p>trailer:  I don't remember the length, but probably close to the size of the header, but at least 64 bytes, probably 128.  includes ecc and a second block id and other control information.</p><p>so assuming you have a track with a raw capacity of 1MB,  just as an example:</p><p>512 byte blocks:  1,000,000/(128+512+128) = 1,000,000/768 = 1,302 blocks = 666,624 usable bytes out of 1,000,000 or 67%</p><p>4096 byte blocks: 1,000,000/(128+4096+128) = 1,000,000/4352 = 229 blocks = 937,984 usable bytes out of 1,000,000 or 94%</p><p>so the larger blocks make much more efficient usage of the raw space.  Even if the trailer becomes 512 bytes, the new utilization is 84%</p></htmltext>
<tokentext>What the original post seems to be ignoring is the amount of 'data ' stored with the block of data seen by the customer .
It has been many years since I last looked into this so there may well be changes but :
A block of data consists of :
header/leader : this is alignment , block id , and other control information . at least 128 bytes
block of data : the data actually seen by the user
trailer : I do n't remember the length , but probably close to the size of the header , but at least 64 bytes , probably 128 . includes ecc and a second block id and other control information .
so assuming you have a track with a raw capacity of 1MB , just as an example :
512 byte blocks : 1,000,000/ ( 128 + 512 + 128 ) = 1,000,000/768 = 1,302 blocks = 666,624 usable bytes out of 1,000,000 or 67 %
4096 byte blocks : 1,000,000/ ( 128 + 4096 + 128 ) = 1,000,000/4352 = 229 blocks = 937,984 usable bytes out of 1,000,000 or 94 %
so the larger blocks make much more efficient usage of the raw space .
Even if the trailer becomes 512 bytes , the new utilization is 84 %</tokentext>
<sentencetext>What the original post seems to be ignoring is the amount of 'data' stored with the block of data seen by the customer.
It has been many years since I last looked into this so there may well be changes but:
A block of data consists of:
header/leader: this is alignment, block id, and other control information. at least 128 bytes
block of data: the data actually seen by the user
trailer: I don't remember the length, but probably close to the size of the header, but at least 64 bytes, probably 128. includes ecc and a second block id and other control information.
so assuming you have a track with a raw capacity of 1MB, just as an example:
512 byte blocks: 1,000,000/(128+512+128) = 1,000,000/768 = 1,302 blocks = 666,624 usable bytes out of 1,000,000 or 67%
4096 byte blocks: 1,000,000/(128+4096+128) = 1,000,000/4352 = 229 blocks = 937,984 usable bytes out of 1,000,000 or 94%
so the larger blocks make much more efficient usage of the raw space.
Even if the trailer becomes 512 bytes, the new utilization is 84%</sentencetext>
</comment>
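<!-- Checking the per-track arithmetic above in Python (header and trailer sizes are the commenter's assumptions, 128 bytes each; raw track capacity 1 MB):

RAW = 1_000_000

def usable(block, header=128, trailer=128, raw=RAW):
    n = raw // (header + block + trailer)   # whole blocks that fit
    return n, n * block                     # block count, usable bytes

print(usable(512))   # (1302, 666624), about 67% of raw
print(usable(4096))  # (229, 937984), about 94% of raw
-->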
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291482</id>
	<title>Re:1 byte = 10 bits?</title>
	<author>WrongSizeGlass</author>
	<datestamp>1267183440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>The article claims that they use 10 bits per byte on a hard drive. The extra 2 bits are used for the ECC data... they are not available for 'storage'. Of course, they claim a 1,000 GB drive = 1 TB which we all know is marketing, um, <i>speak</i>. A <i>real</i> TB = 1,024 GB (and I mean <i>real</i> GB's, not marketing speak GB's).</htmltext>
<tokentext>The article claims that they use 10 bits per byte on a hard drive .
The extra 2 bits are used for the ECC data ... they are not available for 'storage' .
Of course , they claim a 1,000 GB drive = 1 TB which we all know is marketing , um , speak .
A real TB = 1,024 GB ( and I mean real GB 's , not marketing speak GB 's ) .</tokentext>
<sentencetext>The article claims that they use 10 bits per byte on a hard drive.
The extra 2 bits are used for the ECC data ... they are not available for 'storage'.
Of course, they claim a 1,000 GB drive = 1 TB which we all know is marketing, um, speak.
A real TB = 1,024 GB (and I mean real GB's, not marketing speak GB's).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340</parent>
</comment>
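<!-- The marketing-vs-binary gap above, in numbers (Python): drive makers specify decimal units (1 TB = 10**12 bytes), while many operating systems report binary units (1 GiB = 2**30 bytes), which is where the familiar shortfall comes from.

decimal_tb = 10**12
print(decimal_tb / 2**30)   # about 931.3 binary "GB" for a 1 TB drive
print(decimal_tb / 2**40)   # about 0.909 TiB
-->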
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295620</id>
	<title>Re:Speed is irrelevant</title>
	<author>Dputiger</author>
	<datestamp>1267264980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>UBfusion, as the author of the piece in question, I'd like to note the following: </p><p>#1.  WD does not claim that AF currently offers any reliability, life expectancy, or MTBF benefits. The information you're looking for is extremely granular--drive seek algorithms and the like are considered 'secret sauce' by the various manufacturers. These are not characteristics I'm even sure a reviewer can objectively independently measure. </p><p>#2. WD claims no energy efficiency improvements and the new WD Green drives are listed as drawing exactly the same amount of power as the old ones. If total power consumption has changed at the 1TB level, I'd imagine it would be because the new 1TB drives use two 500GB platters while the original Caviar Green 1TB drives used 3x333GB. Either way, WD has not revised their guidance.</p><p>#3.  No--at least not real ones. As I stated in the article, <b>average</b> read/write speeds have improved across the entire drive because some of the inner drive tracks that were used in the WD10EADS models aren't used in the WD10EARS. In both cases, average read/writes are up about 2%. This is not a real speed increase--it's a mathematical example of what happens when you lop off the lowest sequence of numbers in an average.</p><p>

To answer your overarching point, as you've implied, there's no solution save proper backup strategies.</p></htmltext>
<tokentext>UBfusion , as the author of the piece in question , I 'd like to note the following :
# 1 . WD does not claim that AF currently offers any reliability , life expectancy , or MTBF benefits .
The information you 're looking for is extremely granular--drive seek algorithms and the like are considered 'secret sauce ' by the various manufacturers .
These are not characteristics I 'm even sure a reviewer can objectively independently measure .
# 2. WD claims no energy efficiency improvements and the new WD Green drives are listed as drawing exactly the same amount of power as the old ones .
If total power consumption has changed at the 1TB level , I 'd imagine it would be because the new 1TB drives use two 500GB platters while the original Caviar Green 1TB drives used 3x333GB .
Either way , WD has not revised their guidance .
# 3 . No--at least not real ones .
As I stated in the article , average read/write speeds have improved across the entire drive because some of the inner drive tracks that were used in the WD10EADS models are n't used in the WD10EARS .
In both cases , average read/writes are up about 2 % .
This is not a real speed increase--it 's a mathematical example of what happens when you lop off the lowest sequence of numbers in an average .
To answer your overarching point , as you 've implied , there 's no solution save proper backup strategies .</tokentext>
<sentencetext>UBfusion, as the author of the piece in question, I'd like to note the following:
#1. WD does not claim that AF currently offers any reliability, life expectancy, or MTBF benefits.
The information you're looking for is extremely granular--drive seek algorithms and the like are considered 'secret sauce' by the various manufacturers.
These are not characteristics I'm even sure a reviewer can objectively independently measure.
#2. WD claims no energy efficiency improvements and the new WD Green drives are listed as drawing exactly the same amount of power as the old ones.
If total power consumption has changed at the 1TB level, I'd imagine it would be because the new 1TB drives use two 500GB platters while the original Caviar Green 1TB drives used 3x333GB.
Either way, WD has not revised their guidance.
#3. No--at least not real ones.
As I stated in the article, average read/write speeds have improved across the entire drive because some of the inner drive tracks that were used in the WD10EADS models aren't used in the WD10EARS.
In both cases, average read/writes are up about 2%.
This is not a real speed increase--it's a mathematical example of what happens when you lop off the lowest sequence of numbers in an average.
To answer your overarching point, as you've implied, there's no solution save proper backup strategies.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492</parent>
</comment>
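<!-- Dputiger's averaging point above, illustrated in Python (speeds are made-up numbers): dropping the slowest inner-track figures raises the mean with no physical speed change.

speeds = [110, 104, 98, 92, 86, 80, 74, 68]  # MB/s across the platter, hypothetical
trimmed = speeds[:-1]                        # stop using the slowest inner tracks

def avg(xs):
    return sum(xs) / len(xs)

print(avg(speeds))   # 89.0
print(avg(trimmed))  # 92.0, a "gain" purely from dropping the low values
-->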
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31296354</id>
	<title>Re:The real meaning of this</title>
	<author>FuckingNickName</author>
	<datestamp>1267281600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sir, I had quite some trouble wading through your marketing speak -- "revolutionary [surely not, SSDs don't rotate?] rather than evolutionary", "a big change is coming", "an extinction level event", "the situation got dynamic" -- but I think what you were trying to get out over those seven paragraphs is "in the limit as technology becomes more perfect, stuff is slower to read if you have to physically move to it".</p><p>Well, maybe, but perfection is never attained. And, though Google is hypocritical enough to imply to you that you should trust a single large entity with all your data, it knows perfectly well in-house that the best approach to storage is lots of copies over lots of cheap equipment. As long as throwaway hard drives are fast enough - and even cheap hard drives are going to give you many more writes than SSDs - we will be staying with hard drives, thanks. Like all salesmen since the dawn of time, you can shill the great new thing, but all businesses and consumers need (whatever technocrats tell them) is the cheapest solution that is good enough. Your brushing over the most important point, that SSDs are "still not competitive with consumer magneto-mechanical media", betrays your loyalty.</p></htmltext>
<tokentext>Sir , I had quite some trouble wading through your marketing speak -- " revolutionary [ surely not , SSDs do n't rotate ? ] rather than evolutionary " , " a big change is coming " , " an extinction level event " , " the situation got dynamic " -- but I think what you were trying to get out over those seven paragraphs is " in the limit as technology becomes more perfect , stuff is slower to read if you have to physically move to it " .
Well , maybe , but perfection is never attained .
And , though Google is hypocritical enough to imply to you that you should trust a single large entity with all your data , it knows perfectly well in-house that the best approach to storage is lots of copies over lots of cheap equipment .
As long as throwaway hard drives are fast enough - and even cheap hard drives are going to give you many more writes than SSDs - we will be staying with hard drives , thanks .
Like all salesmen since the dawn of time , you can shill the great new thing , but all businesses and consumers need ( whatever technocrats tell them ) is the cheapest solution that is good enough .
Your brushing over the most important point , that SSDs are " still not competitive with consumer magneto-mechanical media " , betrays your loyalty .</tokentext>
<sentencetext>Sir, I had quite some trouble wading through your marketing speak -- "revolutionary [surely not, SSDs don't rotate?] rather than evolutionary", "a big change is coming", "an extinction level event", "the situation got dynamic" -- but I think what you were trying to get out over those seven paragraphs is "in the limit as technology becomes more perfect, stuff is slower to read if you have to physically move to it".
Well, maybe, but perfection is never attained.
And, though Google is hypocritical enough to imply to you that you should trust a single large entity with all your data, it knows perfectly well in-house that the best approach to storage is lots of copies over lots of cheap equipment.
As long as throwaway hard drives are fast enough - and even cheap hard drives are going to give you many more writes than SSDs - we will be staying with hard drives, thanks.
Like all salesmen since the dawn of time, you can shill the great new thing, but all businesses and consumers need (whatever technocrats tell them) is the cheapest solution that is good enough.
Your brushing over the most important point, that SSDs are "still not competitive with consumer magneto-mechanical media", betrays your loyalty.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291174</id>
	<title>You mean...</title>
	<author>Anonymous</author>
	<datestamp>1267181640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>there is something more advanced than mkfs or format c:\ ?</htmltext>
<tokenext>there is something more advanced than mkfs or format c : \ ?</tokentext>
<sentencetext>there is something more advanced than mkfs or format c:\ ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291554</id>
	<title>Re:Large sector size good?</title>
	<author>jgtg32a</author>
	<datestamp>1267183920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It isn't that great for the OS's partition but it works out great for my Media partition</p></htmltext>
<tokenext>It is n't that great for the OS 's partition but it works out great for my Media partition</tokentext>
<sentencetext>It isn't that great for the OS's partition but it works out great for my Media partition</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291530</id>
	<title>Re:Defrag</title>
	<author>owlstead</author>
	<datestamp>1267183800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes, of course file sizes under 4096 bytes (not 4096K) do still exist. As stated in the article, most FS on hard disks already use 4096 byte block sizes. Thus it won't make much difference for defrag, unless you misalign the data in which case the defrag suddenly could take a *very* long time.</p><p>Not that I care, my system drive is already SSD anyways and I've not filled a drive to the brim for a long time. If I would still download movies they would certainly go to a WD green drive or anything like that without any small file sizes in sight (after doing the PAR2 / unzip etc. on a separate drive or - if big enough - the SSD of course).</p></htmltext>
<tokenext>Yes , of course file sizes under 4096 bytes ( not 4096K ) do still exist .
As stated in the article , most FS on hard disks already use 4096 byte block sizes .
Thus it wo n't make much difference for defrag , unless you misalign the data in which case the defrag suddenly could take a * very * long time . Not that I care , my system drive is already SSD anyways and I 've not filled a drive to the brim for a long time .
If I would still download movies they would certainly go to a WD green drive or anything like that without any small file sizes in sight ( after doing the PAR2 / unzip etc .
on a separate drive or - if big enough - the SSD of course ) .</tokentext>
<sentencetext>Yes, of course file sizes under 4096 bytes (not 4096K) do still exist.
As stated in the article, most FS on hard disks already use 4096 byte block sizes.
Thus it won't make much difference for defrag, unless you misalign the data in which case the defrag suddenly could take a *very* long time. Not that I care, my system drive is already SSD anyways and I've not filled a drive to the brim for a long time.
If I would still download movies they would certainly go to a WD green drive or anything like that without any small file sizes in sight (after doing the PAR2 / unzip etc.
on a separate drive or - if big enough - the SSD of course).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291256</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291666</id>
	<title>And for an overview that knows how to do math...</title>
	<author>JorDan Clock</author>
	<datestamp>1267184640000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><a href="http://www.anandtech.com/storage/showdoc.aspx?i=3691" title="anandtech.com">Anandtech</a> [anandtech.com] has a much better write up on this technology, complete with correct conversions from bits to bytes, knowledge of the difference between 4096 bytes and 4096 kilobytes, and no in-text ads.</htmltext>
<tokenext>Anandtech [ anandtech.com ] has a much better write up on this technology , complete with correct conversions from bits to bytes , knowledge of the difference between 4096 bytes and 4096 kilobytes , and no in-text ads .</tokentext>
<sentencetext>Anandtech [anandtech.com] has a much better write up on this technology, complete with correct conversions from bits to bytes, knowledge of the difference between 4096 bytes and 4096 kilobytes, and no in-text ads.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291286</id>
	<title>Re:Large sector size good?</title>
	<author>Anonymous</author>
	<datestamp>1267182240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you read the article carefully, the new size is only 4K, not 4096K. The 4K size actually matches very well with most common file systems. The 4096K is an error in the article.</p></htmltext>
<tokenext>If you read the article carefully , the new size is only 4K , not 4096K .
The 4K size actually matches very well with most common file systems .
The 4096K is an error in the article .</tokentext>
<sentencetext>If you read the article carefully, the new size is only 4K, not 4096K.
The 4K size actually matches very well with most common file systems.
The 4096K is an error in the article.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294110</id>
	<title>so...</title>
	<author>Theodore</author>
	<datestamp>1267200060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The WD green AF acts like A when not aligned, and B when aligned...<br>and the WD black is better than both for just a little more.</p><p>Probably better to skip Subway for lunch, make your own sandwiches for a week and get the WD black (or go for 2 weeks and get the RE3).</p></htmltext>
<tokenext>The WD green AF acts like A when not aligned , and B when aligned ... and the WD black is better than both for just a little more . Probably better to skip Subway for lunch , make your own sandwiches for a week and get the WD black ( or go for 2 weeks and get the RE3 ) .</tokentext>
<sentencetext>The WD green AF acts like A when not aligned, and B when aligned... and the WD black is better than both for just a little more. Probably better to skip Subway for lunch, make your own sandwiches for a week and get the WD black (or go for 2 weeks and get the RE3).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295738</id>
	<title>Cluster Size</title>
	<author>krischik</author>
	<datestamp>1267267620000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>I thought the point was to have a small sector size.  With large sectors, say 4096K, a 1K file will actually take up the full 4096K.</p>  </div><p>Most file system already use a cluster size of 4096 (clustering 8 sectors). The only file system I know of which used sector=cluster size was IBM's HPFS.</p><p>So NO, we don't use size. Still I am wary of this emulation stuff. First the 4096 byte sector is broken down to 8 512 byte "virtual" sectors and then those 8 virtual are clustered to one cluster. Would it not be better to use an intelligent file system which can handle 4096 bytes sectors natively? Any file system which can be formatted onto a DVD-RAM should do.</p></div>
	</htmltext>
<tokenext>I thought the point was to have a small sector size .
With large sectors , say 4096K , a 1K file will actually take up the full 4096K .
Most file systems already use a cluster size of 4096 ( clustering 8 sectors ) .
The only file system I know of which used sector = cluster size was IBM 's HPFS . So no , we do n't allocate by sector size .
Still I am wary of this emulation stuff .
First the 4096-byte sector is broken down into eight 512-byte " virtual " sectors , and then those eight virtual sectors are clustered back into one cluster .
Would it not be better to use an intelligent file system which can handle 4096-byte sectors natively ?
Any file system which can be formatted onto a DVD-RAM should do .</tokentext>
<sentencetext>I thought the point was to have a small sector size.
With large sectors, say 4096K, a 1K file will actually take up the full 4096K.
Most file systems already use a cluster size of 4096 (clustering 8 sectors).
The only file system I know of which used sector=cluster size was IBM's HPFS. So no, we don't allocate by sector size.
Still I am wary of this emulation stuff.
First the 4096-byte sector is broken down into eight 512-byte "virtual" sectors, and then those eight virtual sectors are clustered back into one cluster.
Would it not be better to use an intelligent file system which can handle 4096-byte sectors natively?
Any file system which can be formatted onto a DVD-RAM should do.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
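For readers who want the emulation arithmetic above spelled out, here is a minimal Python sketch of the 512-byte emulation scheme the comment describes (the function names are illustrative, not anything from WD's firmware): eight logical 512-byte sectors sit on one physical 4096-byte sector, and a logical write that does not cover whole physical sectors forces a read-modify-write.

LOGICAL = 512
PHYSICAL = 4096
RATIO = PHYSICAL // LOGICAL  # 8 logical sectors per physical sector

def logical_to_physical(lba):
    """Map a logical 512-byte LBA to (physical sector, byte offset within it)."""
    return lba // RATIO, (lba % RATIO) * LOGICAL

def needs_rmw(first_lba, count):
    """True if a write of `count` logical sectors starting at `first_lba`
    does not cover whole physical sectors, forcing a read-modify-write."""
    return first_lba % RATIO != 0 or count % RATIO != 0

print(logical_to_physical(63))  # (7, 3584): the classic XP partition start
print(needs_rmw(63, 8))         # True: an 8-sector cluster at LBA 63 straddles two physical sectors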
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291446</id>
	<title>Dear Slashdot Sales Department</title>
	<author>Anonymous</author>
	<datestamp>1267183260000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>1. No one except LOSERS uses Windows XP.</p><p>2. What is Slashdot's commission on these shameful book plugs?</p><p>Have a weekend, loozars.</p><p>Yours In Tashkent,<br>K. Trout</p></htmltext>
<tokenext>1 . No one except LOSERS uses Windows XP .
2 . What is Slashdot 's commission on these shameful book plugs ?
Have a weekend , loozars .
Yours In Tashkent , K . Trout</tokentext>
<sentencetext>1. No one except LOSERS uses Windows XP.
2. What is Slashdot's commission on these shameful book plugs?
Have a weekend, loozars.
Yours In Tashkent, K. Trout</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291392</id>
	<title>Serious typos all over the place</title>
	<author>Anonymous</author>
	<datestamp>1267182900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"where 1 byte = 10 bits."</p><p>as well as multiple prominent instances of "4096K"</p><p>Being off by 3 orders of magnitude is pretty hugely wrong, as is something as basic as how many bits are in a byte.</p></htmltext>
<tokenext>" where 1 byte = 10 bits .
" as well as multiple prominent instances of " 4096K " Being off by 3 orders of magnitude is pretty hugely wrong , as is something as basic as how many bits are in a byte .</tokentext>
<sentencetext>"where 1 byte = 10 bits.
"as well as multiple prominent instances of "4096K"Being off by 3 orders of magnitude is pretty hugely wrong, as is something as basic as how many bits are in a byte.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31297264</id>
	<title>Another one from Tom's</title>
	<author>barryp</author>
	<datestamp>1267292460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Tom's Hardware has <a href="http://www.tomshardware.com/reviews/wd-4k-sector,2554.html#xtor=RSS-1825" title="tomshardware.com" rel="nofollow">tackled this too</a> [tomshardware.com].  Just mentioning it for the sake of completeness.</htmltext>
<tokenext>Tom 's Hardware has tackled this too [ tomshardware.com ] .
Just mentioning it for the sake of completeness .</tokentext>
<sentencetext>Tom's Hardware has tackled this too [tomshardware.com].
Just mentioning it for the sake of completeness.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291666</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291322</id>
	<title>Re:Large sector size good?</title>
	<author>Anonymous</author>
	<datestamp>1267182480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>You want the sector size to be smaller than the average file size or you're going to waste a lot of space.  If your average file size is large, and writes are sequential, you want the largest possible sector sizes.</p></htmltext>
<tokenext>You want the sector size to be smaller than the average file size or you 're going to waste a lot of space .
If your average file size is large , and writes are sequential , you want the largest possible sector sizes .</tokentext>
<sentencetext>You want the sector size to be smaller than the average file size or you're going to waste a lot of space.
If your average file size is large, and writes are sequential, you want the largest possible sector sizes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294890</id>
	<title>The real meaning of this</title>
	<author>symbolset</author>
	<datestamp>1267209120000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>What this really means is that magnetomechanical media is dead.
</p><p>When you're doing tricks like this to get a few extra bytes per block it means you have run out of physical media density technologies.  It's kind of like when they moved the Earth, Moon and stars to get dial-up modems from 48.8Kbps to 56Kbps - redefining bps along the way.  It's the End.  It's an admission that we're out of magnetic media density improvements.  There might be one more but after this but it's over and even now the density isn't even the important thing any more.
</p><p>I warned about this here several years ago: the consolidation of server workloads leads to an I/O choke point.  Next month AMD releases their 12-core Magny-Cours processor and Intel replies with a new processor technology - both of them increasing the memory channels and the amount of RAM that can be configured on a system to over a terabyte.  It's on like Donkey Kong in terms of processing and RAM, but all of this tech will suffocate for lack of I/O.
</p><p>The good news is that solid state technologies are here with sufficient capacity and doubling all of streaming bandwidth, IOPs and storage density at more than an acceptable rate.  That they're greener is just bonus.  And then there's the fact that the price per gigabyte - while still not competitive with consumer magneto-mechanical media - is coming down at an even better rate and already bests enterprise media (SAS and FC).  There will be an accommodation period much like there was when we moved from analog modems to DSL and beyond - and this is a ripe field for the snakeoil salesmen.  There will be wrenching pain as we realize that 8Gbps FC SAN doesn't even effectively serve a 5-pack of properly constructed third generation SSD-format drives, let alone an entire rack of them.  The world will spin about us as multiplexed 4x SAS V2 (24Gbps) connections become the order of the day briefly, unless Intel makes a coup and figures a way to apply a hierarchical routing structure to LightPeak, which isn't even released yet and even so is obsolete.  For sure electrical interconnects are right out - they don't have the bandwidth.  We're going optical and I mean right <b>now</b>.  3.5" SAS drives will become the new tape.  Tape has already been the new punchcard storage method for several years.
</p><p>My guess: we'll find a new brand for "Enterprise storage" that uses RAID technologies to aggregate the bandwidth and improve the reliability of flash technologies in a way that doesn't rate-limit IOPs and in a way that provides reliable end-to-end performance and scales to terabits per second, until it becomes a static storage medium that actually reaches the performance of RAM.  An interim solution may include huge RAM cache on SAS attached Flash drives backed by supercapacitors for guaranteed committed writes even if the power fails to preserve data integrity at the storage unit level.  FC isn't the interconnect solution and SAS isn't it either - it'll likely be derived from external PCIe but be over optical media and probably multiple strands of it.
</p><p>This is a big change - a revolutionary rather than an evolutionary change.  A bigger change is coming. An extinction level event.  When we've mastered the IOPs and the storage capacity of everything that everybody wants to store, then what?  When every enterprise has consolidated their workloads down to three servers geographically separated for HA and DR, then what?  What do we sell them then?
</p><p>Friends the situation got dynamic.  Good luck to you all.</p></htmltext>
<tokenext>What this really means is that magnetomechanical media is dead .
When you 're doing tricks like this to get a few extra bytes per block it means you have run out of physical media density technologies .
It 's kind of like when they moved the Earth , Moon and stars to get dial-up modems from 48.8Kbps to 56Kbps - redefining bps along the way .
It 's the End .
It 's an admission that we 're out of magnetic media density improvements .
There might be one more but after this but it 's over and even now the density is n't even the important thing any more .
I warned about this here several years ago : the consolidation of server workloads leads to an I/O choke point .
Next month AMD releases their 12-core Magny-Cours processor and Intel replies with a new processor technology - both of them increasing the memory channels and the amount of RAM that can be configured on a system to over a terabyte .
It 's on like Donkey Kong in terms of processing and RAM , but all of this tech will suffocate for lack of I/O .
The good news is that solid state technologies are here with sufficient capacity and doubling all of streaming bandwidth , IOPs and storage density at more than an acceptable rate .
That they 're greener is just bonus .
And then there 's the fact that the price per gigabyte - while still not competitive with consumer magneto-mechanical media - is coming down at an even better rate and already bests enterprise media ( SAS and FC ) .
There will be an accommodation period much like there was when we moved from analog modems to DSL and beyond - and this is a ripe field for the snakeoil salesmen .
There will be wrenching pain as we realize that 8Gbps FC SAN does n't even effectively serve a 5-pack of properly constructed third generation SSD-format drives , let alone an entire rack of them .
The world will spin about us as multiplexed 4x SAS V2 ( 24Gbps ) connections become the order of the day briefly , unless Intel makes a coup and figures a way to apply a hierarchical routing structure to LightPeak , which is n't even released yet and even so is obsolete .
For sure electrical interconnects are right out - they do n't have the bandwidth .
We 're going optical and I mean right now .
3.5 " SAS drives will become the new tape .
Tape has already been the new punchcard storage method for several years .
My guess : we 'll find a new brand for " Enterprise storage " that uses RAID technologies to aggregate the bandwidth and improve the reliability of flash technologies in a way that does n't rate-limit IOPs and in a way that provides reliable end-to-end performance and scales to terabits per second , until it becomes a static storage medium that actually reaches the performance of RAM .
An interim solution may include huge RAM cache on SAS attached Flash drives backed by supercapacitors for guaranteed committed writes even if the power fails to preserve data integrity at the storage unit level .
FC is n't the interconnect solution and SAS is n't it either - it 'll likely be derived from external PCIe but be over optical media and probably multiple strands of it .
This is a big change - a revolutionary rather than an evolutionary change .
A bigger change is coming .
An extinction level event .
When we 've mastered the IOPs and the storage capacity of everything that everybody wants to store , then what ?
When every enterprise has consolidated their workloads down to three servers geographically separated for HA and DR , then what ?
What do we sell them then ?
Friends the situation got dynamic .
Good luck to you all .</tokentext>
<sentencetext>What this really means is that magnetomechanical media is dead.
When you're doing tricks like this to get a few extra bytes per block it means you have run out of physical media density technologies.
It's kind of like when they moved the Earth, Moon and stars to get dial-up modems from 48.8Kbps to 56Kbps - redefining bps along the way.
It's the End.
It's an admission that we're out of magnetic media density improvements.
There might be one more but after this but it's over and even now the density isn't even the important thing any more.
I warned about this here several years ago: the consolidation of server workloads leads to an I/O choke point.
Next month AMD releases their 12-core Magny-Cours processor and Intel replies with a new processor technology - both of them increasing the memory channels and the amount of RAM that can be configured on a system to over a terabyte.
It's on like Donkey Kong in terms of processing and RAM, but all of this tech will suffocate for lack of I/O.
The good news is that solid state technologies are here with sufficient capacity and doubling all of streaming bandwidth, IOPs and storage density at more than an acceptable rate.
That they're greener is just bonus.
And then there's the fact that the price per gigabyte - while still not competitive with consumer magneto-mechanical media - is coming down at an even better rate and already bests enterprise media (SAS and FC).
There will be an accommodation period much like there was when we moved from analog modems to DSL and beyond - and this is a ripe field for the snakeoil salesmen.
There will be wrenching pain as we realize that 8Gbps FC SAN doesn't even effectively serve a 5-pack of properly constructed third generation SSD-format drives, let alone an entire rack of them.
The world will spin about us as multiplexed 4x SAS V2 (24Gbps) connections become the order of the day briefly, unless Intel makes a coup and figures a way to apply a hierarchical routing structure to LightPeak, which isn't even released yet and even so is obsolete.
For sure electrical interconnects are right out - they don't have the bandwidth.
We're going optical and I mean right now.
3.5" SAS drives will become the new tape.
Tape has already been the new punchcard storage method for several years.
My guess: we'll find a new brand for "Enterprise storage" that uses RAID technologies to aggregate the bandwidth and improve the reliability of flash technologies in a way that doesn't rate-limit IOPs and in a way that provides reliable end-to-end performance and scales to terabits per second, until it becomes a static storage medium that actually reaches the performance of RAM.
An interim solution may include huge RAM cache on SAS attached Flash drives backed by supercapacitors for guaranteed committed writes even if the power fails to preserve data integrity at the storage unit level.
FC isn't the interconnect solution and SAS isn't it either - it'll likely be derived from external PCIe but be over optical media and probably multiple strands of it.
This is a big change - a revolutionary rather than an evolutionary change.
A bigger change is coming.
An extinction level event.
When we've mastered the IOPs and the storage capacity of everything that everybody wants to store, then what?
When every enterprise has consolidated their workloads down to three servers geographically separated for HA and DR, then what?
What do we sell them then?
Friends the situation got dynamic.
Good luck to you all.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291174</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291812</id>
	<title>Re:Large sector size good?</title>
	<author>kramulous</author>
	<datestamp>1267185360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I see what you mean but will it be like other parts of the computer?  I do computation on CPUs, GPUs or FPGAs depending on what hardware is appropriate for the work that needs to be done.  Is this similar?</p><p>You have data with certain attributes and store it appropriately.</p></htmltext>
<tokenext>I see what you mean but will it be like other parts of the computer ?
I do computation on CPUs , GPUs or FPGAs depending on what hardware is appropriate for the work that needs to be done .
Is this similar ? You have data with certain attributes and store it appropriately .</tokentext>
<sentencetext>I see what you mean but will it be like other parts of the computer?
I do computation on CPUs, GPUs or FPGAs depending on what hardware is appropriate for the work that needs to be done.
Is this similar?You have data with certain attributes and store it appropriately.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291354</id>
	<title>Re:Large sector size good?</title>
	<author>Anonymous</author>
	<datestamp>1267182720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>We approve of your subject line's redundant repetition.

</p><p>- Dept. of Redundancy Dept.</p></htmltext>
<tokenext>We approve of your subject line 's redundant repetition .
- Dept .
of Redundancy Dept .</tokentext>
<sentencetext>We approve of your subject line's redundant repetition.
- Dept.
of Redundancy Dept.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291236</id>
	<title>That's a little big...</title>
	<author>Anonymous</author>
	<datestamp>1267182000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>&gt; The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096K.</p><p>I think not... try 4096 bytes.</p></htmltext>
<tokenext>&gt; The latest Advanced Format hard drive technology changes a hard drive 's sector size from 512 bytes to 4096K . I think not ... try 4096 bytes .</tokentext>
<sentencetext>&gt; The latest Advanced Format hard drive technology changes a hard drive's sector size from 512 bytes to 4096K. I think not... try 4096 bytes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291344</id>
	<title>Your hard drive doesn't know about your FS</title>
	<author>Anonymous</author>
	<datestamp>1267182660000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>Guys, filesystem block/cluster size is not the same as hard drive sector size.<br>Jesus.</p></htmltext>
<tokenext>Guys , filesystem block/cluster size is not the same as hard drive sector size . Jesus .</tokentext>
<sentencetext>Guys, filesystem block/cluster size is not the same as hard drive sector size. Jesus.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294778</id>
	<title>Re:Defrag</title>
	<author>c6gunner</author>
	<datestamp>1267207620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Do file sizes under 4096K still exist?</p></div><p>win+r<br>cmd<br>echo Yes &gt; file\_size\_under\_4096k.txt</p><p>Tada!</p></div>
	</htmltext>
<tokenext>Do file sizes under 4096K still exist ? win + r cmd echo Yes &gt; file_size_under_4096k.txt Tada !</tokentext>
<sentencetext>Do file sizes under 4096K still exist? win+r cmd echo Yes &gt; file_size_under_4096k.txt Tada!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291256</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291380</id>
	<title>55GB Savings!!!!!!!</title>
	<author>Anonymous</author>
	<datestamp>1267182780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>FTA:<br>"Each one of those ECC blocks is 40 bits wide; a 4096K block of data contains 320 bytes of ECC. Using Advanced Format's new 4096 sector size cuts the amount of ECC and Sync/DAM space significantly. According to WD, it needs just 100 bytes of ECC data per 4096K sector under the new scheme, a savings of 220 bytes."</p><p>For those not wanting to do the math...<br>220 extra bytes per 4096 bytes, in a 1 terabyte drive nets us 55GB more space.</p><p>From Google:<br>(220 / 4096) * 1 terabyte = 55 gigabytes</p></htmltext>
<tokenext>FTA : " Each one of those ECC blocks is 40 bits wide ; a 4096K block of data contains 320 bytes of ECC .
Using Advanced Format 's new 4096 sector size cuts the amount of ECC and Sync/DAM space significantly .
According to WD , it needs just 100 bytes of ECC data per 4096K sector under the new scheme , a savings of 220 bytes .
" For those not wanting to do the math...220 extra bytes per 4096 bytes , in a 1 terabyte drive nets us 55GB more space.From Google : ( 220 / 4096 ) * 1 terabyte = 55 gigabytes</tokentext>
<sentencetext>FTA:"Each one of those ECC blocks is 40 bits wide; a 4096K block of data contains 320 bytes of ECC.
Using Advanced Format's new 4096 sector size cuts the amount of ECC and Sync/DAM space significantly.
According to WD, it needs just 100 bytes of ECC data per 4096K sector under the new scheme, a savings of 220 bytes.
"For those not wanting to do the math...220 extra bytes per 4096 bytes, in a 1 terabyte drive nets us 55GB more space.From Google:(220 / 4096) * 1 terabyte = 55 gigabytes</sentencetext>
</comment>
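A quick sanity check of that arithmetic in Python, taking the quoted figures at face value (320 bytes of ECC per 4 KB of data under the old scheme, 100 bytes under the new one): the saving comes out closer to 54GB on a decimal terabyte, in the same ballpark as the 55GB quoted.

OLD_ECC = 320   # bytes of ECC per eight 512-byte sectors (per the quote)
NEW_ECC = 100   # bytes of ECC per 4096-byte sector (per the quote)
DATA = 4096     # user data per physical sector
DRIVE = 10**12  # a decimal "1 terabyte" drive

savings = DRIVE * (OLD_ECC - NEW_ECC) / DATA
print(f"{savings / 10**9:.1f} GB reclaimed")  # ~53.7 GB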
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294282</id>
	<title>Partitioning the right way fixes it</title>
	<author>Skapare</author>
	<datestamp>1267201980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Partitioning the right way deals with it.  You can use fdisk in Linux to do the partitioning for both Linux and Windows.</p><p>First, find out exactly how large the drive is in units of 512 byte sectors.  Divide that number by 8192 and round any fractions up.  Remember that as the number of cylinders.  In fdisk, use the "x" command to enter expert commands.  Do "s" to enter the number of sectors per track as 32.  Do "h" to enter the number of heads (tracks) per cylinder as 256 (not 255).  Do "c" to enter the number of cylinders previously calculated.  Do "r" to return from expert mode.  Do "u" to change units to exact sectors.</p><p>Now allocate space on boundaries of a multiple of 8 sectors (with the "last" sector for any partition being one less than such a multiple).  To squeeze a tiny bit more performance out of I/O scheduling and caching, allocate space on larger boundaries.  I now allocate on 2048 sector boundaries, which is exactly 1048576 bytes, with the first partition beginning at  sector 2048.  That leaves plenty of room for big boot loaders.</p><p>I have not tested starting at sector 2048 with Windows, but I did test starting at sector 64 with Windows several years ago and that worked fine.  I can't see why it would not work at 2048 if it worked at 64.  But I do recommend letting Windows do the filesystem formatting for any Windows partitions.</p></htmltext>
<tokenext>Partitioning the right way deals with it .
You can use fdisk in Linux to do the partitioning for both Linux and Windows . First , find out exactly how large the drive is in units of 512 byte sectors .
Divide that number by 8192 and round any fractions up .
Remember that as the number of cylinders .
In fdisk , use the " x " command to enter expert commands .
Do " s " to enter the number of sectors per track as 32 .
Do " h " to enter the number of heads ( tracks ) per cylinder as 256 ( not 255 ) .
Do " c " to enter the number of cylinders previously calculated .
Do " r " to return from expert mode .
Do " u " to change units to exact sectors.Now allocate space on boundaries of a multiple of 8 sectors ( with the " last " sector for any partition being one less than such a multiple ) .
To squeeze a tiny bit more performance out of I/O scheduling and caching , allocate space on larger boundaries .
I now allocate on 2048 sector boundaries , which is exactly 1048576 bytes , with the first partition beginning at sector 2048 .
That leaves plenty of room for big boot loaders . I have not tested starting at sector 2048 with Windows , but I did test starting at sector 64 with Windows several years ago and that worked fine .
I ca n't see why it would not work at 2048 if it worked at 64 .
But I do recommend letting Windows do the filesystem formatting for any Windows partitions .</tokentext>
<sentencetext>Partitioning the right way deals with it.
You can use fdisk in Linux to do the partitioning for both Linux and Windows. First, find out exactly how large the drive is in units of 512 byte sectors.
Divide that number by 8192 and round any fractions up.
Remember that as the number of cylinders.
In fdisk, use the "x" command to enter expert commands.
Do "s" to enter the number of sectors per track as 32.
Do "h" to enter the number of heads (tracks) per cylinder as 256 (not 255).
Do "c" to enter the number of cylinders previously calculated.
Do "r" to return from expert mode.
Do "u" to change units to exact sectors.Now allocate space on boundaries of a multiple of 8 sectors (with the "last" sector for any partition being one less than such a multiple).
To squeeze a tiny bit more performance out of I/O scheduling and caching, allocate space on larger boundaries.
I now allocate on 2048 sector boundaries, which is exactly 1048576 bytes, with the first partition beginning at  sector 2048.
That leaves plenty of room for big boot loaders. I have not tested starting at sector 2048 with Windows, but I did test starting at sector 64 with Windows several years ago and that worked fine.
I can't see why it would not work at 2048 if it worked at 64.
But I do recommend letting Windows do the filesystem formatting for any Windows partitions.</sentencetext>
</comment>
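The arithmetic behind that recipe, as a short Python sketch (the geometry values are the ones the comment prescribes; the drive size is a made-up example, and the fdisk expert commands themselves remain interactive): with 32 sectors per track and 256 heads, one "cylinder" is 8192 sectors, a multiple of 8, so cylinder boundaries always land on 4 KB physical sectors.

import math

SECTORS_PER_TRACK = 32
HEADS = 256
SECTORS_PER_CYLINDER = SECTORS_PER_TRACK * HEADS  # 8192, a multiple of 8

def fdisk_cylinders(total_512b_sectors):
    """The cylinder count to enter with fdisk's expert 'c' command."""
    return math.ceil(total_512b_sectors / SECTORS_PER_CYLINDER)

def aligned_start(sector, boundary=2048):
    """Round a proposed start sector up to the next boundary (2048 sectors = 1 MiB)."""
    return math.ceil(sector / boundary) * boundary

print(fdisk_cylinders(1953525168))  # hypothetical 1TB drive -> 238468 cylinders
print(aligned_start(63))            # 2048: the comment's preferred first-partition start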
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295462</id>
	<title>Can someone clarify something?</title>
	<author>caywen</author>
	<datestamp>1267261500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From TFA: "Western Digital believes the technology will prove useful in the future and it's true that after thirty years, the 512 byte sector standard was creaking with age."</p><p>What does "creaking with age" really mean? I mean, the current format performs the same. The basic design is still the same, just with different magic numbers. I usually read "creaking with age" to mean that there's some kind of capacity or speed limit that we hit, but that's not the case. Is this more of a case of "why not" change it instead of "why"?</p></htmltext>
<tokenext>From TFA : " Western Digital believes the technology will prove useful in the future and it 's true that after thirty years , the 512 byte sector standard was creaking with age .
" What does " creaking with age " really mean ?
I mean , the current format performs the same .
The basic design is still the same , just with different magic numbers .
I usually read " creaking with age " to mean that there 's some kind of capacity or speed limit that we hit , but that 's not the case .
Is this more of a case of " why not " change it instead of " why " ?</tokentext>
<sentencetext>From TFA: "Western Digital believes the technology will prove useful in the future and it's true that after thirty years, the 512 byte sector standard was creaking with age.
"What does "creaking with age" really mean?
I mean, the current format performs the same.
The basic design is still the same, just with different magic numbers.
I usually read "creaking with age" to mean that there's some kind of capacity or speed limit that we hit, but that's not the case.
Is this more of a case of "why not" change it instead of "why"?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492</id>
	<title>Speed is irrelevant</title>
	<author>Anonymous</author>
	<datestamp>1267183560000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed. Regarding HDs, I'd like to see results relevant to:</p><p>1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?</p><p>2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?</p><p>3. Are there differences in sustained read/write performance? E.g. is the new format more suitable for video editing than the old one?</p><p>For me, the first issue is the most important of all, given that owning huge 2T disks is in fact like playing Russian roulette: without proper backup strategies, you risk all your data at once.</p></htmltext>
<tokenext>I ca n't grasp why all ( these specific and most ) benchmarks are so much obsessed with speed .
Regarding HDs , I 'd like to see results relevant to :
1 . Number of Read/Write operations per task : Does the new format result in fewer head movements , therefore less wear on the hardware , thus increasing HD 's life expectancy and MTBF ?
2 . Energy efficiency : Does the new format have lower power consumption , leading to lower operating temperature and better laptop/netbook battery autonomy ?
3 . Are there differences in sustained read/write performance ?
E.g. is the new format more suitable for video editing than the old one ? For me , the first issue is the most important of all , given that owning huge 2T disks is in fact like playing Russian roulette : without proper backup strategies , you risk all your data at once .</tokentext>
<sentencetext>I can't grasp why all (these specific and most) benchmarks are so much obsessed with speed.
Regarding HDs, I'd like to see results relevant to:
1. Number of Read/Write operations per task: Does the new format result in fewer head movements, therefore less wear on the hardware, thus increasing HD's life expectancy and MTBF?
2. Energy efficiency: Does the new format have lower power consumption, leading to lower operating temperature and better laptop/netbook battery autonomy?
3. Are there differences in sustained read/write performance?
E.g. is the new format more suitable for video editing than the old one? For me, the first issue is the most important of all, given that owning huge 2T disks is in fact like playing Russian roulette: without proper backup strategies, you risk all your data at once.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291438</id>
	<title>Re:What About Linux Systems?</title>
	<author>marcansoft</author>
	<datestamp>1267183200000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>If Advanced Format drives were true 4k drives (i.e. they didn't lie to the OS and claim they were 512 byte drives), they'd work great on Linux (and not at all on XP). Since they lie, Linux tools will have to be updated to assume the drive lies and default to 4k alignment. Anyway, you can already use manual/advanced settings in most Linux partitioning tools to manually work around the issue.</p></htmltext>
<tokenext>If Advanced Format drives were true 4k drives ( i.e .
they did n't lie to the OS and claim they were 512 byte drives ) , they 'd work great on Linux ( and not at all on XP ) .
Since they lie , Linux tools will have to be updated to assume the drive lies and default to 4k alignment .
Anyway , you can already use manual/advanced settings in most Linux partitioning tools to manually work around the issue .</tokentext>
<sentencetext>If Advanced Format drives were true 4k drives (i.e.
they didn't lie to the OS and claim they were 512 byte drives), they'd work great on Linux (and not at all on XP).
Since they lie, Linux tools will have to be updated to assume the drive lies and default to 4k alignment.
Anyway, you can already use manual/advanced settings in most Linux partitioning tools to manually work around the issue.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304</parent>
</comment>
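Put concretely, a minimal sketch of the check such tools need to make (assuming the drive reports 512-byte logical sectors, as these emulation drives do):

def is_4k_aligned(start_lba):
    """A partition on a 512-emulation drive is 4 KB aligned iff its
    starting logical sector is a multiple of 8."""
    return start_lba % 8 == 0

print(is_4k_aligned(63))    # False: the old CHS-era default that hurts XP
print(is_4k_aligned(2048))  # True: the 1 MiB default of newer partitioners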
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291750</id>
	<title>Re:1 byte = 10 bits?</title>
	<author>KPexEA</author>
	<datestamp>1267185060000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Group code recording?

<a href="http://en.wikipedia.org/wiki/Group\_code\_recording" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Group\_code\_recording</a> [wikipedia.org]

Back on my old Commodore PET drive this was how they encoded data since too many zeros caused the head to lose its place.</htmltext>
<tokenext>Group code recording ?
http : //en.wikipedia.org/wiki/Group_code_recording [ wikipedia.org ] Back on my old Commodore PET drive this was how they encoded data since too many zeros caused the head to lose its place .</tokentext>
<sentencetext>Group code recording?
http://en.wikipedia.org/wiki/Group_code_recording [wikipedia.org]

Back on my old Commodore PET drive this was how they encoded data since too many zeros caused the head to lose its place.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291744</id>
	<title>Re:Large sector size good?</title>
	<author>owlstead</author>
	<datestamp>1267185000000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>You didn't dodge any bullet. Any file that has a size slightly over each 4096 border will take more space. For large amounts of larger files (such as an MP3 collection), you will, on average, have 2048 bytes of empty space per file in your drive's sectors. Let's say you have an archive which also uses some small files (e.g. playlists, small pictures); say the overhead is about 3 KB per file, and the average file size is about 3MB. Since 3000 / 3000000 is about 1/1000 you could have a whopping 1 per mille loss. That's for MP3s; for movies the percentage will be much lower still. Of course, if your FS uses a block size of 4096 already then you are already paying this 1 per mille of overhead.</p><p>Personally I would not try and sue MS or WD over this issue...</p></htmltext>
<tokenext>You did n't dodge any bullet .
Any file that has a size slightly over each 4096 border will take more space .
For large amounts of larger files ( such as an MP3 collection ) , you will , on average , have 2048 bytes of empty space per file in your drive 's sectors .
Let 's say you have an archive which also uses some small files ( e.g. playlists , small pictures ) ; say the overhead is about 3 KB per file , and the average file size is about 3MB .
Since 3000 / 3000000 is about 1/1000 you could have a whopping 1 per mille loss .
That 's for MP3s ; for movies the percentage will be much lower still .
Of course , if your FS uses a block size of 4096 already then you are already paying this 1 per mille of overhead . Personally I would not try and sue MS or WD over this issue ...</tokentext>
<sentencetext>You didn't dodge any bullet.
Any file that has a size slightly over each 4096 border will take more space.
For large amounts of larger files (such as an MP3 collection), you will, on average, have 2048 bytes of empty space per file in your drive's sectors.
Let's say you have an archive which also uses some small files (e.g. playlists, small pictures); say the overhead is about 3 KB per file, and the average file size is about 3MB.
Since 3000 / 3000000 is about 1/1000 you could have a whopping 1 per mille loss.
That's for MP3s; for movies the percentage will be much lower still.
Of course, if your FS uses a block size of 4096 already then you are already paying this 1 per mille of overhead. Personally I would not try and sue MS or WD over this issue...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291372</parent>
</comment>
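Redoing that estimate explicitly in Python (the file sizes are the comment's illustrative numbers; the half-sector average slack is the usual expected-value assumption for randomly sized files):

SECTOR = 4096
avg_slack = SECTOR / 2  # expected wasted tail space per file, in bytes

for name, size in [("MP3", 3_000_000), ("movie", 700_000_000)]:
    print(f"{name}: {avg_slack / size * 1000:.2f} per mille wasted")
# MP3: 0.68 per mille; movie: ~0.003 per mille. The comment's "about 1 per
# mille" also budgets a few kilobytes of small sidecar files per entry.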
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295336</id>
	<title>FC SAN, Tape guys</title>
	<author>symbolset</author>
	<datestamp>1267302540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Look, I know the parent post is going to garner a boatload of hate from the FC SAN people who will protest for various reasons that their unicorns and rainbows magnify the effectiveness of the underlying storage until it's cheap and performant.  I'm sorry, but you're all full of it (to be kind).  You need to find a new job.
</p><p>When you figure the cost of FC storage, the network, the backup, the service contracts and whatnot, it's $30K-120K/TB.  You guys got some cool stuff - I'll give you that.  But it ain't worth 300x-1200x the price of consumer tech, especially when it lacks the density required for modern apps, and is limited in bandwidth to legacy tech, and can't serve the IOPs that VMHosts need.  As it is your validation teams are slacking in approving modern density drives.  For what you're asking we could do RAMDisk.  Notwithstanding, 8Gbps is barely sufficient to feed one VMHost, let alone a blade chassis full of them.
</p><p>And tape guys: seriously, it's time to give it up.  Get a new job - please.</p></htmltext>
<tokenext>Look , I know the parent post is going to garner a boatload of hate from the FC SAN people who will protest for various reasons that their unicorns and rainbows magnify the effectiveness of the underlying storage until it 's cheap and performant .
I 'm sorry , but you 're all full of it ( to be kind ) .
You need to find a new job .
When you figure the cost of FC storage , the network , the backup , the service contracts and whatnot , it 's $ 30K-120K/TB .
You guys got some cool stuff - I 'll give you that .
But it ai n't worth 300x-1200x the price of consumer tech , especially when it lacks the density required for modern apps , and is limited in bandwidth to legacy tech , and ca n't serve the IOPs that VMHosts need .
As it is your validation teams are slacking in approving modern density drives .
For what you 're asking we could do RAMDisk .
Notwithstanding , 8Gbps is barely sufficient to feed one VMHost , let alone a blade chassis full of them .
And tape guys : seriously , it 's time to give it up .
Get a new job - please .</tokentext>
<sentencetext>Look, I know the parent post is going to garner a boatload of hate from the FC SAN people who will protest for various reasons that their unicorns and rainbows magnify the effectiveness of the underlying storage until it's cheap and performant.
I'm sorry, but you're all full of it (to be kind).
You need to find a new job.
When you figure the cost of FC storage, the network, the backup, the service contracts and whatnot, it's $30K-120K/TB.
You guys got some cool stuff - I'll give you that.
But it ain't worth 300x-1200x the price of consumer tech, especially when it lacks the density required for modern apps, and is limited in bandwidth to legacy tech, and can't serve the IOPs that VMHosts need.
As it is your validation teams are slacking in approving modern density drives.
For what you're asking we could do RAMDisk.
Notwithstanding, 8Gbps is barely sufficient to feed one VMHost, let alone a blade chassis full of them.
And tape guys: seriously, it's time to give it up.
Get a new job - please.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294890</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291366</id>
	<title>Re:Large sector size good?</title>
	<author>Cyberax</author>
	<datestamp>1267182720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Unless you use a clever filesystem which doesn't force file size to be a multiple of sector size.</p></htmltext>
<tokenext>Unless you use a clever filesystem which does n't force file size to be a multiple of sector size .</tokentext>
<sentencetext>Unless you use a clever filesystem which doesn't force file size to be a multiple of sector size.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292086</id>
	<title>Re:Large sector size good?</title>
	<author>rickb928</author>
	<datestamp>1267186920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>NetWare has been doing block suballocation for <a href="http://support.novell.com/techcenter/articles/ana19940603.html" title="novell.com">a while now</a> [novell.com].  Not a bad way to make use of a larger block size, and it was crucial when early 'large' drives had to tolerate large blocks, at least before LBA was common.  Novell tackled a lot of these problems fairly early as they led the way in PC servers and had to deal with big volumes fairly quickly.  Today, we take a lot of this for granted, and we are swimming in disk space so it's not a big deal.  But once upon a time, this was not so.  80MB was priceless.  Those sure were the days, grooming volumes and wincing over a 5MB file... ahh....</p><p>Not the <a href="http://hardware.slashdot.org/article.pl?sid=06/03/24/0619231" title="slashdot.org">first</a> [slashdot.org] time block size was trumpeted as the next Insanely Great Thing.</p><p>I wish Windows fully implemented it.  Hard to say, though, since the info is vague.</p></htmltext>
<tokenext>NetWare has been doing block suballocation for a while now [ novell.com ] .
Not a bad way to make use of a larger block size , and it was crucial when early 'large ' drives had to tolerate large blocks , at least before LBA was common .
Novell tackled a lot of these problems fairly early as they led the way in PC servers and had to deal with big volumes fairly quickly .
Today , we take a lot of this for granted , and we are swimming in disk space so it 's not a big deal .
But once upon a time , this was not so .
80MB was priceless .
Those sure were the days , grooming volumes and wincing over a 5MB file ... ahh ... Not the first [ slashdot.org ] time block size was trumpeted as the next Insanely Great Thing . I wish Windows fully implemented it .
Hard to say , though , since the info is vague .</tokentext>
<sentencetext>NetWare has been doing block suballocation for a while now [novell.com].
Not a bad way to make use of a larger block size, and it was crucial when early 'large' drives had to tolerate large blocks, at least before LBA was common.
Novell tackled a lot of these problems fairly early as they led the way in PC servers and had to deal with big volumes fairly quickly.
Today, we take a lot of this for granted, and we are swimming in disk space so it's not a big deal.
But once upon a time, this was not so.
80MB was priceless.
Those sure were the days, grooming volumes and wincing over a 5MB file... ahh... Not the first [slashdot.org] time block size was trumpeted as the next Insanely Great Thing. I wish Windows fully implemented it.
Hard to say, though, since the info is vague.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294484</id>
	<title>Not just for hard drives</title>
	<author>QuoteMstr</author>
	<datestamp>1267204200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Those of us who work with RAID arrays have cared about partition alignment for a long time. If a write spans two RAID-5 stripes, the RAID controller has to work twice as hard to correctly update the parity information. Aligning partitions and filesystem structures on stripe boundaries is essential to obtaining good performance on certain types of RAID arrays.</p></htmltext>
<tokenext>Those of us who work with RAID arrays have cared about partition alignment for a long time .
If a write spans two RAID-5 stripes , the RAID controller has to work twice as hard to correctly update the parity information .
Aligning partitions and filesystem structures on stripe boundaries is essential to obtaining good performance on certain types of RAID arrays .</tokentext>
<sentencetext>Those of us who work with RAID arrays have cared about partition alignment for a long time.
If a write spans two RAID-5 stripes, the RAID controller has to work twice as hard to correctly update the parity information.
Aligning partitions and filesystem structures on stripe boundaries is essential to obtaining good performance on certain types of RAID arrays.</sentencetext>
</comment>
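A small Python sketch of why alignment matters here (the 64 KB stripe size is an arbitrary example, not a recommendation): count how many stripes a write touches, since on RAID-5 each extra stripe means another parity read-modify-write.

def stripes_touched(offset, length, stripe):
    """Number of stripes a write of `length` bytes at byte `offset` spans."""
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return last - first + 1

STRIPE = 64 * 1024  # illustrative stripe size
print(stripes_touched(0, 4096, STRIPE))              # 1: aligned write, one parity update
print(stripes_touched(STRIPE - 2048, 4096, STRIPE))  # 2: straddles a boundary, two parity updates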
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291972</id>
	<title>Savings which do not get passed to the user</title>
	<author>Anonymous</author>
	<datestamp>1267186260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>FTA:<br>"A WD10EARS and a WD10EADS have exactly the same unformatted capacity and Windows reports both drives offer 931GB of storage space."</p><p>TFA didn't say whether the advanced format drive is faster or not, more reliable or not, so as far as this reader can see is that users are paying more $$$ for less ECC bits and bigger sectors (which means less space for the same number of randomly-sized files).</p><p>Clever maybe for WD, but no obvious benefit.</p></htmltext>
<tokenext>FTA : " A WD10EARS and a WD10EADS have exactly the same unformatted capacity and Windows reports both drives offer 931GB of storage space .
" TFA did n't say whether the advanced format drive is faster or not , more reliable or not , so as far as this reader can see is that users are paying more $ $ $ for less ECC bits and bigger sectors ( which means less space for the same number of randomly-sized files ) .Clever maybe for WD , but no obvious benefit .</tokentext>
<sentencetext>FTA:"A WD10EARS and a WD10EADS have exactly the same unformatted capacity and Windows reports both drives offer 931GB of storage space.
"TFA didn't say whether the advanced format drive is faster or not, more reliable or not, so as far as this reader can see is that users are paying more $$$ for less ECC bits and bigger sectors (which means less space for the same number of randomly-sized files).Clever maybe for WD, but no obvious benefit.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291380</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291338</id>
	<title>Re:Large sector size good?</title>
	<author>forkazoo</author>
	<datestamp>1267182540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><blockquote><div><p>I thought the point was to have a small sector size. With large sectors, say 4096K, a 1K file will actually take up the full 4096K. A 4097K file will take up 8194K. A thousand 1K files will end up taking up 4096000K. I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.</p></div></blockquote><p>The filesystem's minimum allocation unit size doesn't necessarily need to have a strong relationship with the physical sector size.  Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit.  (IIRC, Reiser is such an FS.)</p><p>Also, we are actually talking about 4 kilobyte sectors.  TFS refers to it as 4096k, which would be a 4 megabyte sector.  (Which is wildly wrong.)  So, worst case for your example of a thousand 1k files is actually 4 megabytes, not 4 gigabytes as you suggest.  And, really, if my 2 terabyte drive gets an extra 11% from the more efficient ECC with the 4k sectors, that gives me a free 220000 megabytes, which pretty adequately compensates for the 3 MB I theoretically lose in a worst case filesystem from your example thousand files.</p>
	</htmltext>
<tokentext>I thought the point was to have a small sector size .
With large sectors , say 4096K , a 1K file will actually take up the full 4096K .
A 4097K file will take up 8194K .
A thousand 1K files will end up taking up 4096000K .
I understand that with larger HDD 's that this becomes less of an issue , but unless you are dealing with a fewer number of large files , I do n't see how this can be more efficient when the size of every file is rounded up to the next 4096K .
The filesystem 's minimum allocation unit size does n't necessarily need to have a strong relationship with the physical sector size .
Some filesystems do n't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit .
( IIRC , Reiser is such an FS . )
Also , we are actually talking about 4 kilobyte sectors .
TFS refers to it as 4096k , which would be a 4 megabyte sector .
( Which is wildly wrong . )
So , worst case for your example of a thousand 1k files is actually 4 megabytes , not 4 gigabytes as you suggest .
And , really , if my 2 terabyte drive gets an extra 11 % from the more efficient ECC with the 4k sectors , that gives me a free 220000 megabytes , which pretty adequately compensates for the 3 MB I theoretically lose in a worst case filesystem from your example thousand files .</tokentext>
<sentencetext>I thought the point was to have a small sector size.
With large sectors, say 4096K, a 1K file will actually take up the full 4096K.
A 4097K file will take up 8194K.
A thousand 1K files will end up taking up 4096000K.
I understand that with larger HDD's that this becomes less of an issue, but unless you are dealing with a fewer number of large files, I don't see how this can be more efficient when the size of every file is rounded up to the next 4096K.
The filesystem's minimum allocation unit size doesn't necessarily need to have a strong relationship with the physical sector size.
Some filesystems don't have the behavior of rounding up the consumed space for small files because they will store multiple small files inside a single allocation unit.
(IIRC, Reiser is such an FS.)
Also, we are actually talking about 4 kilobyte sectors.
TFS refers to it as 4096k, which would be a 4 megabyte sector.
(Which is wildly wrong.)
So, worst case for your example of a thousand 1k files is actually 4 megabytes, not 4 gigabytes as you suggest.
And, really, if my 2 terabyte drive gets an extra 11% from the more efficient ECC with the 4k sectors, that gives me a free 220000 megabytes, which pretty adequately compensates for the 3 MB I theoretically lose in a worst case filesystem from your example thousand files.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216</parent>
</comment>
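A minimal sketch of the worst-case arithmetic forkazoo works through above. The 11% ECC-efficiency figure is the one the commenter uses, not a measured value.

KIB = 1024

def allocated(file_bytes, unit_bytes):
    """Space a file consumes when rounded up to whole allocation units."""
    return -(-file_bytes // unit_bytes) * unit_bytes  # ceiling division

# Worst case: 1000 files of 1 KiB each on 4 KiB allocation units.
waste = 1000 * (allocated(1 * KIB, 4 * KIB) - 1 * KIB)
print(waste / 1024**2, "MiB wasted")    # -> ~2.93 MiB, the "3 MB" in the comment

# The offsetting gain the commenter cites: 11% of a 2 TB (decimal) drive.
print(0.11 * 2 * 10**12 / 10**6, "MB")  # -> 220000.0 MB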
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291322
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291366
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291354
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31296354
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294890
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292474
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292660
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294778
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291256
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294490
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31297264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291666
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291482
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295196
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291286
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295620
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292964
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292086
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292084
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291608
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291326
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295780
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291812
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291972
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291380
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291750
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31296552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292534
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295336
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294890
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291286
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291744
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291372
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291530
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291256
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291406
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31293624
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291338
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_26_1943241_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291438
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291232
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291392
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291340
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291484
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291750
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291482
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291324
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291492
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292474
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295620
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295196
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292964
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292534
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291666
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31297264
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291226
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291216
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292086
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291366
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292660
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291322
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291812
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291286
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291386
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291628
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291326
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291554
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291338
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31293624
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291608
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295738
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291372
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291744
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31296552
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291380
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291972
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291262
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294490
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291304
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31292084
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291438
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295780
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291406
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291174
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294890
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31295336
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31296354
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291256
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31294778
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291530
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_26_1943241.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_26_1943241.31291446
</commentlist>
</conversation>
