<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_02_14_1541244</id>
	<title>Linux Not Quite Ready For New 4K-Sector Drives</title>
	<author>CmdrTaco</author>
	<datestamp>1266168600000</datestamp>
	<htmltext>Theovon writes <i>"We've seen a few stories recently about the new <a href="//hardware.slashdot.org/story/06/03/24/0619231/Changes-in-HDD-Sector-Usage-After-30-Years">Western Digital Green drives</a>. According to WD, their new 4096-byte sector drives are problematic for Windows XP users but not Linux or most other OSes.  Linux users should not be complacent about this, because <a href="http://www.osnews.com/story/22872/Linux_Not_Fully_Prepared_for_4096-Byte_Sector_Hard_Drives">not all the Linux tools like fdisk have caught up</a>.  The result is a reduction in write throughput by a factor of 3.3 across the board (a 230% overhead) when 4096-byte clusters are misaligned to 4096-byte physical sectors by one or more 512-byte logical sectors.  The author does some benchmarks to demonstrate this.  Also, from the comments on the article, it appears that even parted is not ready, since by default it aligns to 'cylinder' boundaries, which are not physical cylinder boundaries and are multiples of 63."</i></htmltext>
<tokentext>Theovon writes " We 've seen a few stories recently about the new Western Digital Green drives .
According to WD , their new 4096-byte sector drives are problematic for Windows XP users but not Linux or most other OSes .
Linux users should not be complacent about this , because not all the Linux tools like fdisk have caught up .
The result is a reduction in write throughput by a factor of 3.3 across the board ( a 230 % overhead ) when 4096-byte clusters are misaligned to 4096-byte physical sectors by one or more 512-byte logical sectors .
The author does some benchmarks to demonstrate this .
Also , from the comments on the article , it appears that even parted is not ready , since by default it aligns to 'cylinder ' boundaries , which are not physical cylinder boundaries and are multiples of 63 .
"</tokentext>
<sentencetext>Theovon writes "We've seen a few stories recently about the new Western Digital Green drives.
According to WD, their new 4096-byte sector drives are problematic for Windows XP users but not Linux or most other OSes.
Linux users should not be complacent about this, because not all the Linux tools like fdisk have caught up.
The result is a reduction in write throughput by a factor of 3.3 across the board (a 230% overhead) when 4096-byte clusters are misaligned to 4096-byte physical sectors by one or more 512-byte logical sectors.
The author does some benchmarks to demonstrate this.
Also, from the comments on the article, it appears that even parted is not ready, since by default it aligns to 'cylinder' boundaries, which are not physical cylinder boundaries and are multiples of 63.
"</sentencetext>
</article>
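The two figures quoted in the summary (a factor-of-3.3 throughput reduction and a 230% overhead) are the same measurement stated two ways; a quick illustrative check (not code from the article):

```python
# Sanity check of the summary's figures: a write-throughput reduction by a
# factor of 3.3 means misaligned writes take 3.3x as long, i.e. 230% extra time.

slowdown = 3.3                        # aligned time -> misaligned time multiplier
overhead_pct = (slowdown - 1) * 100   # extra time relative to the aligned case

print(f"{overhead_pct:.0f}% overhead")  # prints: 230% overhead
```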
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135536</id>
	<title>Check with your distribution</title>
	<author>macemoneta</author>
	<datestamp>1266173640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>I know that Fedora seems to have addressed this with <a href="http://koji.fedoraproject.org/koji/buildinfo?buildID=150382" title="fedoraproject.org">parted 2.1.1</a> [fedoraproject.org] and <a href="http://koji.fedoraproject.org/koji/buildinfo?buildID=149973" title="fedoraproject.org">util-linux-ng 2.1</a> [fedoraproject.org].  Both are scheduled for Fedora 13, but can be pulled into Fedora 12 by those getting the hardware early.</p></htmltext>
<tokentext>I know that Fedora seems to have addressed this with parted 2.1.1 [ fedoraproject.org ] and util-linux-ng 2.1 [ fedoraproject.org ] .
Both are scheduled for Fedora 13 , but can be pulled into Fedora 12 by those getting the hardware early .</tokentext>
<sentencetext>I know that Fedora seems to have addressed this with parted 2.1.1 [fedoraproject.org] and util-linux-ng 2.1 [fedoraproject.org].
Both are scheduled for Fedora 13, but can be pulled into Fedora 12 by those getting the hardware early.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138188</id>
	<title>not all 4K are created equal</title>
	<author>dltaylor</author>
	<datestamp>1266149940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There are four flavors of 4096 byte-sectored drives:</p><p>4096 physical/logical - the bookkeeping parts of the file system cause read/modify/write cycles because they are nearly always less than 4096 bytes, but the performance hit is relatively small; parted is badly broken.  If they're less than 2TiB, then you can use an MBR, otherwise the kernel is broken for partition sizes.</p><p>4096 physical/512 logical; LBA 0 aligned "off by one" with physical block 0 - created to deal with stupid BIOS (and Win XP, where some drivers rely on it), mostly work fine with the default tools, but still have the bookkeeping issues.  Because Win Vista/7 and OS X use GPT AND don't worry about "track" boundaries, they work better than Linux.</p><p>4096 physical/512 logical; LBA 0 aligned with physical block 0 - works great with Win Vista/7 and OS X, but the Linux installers are still aligning on the bogus track boundary, and not querying the physical/logical alignment.  Performance, without some very smart tweaking by the person doing the formatting, REALLY stinks.</p><p>4096 physical/512 logical, but reporting 512 physical (usually aligned 0 for 0) - again to deal with BIOS/Win XP.  Basically, treat ALL drives produced starting with 2010 as having 4K sectors, aligned 0 for 0, unless they explicitly report otherwise, and use the same human-intervention-required layout as above.</p><p>Currently, the tools are the most pressing issue, since they are really broken in this respect, but there are kernel issues, as well, with drives larger than 2TiB and 4096-byte sectors.</p></htmltext>
<tokentext>There are four flavors of 4096 byte-sectored drives : 4096 physical/logical - the bookkeeping parts of the file system cause read/modify/write cycles because they are nearly always less than 4096 bytes , but the performance hit is relatively small ; parted is badly broken .
If they 're less than 2TiB , then you can use an MBR , otherwise the kernel is broken for partition sizes . 4096 physical/512 logical ; LBA 0 aligned " off by one " with physical block 0 - created to deal with stupid BIOS ( and Win XP , where some drivers rely on it ) , mostly work fine with the default tools , but still have the bookkeeping issues .
Because Win Vista/7 and OS X use GPT AND do n't worry about " track " boundaries , they work better than Linux . 4096 physical/512 logical ; LBA 0 aligned with physical block 0 - works great with Win Vista/7 and OS X , but the Linux installers are still aligning on the bogus track boundary , and not querying the physical/logical alignment .
Performance , without some very smart tweaking by the person doing the formatting , REALLY stinks . 4096 physical/512 logical , but reporting 512 physical ( usually aligned 0 for 0 ) - again to deal with BIOS/Win XP .
Basically , treat ALL drives produced starting with 2010 as having 4K sectors , aligned 0 for 0 , unless they explicitly report otherwise , and use the same human-intervention-required layout as above . Currently , the tools are the most pressing issue , since they are really broken in this respect , but there are kernel issues , as well , with drives larger than 2TiB and 4096-byte sectors .</tokentext>
<sentencetext>There are four flavors of 4096 byte-sectored drives: 4096 physical/logical - the bookkeeping parts of the file system cause read/modify/write cycles because they are nearly always less than 4096 bytes, but the performance hit is relatively small; parted is badly broken.
If they're less than 2TiB, then you can use an MBR, otherwise the kernel is broken for partition sizes. 4096 physical/512 logical; LBA 0 aligned "off by one" with physical block 0 - created to deal with stupid BIOS (and Win XP, where some drivers rely on it), mostly work fine with the default tools, but still have the bookkeeping issues.
Because Win Vista/7 and OS X use GPT AND don't worry about "track" boundaries, they work better than Linux. 4096 physical/512 logical; LBA 0 aligned with physical block 0 - works great with Win Vista/7 and OS X, but the Linux installers are still aligning on the bogus track boundary, and not querying the physical/logical alignment.
Performance, without some very smart tweaking by the person doing the formatting, REALLY stinks. 4096 physical/512 logical, but reporting 512 physical (usually aligned 0 for 0) - again to deal with BIOS/Win XP.
Basically, treat ALL drives produced starting with 2010 as having 4K sectors, aligned 0 for 0, unless they explicitly report otherwise, and use the same human-intervention-required layout as above. Currently, the tools are the most pressing issue, since they are really broken in this respect, but there are kernel issues, as well, with drives larger than 2TiB and 4096-byte sectors.</sentencetext>
</comment>
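The alignment rule running through the comment above reduces to simple modular arithmetic; a minimal sketch (the sector sizes are the standard 512-byte logical / 4096-byte physical values, the helper name is made up):

```python
LOGICAL_SECTOR = 512     # bytes per logical (LBA) sector
PHYSICAL_SECTOR = 4096   # bytes per physical sector on Advanced Format drives

def is_aligned(start_lba: int) -> bool:
    """A partition is aligned when its starting byte offset falls on a
    physical-sector boundary."""
    return (start_lba * LOGICAL_SECTOR) % PHYSICAL_SECTOR == 0

print(is_aligned(63))    # False: the classic DOS "track boundary" start
print(is_aligned(2048))  # True: the 1 MiB start modern tools use
```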
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135916</id>
	<title>Linux isn't good enough to wipe my ass.</title>
	<author>Anonymous</author>
	<datestamp>1266177180000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Linux is one of the most over rated lumps of shit out there today.</htmltext>
<tokentext>Linux is one of the most over rated lumps of shit out there today .</tokentext>
<sentencetext>Linux is one of the most over rated lumps of shit out there today.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135580</id>
	<title>I just bought one of these</title>
	<author>xhorder</author>
	<datestamp>1266174060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Can some kind soul tell me specifically what version of what utility I need to use for me to be OK? Or what settings?</p><p>My head hurts from trying to understand cylinders and sectors and drive geometry...</p><p>thanks!</p></htmltext>
<tokentext>Can some kind soul tell me specifically what version of what utility I need to use for me to be OK ?
Or what settings ? My head hurts from trying to understand cylinders and sectors and drive geometry ... thanks !</tokentext>
<sentencetext>Can some kind soul tell me specifically what version of what utility I need to use for me to be OK?
Or what settings? My head hurts from trying to understand cylinders and sectors and drive geometry... thanks!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136120</id>
	<title>DragonFly's solution</title>
	<author>m.dillon</author>
	<datestamp>1266179100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>We're adjusting our disklabel64 utility and kernel support to set the partition base offset such that it is physically aligned instead of slice-aligned, and we are using 32K alignment.  That should fix the problem without having to mess around with fdisk.</p><p>The DragonFly 64-bit disklabel structure uses 64-bit byte offsets instead of sector addressing to specify everything.  It ensures things are at least sector aligned but we wanted to make disk images more portable across devices with potentially different sector sizes.  The HAMMER fs uses byte-granular addressing for the same reason, 16K aligned.</p><p>-Matt</p></htmltext>
<tokentext>We 're adjusting our disklabel64 utility and kernel support to set the partition base offset such that it is physically aligned instead of slice-aligned , and we are using 32K alignment .
That should fix the problem without having to mess around with fdisk . The DragonFly 64-bit disklabel structure uses 64-bit byte offsets instead of sector addressing to specify everything .
It ensures things are at least sector aligned but we wanted to make disk images more portable across devices with potentially different sector sizes .
The HAMMER fs uses byte-granular addressing for the same reason , 16K aligned . -Matt</tokentext>
<sentencetext>We're adjusting our disklabel64 utility and kernel support to set the partition base offset such that it is physically aligned instead of slice-aligned, and we are using 32K alignment.
That should fix the problem without having to mess around with fdisk. The DragonFly 64-bit disklabel structure uses 64-bit byte offsets instead of sector addressing to specify everything.
It ensures things are at least sector aligned but we wanted to make disk images more portable across devices with potentially different sector sizes.
The HAMMER fs uses byte-granular addressing for the same reason, 16K aligned. -Matt</sentencetext>
</comment>
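With byte-granular offsets like DragonFly's disklabel64 uses, physical alignment reduces to rounding the partition base up to the next multiple of the chosen alignment. An illustrative sketch (not DragonFly's actual code):

```python
ALIGNMENT = 32 * 1024  # 32K alignment, as the comment above describes

def align_up(offset: int, alignment: int = ALIGNMENT) -> int:
    """Round a byte offset up to the next alignment boundary."""
    return (offset + alignment - 1) // alignment * alignment

print(align_up(63 * 512))  # 32768: the old 63-sector start, rounded up to 32K
print(align_up(32768))     # 32768: already-aligned offsets are unchanged
```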
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135496</id>
	<title>It's still Windows' fault</title>
	<author>Anonymous</author>
	<datestamp>1266173280000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Of course fdisk/parted only follow the convention to align to "cylinder" boundaries because Windows and some of its tools will do weird things if a partition is not aligned like that. Linux doesn't have any problems with arbitrary partition offsets and lengths.</p></htmltext>
<tokentext>Of course fdisk/parted only follow the convention to align to " cylinder " boundaries because Windows and some of its tools will do weird things if a partition is not aligned like that .
Linux does n't have any problems with arbitrary partition offsets and lengths .</tokentext>
<sentencetext>Of course fdisk/parted only follow the convention to align to "cylinder" boundaries because Windows and some of its tools will do weird things if a partition is not aligned like that.
Linux doesn't have any problems with arbitrary partition offsets and lengths.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135704</id>
	<title>Re:So don't do that...</title>
	<author>Anonymous</author>
	<datestamp>1266174960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Fdisk is so retardedly out-of-date that 4k-block alignment is pretty low on the list of worries. It's fairly easy, so it might get done, but there are much more pressing matters for fdisk -- GPT support, not assuming MBR/DOS as the fall-through label, etc. -- so 4k-block alignment stays low on the list.</p></htmltext>
<tokentext>Fdisk is so retardedly out-of-date that 4k-block alignment is pretty low on the list of worries .
It 's fairly easy , so it might get done , but there are much more pressing matters for fdisk -- GPT support , not assuming MBR/DOS as the fall-through label , etc . -- so 4k-block alignment stays low on the list .</tokentext>
<sentencetext>Fdisk is so retardedly out-of-date that 4k-block alignment is pretty low on the list of worries.
It's fairly easy, so it might get done, but there are much more pressing matters for fdisk -- GPT support, not assuming MBR/DOS as the fall-through label, etc. -- so 4k-block alignment stays low on the list.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135492</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135568</id>
	<title>Re:Open Source to the rescue</title>
	<author>Anonymous</author>
	<datestamp>1266173940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The beauty of Open Source is that you can code it by yourself, so why don't you start right now? Another story is distribution and maintainability.</p></htmltext>
<tokentext>The beauty of Open Source is that you can code it by yourself , so why do n't you start right now ?
Another story is distribution and maintainability .</tokentext>
<sentencetext>The beauty of Open Source is that you can code it by yourself, so why don't you start right now?
Another story is distribution and maintainability.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136980</id>
	<title>gdisk</title>
	<author>Anonymous</author>
	<datestamp>1266141780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You shouldn't be using fdisk anyway; the MBR partition table is a dinosaur compared to GPT. Only Microsoft OSs still rely on the MBR; all others can handle GPT partitions without issue.</p><p>Its replacement (direct replacement), same interface, same feature set:<br>http://www.rodsbooks.com/gdisk/walkthrough.html</p><p>And yes, gdisk does handle sector alignment. It defaults to 4k-alignment on all drives larger than 800GB. The default can be changed, of course.</p></htmltext>
<tokentext>You should n't be using fdisk anyway ; the MBR partition table is a dinosaur compared to GPT .
Only Microsoft OSs still rely on the MBR ; all others can handle GPT partitions without issue . Its replacement ( direct replacement ) , same interface , same feature set : http://www.rodsbooks.com/gdisk/walkthrough.html And yes , gdisk does handle sector alignment .
It defaults to 4k-alignment on all drives larger than 800GB .
The default can be changed , of course .</tokentext>
<sentencetext>You shouldn't be using fdisk anyway; the MBR partition table is a dinosaur compared to GPT.
Only Microsoft OSs still rely on the MBR; all others can handle GPT partitions without issue. Its replacement (direct replacement), same interface, same feature set: http://www.rodsbooks.com/gdisk/walkthrough.html And yes, gdisk does handle sector alignment.
It defaults to 4k-alignment on all drives larger than 800GB.
The default can be changed, of course.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136194</id>
	<title>Simple Solution</title>
	<author>Khyber</author>
	<datestamp>1266179700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't partition the drive in XP - format the entire thing and don't split it apart. Get a secondary physical drive.</p></htmltext>
<tokentext>Do n't partition the drive in XP - format the entire thing and do n't split it apart .
Get a secondary physical drive .</tokentext>
<sentencetext>Don't partition the drive in XP - format the entire thing and don't split it apart.
Get a secondary physical drive.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137968</id>
	<title>Re:Partitions are obsolete</title>
	<author>BitZtream</author>
	<datestamp>1266148260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you're worried about one big filesystem going south, then I suggest you start using a modern filesystem without those concerns.</p><p>We've got well past the point where that should be an issue on any modern system.</p><p>You're acting like it's still the 70s; it's not, and we've learned a few things since then.</p></htmltext>
<tokentext>If you 're worried about one big filesystem going south , then I suggest you start using a modern filesystem without those concerns . We 've got well past the point where that should be an issue on any modern system . You 're acting like it 's still the 70s ; it 's not , and we 've learned a few things since then .</tokentext>
<sentencetext>If you're worried about one big filesystem going south, then I suggest you start using a modern filesystem without those concerns. We've got well past the point where that should be an issue on any modern system. You're acting like it's still the 70s; it's not, and we've learned a few things since then.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135952</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138896</id>
	<title>Re:Poorly researched article.</title>
	<author>Theovon</author>
	<datestamp>1266154500000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>I wrote the linked article.</p><p>I completely agree that the article is narrowly focused.  VERY narrow.  My objective was to demonstrate a problem and point out that Linux has not FULLY adapted.  I didn't say Linux devs were idiots or that it would never be ready.  I was trying to express the idea that Linux [distros in general but perhaps not all] is not QUITE ready for these drives, because not all the tools have fully adapted.  Some tools make no mention of any problems in their man pages.  Some (like parted's defaults) are even misleading if you mistakenly think that "track aligned" is a good thing.</p><p>And I was trying to do that in the very limited number of words I had available for a title.</p><p>Also, WD claimed that Linux is unaffected.  Some distros probably are, but this could lead people to believe that the statement is universally true, which it isn't.  Thus, my over-all objective is to educate people to the fact that if they don't know what they're doing, they can get this wrong.  There are lots of mistakes I've made where I wished that someone had mentioned some critical fact on a how-to (like, don't use dmraid/fakeraid for RAID1 because reads aren't load-balanced; use mdraid instead).  I've filed plenty of bug reports on such issues.</p></htmltext>
<tokentext>I wrote the linked article . I completely agree that the article is narrowly focused .
VERY narrow .
My objective was to demonstrate a problem and point out that Linux has not FULLY adapted .
I did n't say Linux devs were idiots or that it would never be ready .
I was trying to express the idea that Linux [ distros in general but perhaps not all ] is not QUITE ready for these drives , because not all the tools have fully adapted .
Some tools make no mention of any problems in their man pages .
Some ( like parted 's defaults ) are even misleading if you mistakenly think that " track aligned " is a good thing . And I was trying to do that in the very limited number of words I had available for a title . Also , WD claimed that Linux is unaffected .
Some distros probably are , but this could lead people to believe that the statement is universally true , which it is n't .
Thus , my over-all objective is to educate people to the fact that if they do n't know what they 're doing , they can get this wrong .
There are lots of mistakes I 've made where I wished that someone had mentioned some critical fact on a how-to ( like , do n't use dmraid/fakeraid for RAID1 because reads are n't load-balanced ; use mdraid instead ) .
I 've filed plenty of bug reports on such issues .</tokentext>
<sentencetext>I wrote the linked article. I completely agree that the article is narrowly focused.
VERY narrow.
My objective was to demonstrate a problem and point out that Linux has not FULLY adapted.
I didn't say Linux devs were idiots or that it would never be ready.
I was trying to express the idea that Linux [distros in general but perhaps not all] is not QUITE ready for these drives, because not all the tools have fully adapted.
Some tools make no mention of any problems in their man pages.
Some (like parted's defaults) are even misleading if you mistakenly think that "track aligned" is a good thing. And I was trying to do that in the very limited number of words I had available for a title. Also, WD claimed that Linux is unaffected.
Some distros probably are, but this could lead people to believe that the statement is universally true, which it isn't.
Thus, my over-all objective is to educate people to the fact that if they don't know what they're doing, they can get this wrong.
There are lots of mistakes I've made where I wished that someone had mentioned some critical fact on a how-to (like, don't use dmraid/fakeraid for RAID1 because reads aren't load-balanced; use mdraid instead).
I've filed plenty of bug reports on such issues.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136536</id>
	<title>Other possible consequence of misalignment</title>
	<author>Anonymous</author>
	<datestamp>1266139020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I didn't notice a performance drop in Linux when using such a 4k sector disk with misaligned partitions, but random stalls lasting up to about a minute. Some kernel threads (like md?_raid1, kdmflush) were in an uninterruptible sleep, and the hd led was continuously lit. It seemed like no data transfer happened at all. And after some time the disk seemed to work again as it should.</p><p>After aligning the partitions, by using 56 sectors per track, the disk seems to work flawlessly. Maybe it also works faster now, but I did not check it.</p></htmltext>
<tokentext>I did n't notice a performance drop in Linux when using such a 4k sector disk with misaligned partitions , but random stalls lasting up to about a minute .
Some kernel threads ( like md?_raid1 , kdmflush ) were in an uninterruptible sleep , and the hd led was continuously lit .
It seemed like no data transfer happened at all .
And after some time the disk seemed to work again as it should . After aligning the partitions , by using 56 sectors per track , the disk seems to work flawlessly .
Maybe it also works faster now , but I did not check it .</tokentext>
<sentencetext>I didn't notice a performance drop in Linux when using such a 4k sector disk with misaligned partitions, but random stalls lasting up to about a minute.
Some kernel threads (like md?_raid1, kdmflush) were in an uninterruptible sleep, and the hd led was continuously lit.
It seemed like no data transfer happened at all.
And after some time the disk seemed to work again as it should. After aligning the partitions, by using 56 sectors per track, the disk seems to work flawlessly.
Maybe it also works faster now, but I did not check it.</sentencetext>
</comment>
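The 56-sectors-per-track fix reported above works because DOS-style tools start partitions on "track" boundaries, i.e. multiples of the sectors-per-track value, so that value alone decides whether every partition start can be physically aligned. A small illustrative check (names made up):

```python
LOGICAL, PHYSICAL = 512, 4096  # bytes per logical / physical sector

def track_boundary_aligned(sectors_per_track: int) -> bool:
    """True if every "track boundary" (a multiple of sectors_per_track
    logical sectors) lands on a 4096-byte physical-sector boundary."""
    return (sectors_per_track * LOGICAL) % PHYSICAL == 0

print(track_boundary_aligned(63))  # False: 63 * 512 = 32256, not a multiple of 4096
print(track_boundary_aligned(56))  # True:  56 * 512 = 28672 = 7 * 4096
```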
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137004</id>
	<title>I ain't your bitch, so fix it yourself</title>
	<author>Anonymous</author>
	<datestamp>1266141960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>For the record: I love my job (tinkering with Linux) but I hate my customers (Linuz zealots who do nothing practical but cheerlead on the sides).<br>Screw you, little fucks.</p><p>---<br>no-name kernel hacker.</p></htmltext>
<tokentext>For the record : I love my job ( tinkering with Linux ) but I hate my customers ( Linuz zealots who do nothing practical but cheerlead on the sides ) . Screw you , little fucks . --- no-name kernel hacker .</tokentext>
<sentencetext>For the record: I love my job (tinkering with Linux) but I hate my customers (Linuz zealots who do nothing practical but cheerlead on the sides). Screw you, little fucks. --- no-name kernel hacker.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135922</id>
	<title>Re:Open Source to the rescue</title>
	<author>hedwards</author>
	<datestamp>1266177300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Trust me, it's right after adding GPT support to Windows XP Home on their agenda. Shouldn't take more than, I don't know, forever to add.</htmltext>
<tokentext>Trust me , it 's right after adding GPT support to Windows XP Home on their agenda .
Should n't take more than , I do n't know , forever to add .</tokentext>
<sentencetext>Trust me, it's right after adding GPT support to Windows XP home on their agenda.
Shouldn't take more than, I don't know, forever to add.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135510</id>
	<title>Re:Open Source to the rescue</title>
	<author>Anonymous</author>
	<datestamp>1266173400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>That's the beauty of Open Source.</i>
<br>
<br>
And that couldn't possibly happen with closed source?</htmltext>
<tokentext>That 's the beauty of Open Source .
And that could n't possibly happen with closed source ?</tokentext>
<sentencetext>That's the beauty of Open Source.
And that couldn't possibly happen with closed source?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31142304</id>
	<title>Re:Poorly researched article.</title>
	<author>jrumney</author>
	<datestamp>1266231600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>one particular way to install a drive, on one (un-named) version of Gentoo, on one particular model of a WD drive that had a bugzilla entry entered by the author all of 2 days ago</p></div></blockquote><p>
It's the default way to install a drive on just about every Linux distribution out there.  I bought one of these drives unsuspectingly last week to replace a drive which had died, and installed Ubuntu 9.10 on it. Sure enough, my root partition starts on sector 63, and every other partition is also on an odd boundary.  So is there any utility for Linux that can do what the Windows utility alluded to in TFA does and shift all my painstakingly restored data by 512B, or am I stuck with reformatting again and restoring from backup yet again?
</p>
	</htmltext>
<tokentext>one particular way to install a drive , on one ( un-named ) version of Gentoo , on one particular model of a WD drive that had a bugzilla entry entered by the author all of 2 days ago It 's the default way to install a drive on just about every Linux distribution out there .
I bought one of these drives unsuspectingly last week to replace a drive which had died , and installed Ubuntu 9.10 on it .
Sure enough , my root partition starts on sector 63 , and every other partition is also on an odd boundary .
So is there any utility for Linux that can do what the Windows utility alluded to in TFA does and shift all my painstakingly restored data by 512B , or am I stuck with reformatting again and restoring from backup yet again ?</tokentext>
<sentencetext>one particular way to install a drive, on one (un-named) version of Gentoo, on one particular model of a WD drive that had a bugzilla entry entered by the author all of 2 days ago
It's the default way to install a drive on just about every Linux distribution out there.
I bought one of these drives unsuspectingly last week to replace a drive which had died, and installed Ubuntu 9.10 on it.
Sure enough, my root partition starts on sector 63, and every other partition is also on an odd boundary.
So is there any utility for Linux that can do what the Windows utility alluded to in TFA does and shift all my painstakingly restored data by 512B, or am I stuck with reformatting again and restoring from backup yet again?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190</parent>
</comment>
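The misalignment this commenter describes (a root partition starting at logical sector 63) is mechanical to check: a partition whose start LBA, in 512-byte logical sectors, is divisible by 8 begins on a 4096-byte physical sector boundary. A minimal Python sketch, with illustrative sector numbers rather than values read from any real disk:

```python
# Minimal sketch: decide whether a partition's start LBA (counted in
# 512-byte logical sectors) lands on a 4096-byte physical sector.
# 4096 / 512 = 8, so the start sector must be divisible by 8.

def is_4k_aligned(start_sector: int) -> bool:
    """True if a 512B-sector LBA coincides with a 4K physical boundary."""
    return start_sector % 8 == 0

# The classic DOS-era default start of sector 63 is misaligned:
print(is_4k_aligned(63))    # False
# Starts at sector 64 or 2048 (the later distro default) are fine:
print(is_4k_aligned(2048))  # True
```

Running this against the commenter's layout (start sector 63) confirms every partition inherits the misalignment, since subsequent partitions are placed relative to the first.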
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135762</id>
	<title>Drive lies and future fixes</title>
	<author>Sits</author>
	<datestamp>1266175500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>There is an excellent thread talking about how <a href="http://thread.gmane.org/gmane.linux.utilities.util-linux-ng/2926/focus=2937" title="gmane.org">recent (2.6.31+) Linux kernels try to report the underlying hard drive architecture</a> [gmane.org] (found via the <a href="http://www.osnews.com/permalink?409284" title="osnews.com">OSNews comments</a> [osnews.com]). Alas, it looks like some of these drives are not reporting this data correctly and thus automatic adjustment (at partitioning time) is not taking place. It looks like in the future, rather than trying to do detection by reported capability, <a href="http://thread.gmane.org/gmane.linux.utilities.util-linux-ng/2926/focus=2978" title="gmane.org">fdisk (and hopefully gparted) will default to sectors of 1MiB if the topology can't be found by default</a> [gmane.org] (unless your media is small).</p><p>Additionally, I gather that <a href="http://storagemojo.com/2009/12/21/why-we-need-4k-drives/#comment-207299" title="storagemojo.com">recent Fedoras will try to adjust things like LVM to match larger sectors too</a> [storagemojo.com]. Hopefully whatever is laying out LVM will be fixed too.</p><p>Coincidentally, it looks like Oracle have a very committed dev trying to make this stuff work by default...</p></htmltext>
<tokenext>There is an excellent thread talking about how recent ( 2.6.31 + ) linux kernels try to report the underlying hard drive architecture [ gmane.org ] ( found via the OSNews comments [ osnews.com ] ) .
Alas , it looks like some of these drives are not reporting this data correctly and thus automatic adjustment ( at partitioning time ) is not taking place .
It looks like in the future rather than trying to do detection by reported capability fdisk ( and hopefully gparted ) will default to sectors of 1MiB if the topology ca n't be found by default [ gmane.org ] ( unless your media is small ) .Additionally , I gather that recent Fedoras will try to adjust things like LVM to match larger sectors too [ storagemojo.com ] .
Hopefully whatever is laying out LVM will also be fixed too.Coincidentally , it looks like Oracle have a very committed dev trying to make this stuff work by default.. .</tokenext>
<sentencetext>There is an excellent thread talking about how recent (2.6.31+) linux kernels try to report the underlying hard drive architecture [gmane.org] (found via the OSNews comments [osnews.com]).
Alas, it looks like some of these drives are not reporting this data correctly and thus automatic adjustment (at partitioning time) is not taking place.
It looks like in the future rather than trying to do detection by reported capability fdisk (and hopefully gparted) will default to sectors of 1MiB if the topology can't be found by default [gmane.org] (unless your media is small).Additionally, I gather that recent Fedoras will try to adjust things like LVM to match larger sectors too [storagemojo.com].
Hopefully whatever is laying out LVM will also be fixed too.Coincidentally, it looks like Oracle have a very committed dev trying to make  this stuff work by default...</sentencetext>
</comment>
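The topology reporting this comment refers to is exposed by the kernel as plain-text attributes under `/sys/block/<dev>/queue`. A hedged sketch of reading them; the attribute names are the standard block-queue ones, and the demo path below is a stand-in since no real drive is assumed. A drive that lies about its geometry (as some of these WD models reportedly do) will show `physical_block_size = 512` here even though the media uses 4096-byte sectors:

```python
# Hedged sketch: read the I/O topology hints that 2.6.31+ kernels export
# via sysfs, as discussed in the linked util-linux thread.
from pathlib import Path

def read_topology(device: str, sys_root: str = "/sys/block") -> dict:
    """Return the integer queue topology attributes for a block device."""
    queue = Path(sys_root) / device / "queue"
    attrs = ("logical_block_size", "physical_block_size", "minimum_io_size")
    # Each attribute is a one-line decimal file, e.g. "4096\n".
    return {a: int((queue / a).read_text()) for a in attrs}
```

On a real system, `read_topology("sda")` consults `/sys/block/sda/queue`; a truthful 4K drive reports `logical_block_size` 512 with `physical_block_size` 4096, which is what partitioning tools would need in order to auto-align.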
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135426</id>
	<title>Good thread on this.</title>
	<author>Anonymous</author>
	<datestamp>1266172680000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><a href="http://www.osnews.com/thread?409281" title="osnews.com" rel="nofollow">http://www.osnews.com/thread?409281</a> [osnews.com]</htmltext>
<tokenext>http : //www.osnews.com/thread ? 409281 [ osnews.com ]</tokentext>
<sentencetext>http://www.osnews.com/thread?409281 [osnews.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135952</id>
	<title>Re:Partitions are obsolete</title>
	<author>Anonymous</author>
	<datestamp>1266177540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Which is nice if you want to ensure that you've got the lowest possible reliability and safety for your data.  While you're at it, make sure you're using a striped non-redundant array of disks as well; best to use at least 4 in the array, otherwise you might get some of your data back.<br> <br>

You've got it exactly backwards: people shouldn't be partitioning disks into one huge partition. They should be able to split things up a bit to keep rapidly changing directories separate from mostly static ones and to manage the risk of filesystem corruption destroying important files.</htmltext>
<tokenext>Which is nice if you 're wanting to ensure that you 've got the lowest possible reliability and safety for your data .
While you 're at it , make sure you 're using a striped non-redundant array of disks as well , best use at least 4 in the array , otherwise you might get some of your data back .
You 've got it exactly backwards , people should n't be partitioning disks into one huge partition .
They should be able to split things up a bit to keep rapidly changing directories from mostly static ones and to manage the risk of filesystem corruption destroying important files .</tokenext>
<sentencetext>Which is nice if you're wanting to ensure that you've got the lowest possible reliability and safety for your data.
While you're at it, make sure you're using a striped non-redundant array of disks as well, best use at least 4 in the array, otherwise you might get some of your data back.
You've got it exactly backwards, people shouldn't be partitioning disks into one huge partition.
They should be able to split things up a bit to keep rapidly changing directories from mostly static ones and to manage the risk of filesystem corruption destroying important files.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135556</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136138</id>
	<title>Re:Open Source to the rescue</title>
	<author>Blakey Rat</author>
	<datestamp>1266179280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, since Windows Vista and Windows 7 already support this, I'd say "fairly soon" is demonstrably false. Unless you're happy with "fairly soon" being "after everybody else has been doing it for several years."</p></htmltext>
<tokenext>Well , since Windows Vista and Windows 7 already support this , I 'd say " fairly soon " is demonstrably false .
Unless you 're happy with " fairly soon " being " after everybody else has been doing it for several years .
"</tokenext>
<sentencetext>Well, since Windows Vista and Windows 7 already support this, I'd say "fairly soon" is demonstrably false.
Unless you're happy with "fairly soon" being "after everybody else has been doing it for several years.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135562</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138194</id>
	<title>All of a sudden?</title>
	<author>Waccoon</author>
	<datestamp>1266149940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I had never ever heard of drive alignment until I bought an SSD.</p><p>Not to be unhelpfully pessimistic, but... didn't it even occur to people that drive alignment might be important?  Is 4K drive alignment just Y2K for hard drives?  Why did people only start thinking about this now?</p></htmltext>
<tokenext>I had never ever heard of drive alignment until I bought an SSD.Not to be unhelpfully pessimistic , but... did n't it even occur to people that drive alignment might be important ?
Is 4K drive alignment just Y2K for hard drives ?
Why did people only start thinking about this now ?</tokenext>
<sentencetext>I had never ever heard of drive alignment until I bought an SSD.Not to be unhelpfully pessimistic, but... didn't it even occur to people that drive alignment might be important?
Is 4K drive alignment just Y2K for hard drives?
Why did people only start thinking about this now?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</id>
	<title>Open Source to the rescue</title>
	<author>bogaboga</author>
	<datestamp>1266172980000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I am no kernel hacker but I can almost guarantee that some kernel hacker will provide a solution to this "shortcoming" fairly soon.</p><p>That's the beauty of Open Source.</p><p>I am aware though that "fairly soon" means many things to many people, which means that there could be a substantial delay before we get a working solution to this issue.</p><p>I am optimistic nevertheless.</p><p>Request to Western Digital: Provide all the information needed to develop a solution.</p></htmltext>
<tokenext>I am no kernel hacker but I can almost guarantee that some kernel hacker will provide a solution to this " short coming " fairly soon.That 's the beauty of Open Source.I am aware though that " fairly soon " means many things to many people ; which means that there could be a substantial delay before we get a working solution to this issue.I am optimistic nevertheless.Request to Western Digital : Provide all the information needed to develop a solution .</tokentext>
<sentencetext>I am no kernel hacker but I can almost guarantee that some kernel hacker will provide a solution to this "short coming" fairly soon.That's the beauty of Open Source.I am aware though that "fairly soon" means many things to many people; which means that there could be a substantial delay before we get a working solution to this issue.I am optimistic nevertheless.Request to Western Digital: Provide all the information needed to develop a solution.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135556</id>
	<title>Partitions are obsolete</title>
	<author>Anonymous</author>
	<datestamp>1266173820000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>Easiest fix: stop dividing your disks into partitions.</p></htmltext>
<tokenext>Easiest fix : stop dividing your disks into partitions .</tokentext>
<sentencetext>Easiest fix: stop dividing your disks into partitions.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136748</id>
	<title>Re:Open Source to the rescue</title>
	<author>mcrbids</author>
	<datestamp>1266140160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually, the beauty of closed source is that the OS "supports" it out of the box, except that it's buggy as all get out, works very slowly for stuff that was much faster in 1995, and despite many users noticing and complaining about the problem both to the vendor and in various blogs and online forums, it doesn't get fixed for months or years while a qualified dev is finally diverted when the problem is so severe that even the US Army won't buy until it's fixed.</p></htmltext>
<tokenext>Actually , the beauty of closed source is that the OS " supports " it out of the box , except that it 's buggy as all get out , works very slowly for stuff that was much faster in 1995 , and despite many users noticing and complaining about the problem both to the vendor and in various blogs and online forums , it does n't get fixed for months or years while a qualified dev is finally diverted when the problem is so severe that even the US Army wo n't buy until it 's fixed .</tokenext>
<sentencetext>Actually, the beauty of closed source is that the OS "supports" it out of the box, except that it's buggy as all get out, works very slowly for stuff that was much faster in 1995, and despite many users noticing and complaining about the problem both to the vendor and in various blogs and online forums, it doesn't get fixed for months or years while a qualified dev is finally diverted when the problem is so severe that even the US Army won't buy until it's fixed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31153460</id>
	<title>Re:Set 32 sectors per track</title>
	<author>RivieraKid</author>
	<datestamp>1266315420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's not so much a problem with track-to-track seeking, as it is a problem with the misaligned I/O itself.</p><p>If your on-disk structures do not align with the physical structures, then you experience a penalty for every host I/O to that disk. It's exactly the same issue with enterprise class storage arrays. The volume on the array is offset from the very beginning of the physical disk, so it is correctly aligned. However, every x86 style partition table laid down on those volumes consumes the first 63K of the disk, meaning the data starts at the 64K mark - 1K before the start of the next disk track.</p><p>So, in a nutshell - for every host I/O, you will generate at least one additional I/O for the disk due to the badly aligned start of the partition table. Worst case, you'll generate two additional disk I/Os (for multi-track I/O) since the last track is also misaligned. For large block sequential I/O, you can pretty much ignore this effect; for small block random I/O, it's gonna kill performance. Align your partitions to 64K and the problem goes away - of course, this has to be done before you put any data on the partition and is not done by default with any OS I'm aware of today, which explains why nobody ever does it.</p><p>Note, since I mentioned storage arrays I'll clarify that this is not the same as the RAID write penalty.</p><p>Note also, this is usually only a problem for x86 style partitioned disks.</p></htmltext>
<tokenext>It 's not so much a problem with track-to-track seeking , as it is a problem with the misaligned I/O itself.If your on-disk structures do not align with the physical structures , then you experience a penalty for every host I/O to that disk .
It 's exactly the same issue with enterprise class storage arrays .
the volume on the array is offset from the very beginning of the physical disk so is correctly aligned .
However , every x86 style partition table laid down on those volumes consume the first 63K of the disk , meaning the data starts at the 64K mark - 1K before the start of the next disk track.So , in a nutshell - for every host I/O , you will generate at least one additional I/O for the disk due to the badly aligned start of the partition table .
Worst case , you 'll generate two additional disk I/Os ( for multi-track I/O ) since the last track is also misaligned .
For large block sequential I/O , you can pretty much ignore this effect , for small block random I/O - it 's gon na kill performance .
Align your partitions to 64K and the problem goes away - of course , this has to be done before you put any data on the partition and is not done by default with any OS I 'm aware of today , which explains why nobody ever does it.Note , since I mentioned storage arrays I 'll clarify that this is not the same as the RAID write penalty.Note also , this is usually only a problem for x86 style partitioned disks .</tokenext>
<sentencetext>It's not so much a problem with track-to-track seeking, as it is a problem with the misaligned I/O itself.If your on-disk structures do not align with the physical structures, then you experience a penalty for every host I/O to that disk.
It's exactly the same issue with enterprise class storage arrays.
the volume on the array is offset from the very beginning of the physical disk so is correctly aligned.
However, every x86 style partition table laid down on those volumes consume the first 63K of the disk, meaning the data starts at the 64K mark - 1K before the start of the next disk track.So, in a nutshell - for every host I/O, you will generate at least one additional I/O for the disk due to the badly aligned start of the partition table.
Worst case, you'll generate two additional disk I/Os (for multi-track I/O) since the last track is also misaligned.
For large block sequential I/O, you can pretty much ignore this effect, for small block random I/O - it's gonna kill performance.
Align your partitions to 64K and the problem goes away - of course, this has to be done before you put any data on the partition and is not done by default with any OS I'm aware of today, which explains why nobody ever does it.Note, since I mentioned storage arrays I'll clarify that this is not the same as the RAID write penalty.Note also, this is usually only a problem for x86 style partitioned disks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135984</parent>
</comment>
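The read-modify-write penalty this comment describes follows directly from the geometry: any host write whose ends do not coincide with physical sector boundaries forces the drive to read back and rewrite the partial sectors. A small counting sketch (byte offsets are illustrative, not tied to any particular drive):

```python
# Sketch of the misalignment penalty: count how many 4096-byte physical
# sectors a write only partially covers, since each such sector costs
# the drive an extra read-modify-write cycle.

PHYS = 4096  # physical sector size in bytes

def rmw_sectors(offset: int, length: int) -> int:
    """Physical sectors needing read-modify-write for a write of
    `length` bytes starting at byte `offset`."""
    head = offset % PHYS != 0            # partial sector at the start?
    tail = (offset + length) % PHYS != 0 # partial sector at the end?
    first = offset // PHYS
    last = (offset + length - 1) // PHYS
    if first == last:
        # Write falls inside a single sector: at most one RMW.
        return 1 if (head or tail) else 0
    return int(head) + int(tail)

# An aligned 4K write needs no RMW:
print(rmw_sectors(0, 4096))    # 0
# The same write shifted by one 512B logical sector straddles two
# physical sectors, both partial:
print(rmw_sectors(512, 4096))  # 2
```

This is why shifting 4096-byte clusters by one 512-byte logical sector, as in the summary's benchmark, turns every cluster write into extra device I/O.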
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136128</id>
	<title>Re:Open Source to the rescue</title>
	<author>Blakey Rat</author>
	<datestamp>1266179100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't forget that this "beauty of open source" doesn't even get around to thinking about fixing the issue until a competing OS has had support for it for 3 entire years.</p></htmltext>
<tokenext>Do n't forget that this " beauty of open source " does n't even get around to thinking about fixing the issue until a competing OS has had support for it for 3 entire years .</tokentext>
<sentencetext>Don't forget that this "beauty of open source" doesn't even get around to thinking about fixing the issue until a competing OS has had support for it for 3 entire years.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135630</id>
	<title>if vista/win7 really do support this correctly...</title>
	<author>buddyglass</author>
	<datestamp>1266174420000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>...then I find this to be somewhat of an indictment against the open source model when applied to OS development.  The default seems to be to fight fires as they arise.  If there are no drives with 4k sectors then we don't need to support drives with 4k sectors.  Once drives with 4k sectors arrive it's up to the individual maintainers of each affected tool (fdisk et al.) to update their code.  Contrast this with a dictatorial model used by Microsoft, where they said, basically, "We know these are going to arrive sometime in the next couple years and we want to be ready when they do.  So all you subsystem maintainers whose code is affected by it better build in support now, ahead of time."</p><p>Of course, if the Win7 support is crappy or only partially works, then it's no indictment at all.  Not having used one of these new drives in conjunction with a recent version of Windows, I can't really say one way or the other.</p></htmltext>
<tokenext>...then I find this to be somewhat of an indictment against the open source model when applied to OS development .
The default seems to be to fix fires as they arise .
If there are no drives with 4k sectors then we do n't need to support drives with 4k sectors .
Once drives with 4k sectors arrive its up the individual maintainers of each affected tool ( fdisk , et .
al. ) to update their code .
Contrast this with a dictatorial model used by Microsoft , where they said , basically , " We know these are going to arrive sometime in the next couple years and we want to be ready when they do .
So all you subsystem maintainers whose code is affected by it better build in support now , ahead of time .
" Of course , if the Win7 support is crappy or only partially works , then its no indictment at all .
Not having used one of these new drives in conjunction with a recent version of Windows , I ca n't really say one way or the other .</tokenext>
<sentencetext>...then I find this to be somewhat of an indictment against the open source model when applied to OS development.
The default seems to be to fix fires as they arise.
If there are no drives with 4k sectors then we don't need to support drives with 4k sectors.
Once drives with 4k sectors arrive its up the individual maintainers of each affected tool (fdisk, et.
al.) to update their code.
Contrast this with a dictatorial model used by Microsoft, where they said, basically, "We know these are going to arrive sometime in the next couple years and we want to be ready when they do.
So all you subsystem maintainers whose code is affected by it better build in support now, ahead of time.
"Of course, if the Win7 support is crappy or only partially works, then its no indictment at all.
Not having used one of these new drives in conjunction with a recent version of Windows, I can't really say one way or the other.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135984</id>
	<title>Re:Set 32 sectors per track</title>
	<author>Z00L00K</author>
	<datestamp>1266177780000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Essentially we are back to the old problems of the ST412 interface where we had to figure out the best interleave for the drives as well when we were formatting them. Most drives then did have a fairly conservative interleave, but a reformat of them could improve the throughput considerably. A reformat could be done so that the whole track could be read in 2 rotations instead of 3, and what that does to performance is fairly easy to understand. C800:5 was a commonly used BIOS address where the low level format routine did reside.</p><p>But from what I understand this problem is an offset problem when the head steps from track to track, and that's also an issue to be considered. And today it's not common knowledge/practice to low level format hard drives.</p><p>And why stick at 4k sectors? Depending on the system you may want to use a different sector size. If you run Oracle on some systems the block size is 8k, and in that case you may want to have 8k disk blocks too since it would be good for performance.</p><p>Anyway - sooner or later we will have flash drives instead, and then this isn't a problem.</p></htmltext>
<tokenext>Essentially we are back to the old problems of the ST412 interface where we had to figure out the best interleave for the drives as well when we were formatting them .
Most drives then did have a fairly conservative interleave , but a reformat of them could improve the throughput considerably .
A reformat could be done so that the whole track could be read in 2 rotations instead of 3 , and what that does to performance is fairly easy to understand .
C800 : 5 was a commonly used BIOS address where the low level format routine did reside.But from what I understand this problem is an offset problem when the head steps from track to track , and that 's also an issue to be considered .
And today it 's not common knowledge/practice to low level format hard drives.And why stick at 4k sectors ?
Depending on the system you may want to use a different sector size .
If you run Oracle on some systems the block size is 8k , and in that case you may want to have 8k disk blocks too since it would be good for performance.Anyway - sooner or later we will have flash drives instead , and then this is n't a problem .</tokenext>
<sentencetext>Essentially we are back to the old problems of the ST412 interface where we had to figure out the best interleave for the drives as well when we were formatting them.
Most drives then did have a fairly conservative interleave, but a reformat of them could improve the throughput considerably.
A reformat could be done so that the whole track could be read in 2 rotations instead of 3, and what that does to performance is fairly easy to understand.
C800:5 was a commonly used BIOS address where the low level format routine did reside.But from what I understand this problem is an offset problem when the head steps from track to track, and that's also an issue to be considered.
And today it's not common knowledge/practice to low level format hard drives.And why stick at 4k sectors?
Depending on the system you may want to use a different sector size.
If you run Oracle on some systems the block size is 8k, and in that case you may want to have 8k disk blocks too since it would be good for performance.Anyway - sooner or later we will have flash drives instead, and then this isn't a problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135416</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31139746</id>
	<title>linux not quite ready for pseudo 512bsector drives</title>
	<author>Anonymous</author>
	<datestamp>1266160440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Clearly WD's fault.</p><p>Anyone aware of 4k-sector \_desktop\_ drives that have a real 4k interface?<br>(Some new Samsung 2.5" and 1.8" HDDs are made like that, according to Heise's c't (a German magazine).)</p></htmltext>
<tokenext>clearly WD 's fault.anyone aware of 4k-sector \ _desktop \ _ drives , that have a real 4k interface ?
( some new samsung 2,5 " and 1,8 " hdds are made like that according to heise 's ct ( german magazine ) )</tokenext>
<sentencetext>clearly WD's fault.anyone aware of 4k-sector \_desktop\_ drives, that have a real 4k interface?
(some new samsung 2,5" and 1,8" hdds are made like that according to heise's ct (german magazine))</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136060</id>
	<title>Re:Open Source to the rescue</title>
	<author>Anonymous</author>
	<datestamp>1266178440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>ah yes, the "beauty" of open source:  "someone else will do it."</htmltext>
<tokenext>ah yes , the " beauty " of open source : " someone else will do it .
"</tokenext>
<sentencetext>ah yes, the "beauty" of open source:  "someone else will do it.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135492</id>
	<title>So don't do that...</title>
	<author>russotto</author>
	<datestamp>1266173220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>Author claims a massive performance drop if things aren't aligned right.  Ubuntu already does it with parted and fdisk can do it manually.  So, no big problem; fdisk ought to be fixed to have sane defaults with a 4096 byte block size, sure.  That can't be all that difficult.</p><p>The author also seems to think that only a 30\% increase in times for misaligned writes should be expected.  I'm not sure why.  In a naive implementation I'd expect a 100\% increase in time (each block now needs to be written twice).  Linux, obviously, doesn't use a naive implementation.  It's expected that if the hardware violates the assumptions behind the techniques Linux uses to achieve high performance, that those techniques end up making things very slow instead.</p></htmltext>
<tokenext>Author claims a massive performance drop if things are n't aligned right .
Ubuntu already does it with parted and fdisk can do it manually .
So , no big problem ; fdisk ought to be fixed to have sane defaults with a 4096 byte block size , sure .
That ca n't be all that difficult.The author also seems to think that only a 30 \ % increase in times for misaligned writes should be expected .
I 'm not sure why .
In a naive implementation I 'd expect a 100 \ % increase in time ( each block now needs to be written twice ) .
Linux , obviously , does n't use a naive implementation .
It 's expected that if the hardware violates the assumptions behind the techniques Linux uses to achieve high performance , that those techniques end up making things very slow instead .</tokenext>
<sentencetext>Author claims a massive performance drop if things aren't aligned right.
Ubuntu already does it with parted and fdisk can do it manually.
So, no big problem; fdisk ought to be fixed to have sane defaults with a 4096 byte block size, sure.
That can't be all that difficult.The author also seems to think that only a 30\% increase in times for misaligned writes should be expected.
I'm not sure why.
In a naive implementation I'd expect a 100\% increase in time (each block now needs to be written twice).
Linux, obviously, doesn't use a naive implementation.
It's expected that if the hardware violates the assumptions behind the techniques Linux uses to achieve high performance, that those techniques end up making things very slow instead.</sentencetext>
</comment>
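For reference, the arithmetic relating the summary's figures to this comment's expectation is simple: a 3.3x slowdown and a "230% overhead" are the same number expressed two ways, while the naive every-block-written-twice model the commenter describes would predict only a 2x slowdown (100% overhead). The gap is the part attributed to Linux's write-combining assumptions being violated:

```python
# Overhead percentage implied by a throughput slowdown factor:
# overhead = (slowdown - 1) * 100.

def overhead_pct(slowdown: float) -> float:
    """Percentage overhead corresponding to an Nx slowdown."""
    return (slowdown - 1.0) * 100.0

# The summary's measured factor-of-3.3 slowdown:
print(round(overhead_pct(3.3)))  # 230
# The naive read-modify-write expectation (each block written twice):
print(round(overhead_pct(2.0)))  # 100
```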
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31144738</id>
	<title>Re:I was worried about this... and am still unclea</title>
	<author>Anonymous</author>
	<datestamp>1266252720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There is a jumper on the drive that you have to set to disable the 512 byte sector emulation.</p></htmltext>
<tokenext>There is a jumper on the drive that you have to set to disable the 512 byte sector emulation .</tokentext>
<sentencetext>There is a jumper on the drive that you have to set to disable the 512 byte sector emulation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136018</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135798</id>
	<title>Lies!</title>
	<author>Anonymous</author>
	<datestamp>1266176040000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>These are lie because teh linux is the best!</p></htmltext>
<tokenext>These are lie because teh linux is the best !</tokentext>
<sentencetext>These are lie because teh linux is the best!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137020</id>
	<title>parted solution for these drives</title>
	<author>Anonymous</author>
	<datestamp>1266142080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Posted by Aleksander Adamowski to this thread on the util-linux mailing list: http://thread.gmane.org/gmane.linux.utilities.util-linux-ng/2926<nobr> <wbr></nobr>... "So for any other owners of WD EARS drives, if these don't report<br>physical 4096-byte sectors to you, don't believe them and align your<br>partitions at the aforementioned sectors (a generally good idea is to<br>run the postmark benchmark to compare performance on aligned and<br>non-aligned partitions).</p><p>Just in case anyone doesn't know how to align these partitions<br>(WARNING: the instructions below will likely destroy any data that's<br>on the given drive, only do this with drives you're intending to<br>erase):</p><p># parted<nobr> <wbr></nobr>/dev/YOUR\_DEVICE\_NAME</p><p>(parted) mklabel gpt<br># Here ^ I've chosen the GPT partition table format, but others may be<br>OK too - untested by me.</p><p>(parted) unit s<br># Here ^ we're choosing sectors as units of measurement</p><p>(parted) mkpart primary ext2 40 -1<br># Here ^ we're creating a partition that starts at sector 40, which is<br>divisible by 8.<br># You can also try 48, 56, 64 and others - these should offer the same<br>high performance,<br># but some space will go to waste - it's only some tiny kilobytes, though.</p><p># Parted will likely complain about the end location of the ending sector:<br>Warning: You requested a partition from 40s to 2930277167s.<br>The closest location we can manage is 40s to 2930277134s.<br>Is this still acceptable to you?<br>Yes/No?<br># Of course, we answer Yes.</p><p>(parted) quit</p><p># After that, create a filesystem as usual, e.g:<br># mkfs.ext4 -T largefile4<nobr> <wbr></nobr>/dev/YOUR\_DEVICE\_NAME</p><p>This should get the optimum performance from your 4 kB physical sector<br>drives even when they report 512 B sectors only to the OS."</p><p>I had this issue bite me in the ass when building an OpenFiler-based home NAS this week. 
The real kicker is that the drive label specifically states a need to jumper two pins on the drive and align your partitions when using WinXP, but "all other" operating systems are good-to-go with no intervention required. Maybe that would be the case if the drive weren't misreporting its sector size to the kernel, but I should've known better regardless.</p></htmltext>
<tokentext>Posted by Aleksander Adamowski to this thread on the util-linux mailing list : http : //thread.gmane.org/gmane.linux.utilities.util-linux-ng/2926 ... " So for any other owners of WD EARS drives , if these do n't report physical 4096-byte sectors to you , do n't believe them and align your partitions at the aforementioned sectors ( a generally good idea is to run the postmark benchmark to compare performance on aligned and non-aligned partitions ) . Just in case anyone does n't know how to align these partitions ( WARNING : the instructions below will likely destroy any data that 's on the given drive , only do this with drives you 're intending to erase ) : # parted /dev/YOUR_DEVICE_NAME ( parted ) mklabel gpt # Here ^ I 've chosen the GPT partition table format , but others may be OK too - untested by me .
( parted ) unit s # Here ^ we 're choosing sectors as units of measurement ( parted ) mkpart primary ext2 40 -1 # Here ^ we 're creating a partition that starts at sector 40 , which is divisible by 8. # You can also try 48 , 56 , 64 and others - these should offer the same high performance , # but some space will go to waste - it 's only some tiny kilobytes , though. # Parted will likely complain about the end location of the ending sector : Warning : You requested a partition from 40s to 2930277167s . The closest location we can manage is 40s to 2930277134s . Is this still acceptable to you ? Yes/No ? # Of course , we answer Yes .
( parted ) quit # After that , create a filesystem as usual , e.g : # mkfs.ext4 -T largefile4 /dev/YOUR_DEVICE_NAME This should get the optimum performance from your 4 kB physical sector drives even when they report 512 B sectors only to the OS .
" I had this issue bite me in the ass when building an OpenFiler-based home NAS this week .
The real kicker is that the drive label specifically states a need to jumper two pins on the drive and align your partitions when using WinXP , but " all other " operating systems are good-to-go with no intervention required .
Maybe that would be the case if the drive were n't misreporting its sector size to the kernel , but I should 've known better regardless .</tokentext>
<sentencetext>Posted by Aleksander Adamowski to this thread on the util-linux mailing list: http://thread.gmane.org/gmane.linux.utilities.util-linux-ng/2926 ... "So for any other owners of WD EARS drives, if these don't report physical 4096-byte sectors to you, don't believe them and align your partitions at the aforementioned sectors (a generally good idea is to run the postmark benchmark to compare performance on aligned and non-aligned partitions). Just in case anyone doesn't know how to align these partitions (WARNING: the instructions below will likely destroy any data that's on the given drive, only do this with drives you're intending to erase): # parted /dev/YOUR_DEVICE_NAME (parted) mklabel gpt # Here ^ I've chosen the GPT partition table format, but others may be OK too - untested by me.
(parted) unit s # Here ^ we're choosing sectors as units of measurement (parted) mkpart primary ext2 40 -1 # Here ^ we're creating a partition that starts at sector 40, which is divisible by 8. # You can also try 48, 56, 64 and others - these should offer the same high performance, # but some space will go to waste - it's only some tiny kilobytes, though. # Parted will likely complain about the end location of the ending sector: Warning: You requested a partition from 40s to 2930277167s. The closest location we can manage is 40s to 2930277134s. Is this still acceptable to you? Yes/No? # Of course, we answer Yes.
(parted) quit # After that, create a filesystem as usual, e.g: # mkfs.ext4 -T largefile4 /dev/YOUR_DEVICE_NAME This should get the optimum performance from your 4 kB physical sector drives even when they report 512 B sectors only to the OS.
"I had this issue bite me in the ass when building an OpenFiler-based home NAS this week.
The real kicker is that the drive label specifically states a need to jumper two pins on the drive and align your partitions when using WinXP, but "all other" operating systems are good-to-go with no intervention required.
Maybe that would be the case if the drive weren't misreporting its sector size to the kernel, but I should've known better regardless.</sentencetext>
</comment>
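The parted recipe quoted in the comment above reduces to one rule: the partition's start, counted in 512-byte logical sectors, must be a multiple of 8 so it lands on a 4096-byte physical boundary. A minimal sketch of that rule (the parted/mkfs commands are repeated from the post purely for context and are destructive; `sdX` is a placeholder):

```shell
# Destructive steps from the quoted post, shown as comments only (sdX is a placeholder):
#   parted /dev/sdX mklabel gpt
#   parted /dev/sdX unit s mkpart primary ext2 40 -1
#   mkfs.ext4 -T largefile4 /dev/sdX1
# The only real requirement: the start sector must be divisible by 8 (8 x 512 B = 4096 B).
for start in 40 48 56 63 64; do
  if [ $(( start % 8 )) -eq 0 ]; then
    echo "start sector $start: 4K-aligned"
  else
    echo "start sector $start: misaligned"
  fi
done
```

Note that 63, the legacy DOS/fdisk first-partition start, is the one value in the list that fails the check.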
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190</id>
	<title>Poorly researched article.</title>
	<author>Vellmont</author>
	<datestamp>1266179700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>The article represents one data point, for one particular way to install a drive, on one (un-named) version of Gentoo, on one particular model of a WD drive that had a bugzilla entry entered by the author all of 2 days ago.  So this is supposed to be an indictment of all of Linux?</p><p>The author even mentions that Ubuntu has an option on parted that accomplishes the task properly.  I'd be much more interested in an article that talks about how the default installer handles this task rather than concentrating on one particular expert tool that does so.  It's still good to know that fdisk on his un-named Gentoo distribution does the wrong thing..  but this hardly means we should fire up the klaxon and declare "Linux not fully prepared for 4096 sector hard drives!".  It's certainly interesting, but I'll withhold judgment until we actually know more about the implications of this across the entire spectrum of Linux distributions and the various 4096 sector HDs.</p></htmltext>
<tokentext>The article represents one data point , for one particular way to install a drive , on one ( un-named ) version of Gentoo , on one particular model of a WD drive that had a bugzilla entry entered by the author all of 2 days ago .
So this is supposed to be an indictment of all of Linux ? The author even mentions that Ubuntu has an option on parted that accomplishes the task properly .
I 'd be much more interested in an article that talks about how the default installer handles this task rather than concentrating on one particular expert tool that does so .
It 's still good to know that fdisk on his un-named Gentoo distribution does the wrong thing.. but this hardly means we should fire up the klaxon and declare " Linux not fully prepared for 4096 sector hard drives ! " .
It 's certainly interesting , but I 'll withhold judgment until we actually know more about the implications of this across the entire spectrum of Linux distributions and the various 4096 sector HDs .</tokentext>
<sentencetext>The article represents one data point, for one particular way to install a drive, on one (un-named) version of Gentoo, on one particular model of a WD drive that had a bugzilla entry entered by the author all of 2 days ago.
So this is supposed to be an indictment of all of Linux?The author even mentions that Ubuntu has an option on parted that accomplishes the task properly.
I'd be much more interested in an article that talks about how the default installer handles this task rather than concentrating on one particular expert tool that does so.
It's still good to know that fdisk on his un-named Gentoo distribution does the wrong thing..  but this hardly means we should fire up the klaxon and declare "Linux not fully prepared for 4096 sector hard drives!".
It's certainly interesting, but I'll withhold judgment until we actually know more about the implications of this across the entire spectrum of Linux distributions and the various 4096 sector HDs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135586</id>
	<title>Checking for alignment issues?</title>
	<author>Anonymous</author>
	<datestamp>1266174120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>How can one quickly check for alignment issues?</p></htmltext>
<tokentext>How can one quickly check for alignment issues ?</tokentext>
<sentencetext>How can one quickly check for alignment issues?</sentencetext>
</comment>
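One quick way to answer the question above (a sketch, not the only approach): read each partition's start sector and test divisibility by 8. On Linux the start sectors are exposed in sysfs, e.g. `/sys/block/sda/sda1/start` (the `sda` path here is an assumed example device):

```shell
# check_start: report whether a partition start sector (in 512-byte units)
# falls on a 4096-byte physical-sector boundary.
check_start() {
  if [ $(( $1 % 8 )) -eq 0 ]; then
    echo "sector $1: aligned"
  else
    echo "sector $1: misaligned"
  fi
}

# On a live system, feed it the values from sysfs, e.g.:
#   check_start "$(cat /sys/block/sda/sda1/start)"
check_start 63    # legacy fdisk default start: misaligned
check_start 2048  # 1 MiB boundary: aligned
```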
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31139808</id>
	<title>Re:Open Source to the rescue</title>
	<author>Anonymous</author>
	<datestamp>1266160860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Don't talk so fast.  There isn't going to be a 17-year solution.  An open source solution can be (and usually is) available as I type this.  I've hacked kernels (and am also on the kernel mailing list).  And posted bugs.  And, with a senior systems engineer at Intel who is also a Linux kernel hacker, I came up with a solution in less than 6 hours for a nasty chipset problem, with an APIC not assigning interrupts properly.  It only affected that chipset.  Linus Torvalds found out about the problem by learning about the solution already in place.  I think with the kernel, a lot of it is like that.  Bits and bytes are not 'deep in the kernel'.  Bits and bytes are in the driver, which in Linux is dynamically loaded (and also dynamically unloaded).  If it's one program (or two or three), and the change only involves three or four places in the code, the fix is already in.</p></htmltext>
<tokentext>Do n't talk so fast .
There is n't going to be a 17 year solution .
An open source solution can be ( and usually is ) available as I type this .
I 've hacked kernels ( and also on the kernel mailing list ) .
And posted bugs .
And with a senior systems engineer at Intel , who is also a Linux kernel hacker , came up with a solution in less than 6 hours for a nasty chipset problem , with an APIC not assigning interrupts properly .
It only affected that chipset .
Linus Torvalds found out about the problem by learning about the solution already in place .
I think with the kernel , a lot of it is like that .
Bits and bytes are not 'deep in the kernel' .
Bits and bytes are in the driver , which is in Linux dynamically loaded ( and also dynamically unloaded ) .
If it 's one program ( or two or three ) , and the change only involved three or four places in the code , the fix is already in .</tokentext>
<sentencetext>Don't talk so fast.
There isn't going to be a 17 year solution.
An open source solution can be (and usually is) available as I type this.
I've hacked kernels (and also on the kernel mailing list).
And posted bugs.
And with a senior systems engineer at Intel, who is also a Linux kernel hacker, came up with a solution in less than 6 hours for a nasty chipset problem, with an APIC not assigning interrupts properly.
It only affected that chipset.
Linus Torvalds found out about the problem by learning about the solution already in place.
I think with the kernel, a lot of it is like that.
Bits and bytes are not 'deep in the kernel'.
Bits and bytes are in the driver, which is in Linux dynamically loaded (and also dynamically unloaded).
If it's one program (or two or three), and the change only involved three or four places in the code, the fix is already in.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138312</id>
	<title>Re:DragonFly's solution</title>
	<author>Just Some Guy</author>
	<datestamp>1266150660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>We're adjusting our disklabel64 utility and kernel support to set the partition base offset such that it is physically aligned instead of slice-aligned, and we are using 32K alignment.</p></div><p>Darn bloated OS wasting 2% off the front of my floppy drive.</p>
	</htmltext>
<tokentext>We 're adjusting our disklabel64 utility and kernel support to set the partition base offset such that it is physically aligned instead of slice-aligned , and we are using 32K alignment . Darn bloated OS wasting 2 % off the front of my floppy drive .</tokentext>
<sentencetext>We're adjusting our disklabel64 utility and kernel support to set the partition base offset such that it is physically aligned instead of slice-aligned, and we are using 32K alignment. Darn bloated OS wasting 2% off the front of my floppy drive.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136120</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135432</id>
	<title>Interesting</title>
	<author>Anonymous</author>
	<datestamp>1266172740000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext>I actually have 2 of these drives in my desktop right now.   There is a slight decrease in performance compared to Windows 7 but nothing that is unacceptable or even a need for concern.  If you need to worry about the performance lost with the 4k sectors then just go solid state.</htmltext>
<tokentext>I actually have 2 of these drives in my desktop right now .
There is a slight decrease in performance compared to Windows 7 but nothing that is unacceptable or even a need for concern .
If you need to worry about the performance lost with the 4k sectors then just go solid state .</tokentext>
<sentencetext>I actually have 2 of these drives in my desktop right now.
There is a slight decrease in performance compared to Windows 7 but nothing that is unacceptable or even a need for concern.
If you need to worry about the performance lost with the 4k sectors then just go solid state.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135996</id>
	<title>Re:if vista/win7 really do support this correctly.</title>
	<author>ArghBlarg</author>
	<datestamp>1266177900000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I see it rather as an indictment against closed-source OSes, if XP turns out to be incompatible with these new drives and MS never releases a patch to add support. People will need to upgrade for no good reason to one of MS's new operating systems. People should not have to deal with a complete upheaval of their tested and true systems due to a small hardware change such as this.</p><p>I can imagine MS is quietly chuckling with glee to itself, if this issue becomes a deal-breaker for machines still running XP.</p></htmltext>
<tokentext>I see it rather as an indictment against closed-source OSes , if XP turns out to be incompatible with these new drives and MS never releases a patch to add support .
People will need to upgrade for no good reason to one of MS 's new operating systems .
People should not have to deal with a complete upheaval of their tested and true systems due to a small hardware change such as this . I can imagine MS is quietly chuckling with glee to itself , if this issue becomes a deal-breaker for machines still running XP .</tokentext>
<sentencetext>I see it rather as an indictment against closed-source OSes, if XP turns out to be incompatible with these new drives and MS never releases a patch to add support.
People will need to upgrade for no good reason to one of MS's new operating systems.
People should not have to deal with a complete upheaval of their tested and true systems due to a small hardware change such as this. I can imagine MS is quietly chuckling with glee to itself, if this issue becomes a deal-breaker for machines still running XP.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135630</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136694</id>
	<title>Re:This only effects the newer 1TB+ WD Green drive</title>
	<author>King Kwame Kilpatric</author>
	<datestamp>1266139800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The affected drives are listed <a href="http://wdc.custhelp.com/cgi-bin/wdc.cfg/php/enduser/std_adp.php?p_faqid=5324" title="custhelp.com" rel="nofollow"> on Western Digital's site</a> [custhelp.com].</htmltext>
<tokentext>The affected drives are listed on Western Digital 's site [ custhelp.com ] .</tokentext>
<sentencetext>The affected drives are listed  on Western Digital's site [custhelp.com].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135864</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137762</id>
	<title>Re:Poorly researched article.</title>
	<author>Anonymous</author>
	<datestamp>1266146880000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Hi pot, enjoying calling the kettle black? Nice to see you criticizing people for "poor" research when you've hardly done any yourself.</p><p>(a) Gentoo has no versions. Nor cutesy names like Ubuntu. It is a source-based distro and everything is compiled on installation so doesn't need this careful versioning nonsense.</p><p>(b) Speaking of Gentoo, this topic here has already had a nice long discussion on the Gentoo mailing list. In fact I dare to venture this is where the author of the article got his ideas. If you want data points: <a href="http://comments.gmane.org/gmane.linux.gentoo.user/225974" title="gmane.org" rel="nofollow">this thread has quite a few more</a> [gmane.org].</p><p>(c) It is not just one version of fdisk on some backwater 15-year-old distribution. On most modern distributions (check the fdisk man page yourself) fdisk defaults to aligning on cylinder boundaries. And it will complain slightly if you make partitions not beginning or ending on cylinder boundaries. The fault, however, really is twofold: one is historical, which we cannot do anything about, and the other is the fact that these new drives are effectively lying to the operating system about their disk geometry for the sake of "interoperability" with Windows XP.</p></htmltext>
<tokentext>Hi pot , enjoying calling the kettle black ?
Nice to see you criticizing people for " poor " research when you 've hardly done any yourself .
( a ) Gentoo has no versions .
Nor cutesy names like Ubuntu .
It is a source-based distro and everything is compiled on installation so does n't need this careful versioning nonsense .
( b ) Speaking of Gentoo , this topic here has already had a nice long discussion on the Gentoo mailing list .
In fact I dare to venture this is where the author of the article got his ideas .
If you want data points : this thread has quite a few more [ gmane.org ] .
( c ) It is not just one version of fdisk on some backwater 15 year old distribution .
On most modern distributions ( check the fdisk man page yourself ) fdisk defaults to aligning on cylinder boundaries .
And it will complain slightly if you make partitions not beginning or ending on cylinder boundaries .
The fault , however , really is two fold : one is historical , which we can not do anything about , and the other is the fact that these new drives are effectively lying to the operating system about their disk geometry for the sake of " interoperability " with Windows XP .</tokentext>
<sentencetext>Hi pot, enjoying calling the kettle black?
Nice to see you criticizing people for "poor" research when you've hardly done any yourself.
(a) Gentoo has no versions.
Nor cutesy names like Ubuntu.
It is a source-based distro and everything is compiled on installation so doesn't need this careful versioning nonsense.
(b) Speaking of Gentoo, this topic here has already had a nice long discussion on the Gentoo mailing list.
In fact I dare to venture this is where the author of the article got his ideas.
If you want data points: this thread has quite a few more [gmane.org].
(c) It is not just one version of fdisk on some backwater 15 year old distribution.
On most modern distributions (check the fdisk man page yourself) fdisk defaults to aligning on cylinder boundaries.
And it will complain slightly if you make partitions not beginning or ending on cylinder boundaries.
The fault, however, really is two fold: one is historical, which we cannot do anything about, and the other is the fact that these new drives are effectively lying to the operating system about their disk geometry for the sake of "interoperability" with Windows XP.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190</parent>
</comment>
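The arithmetic behind point (c) above: a legacy "cylinder" boundary step is 63 sectors of 512 bytes, i.e. 32256 bytes, which is not a multiple of 4096, so most cylinder-aligned partition starts land partway through a physical sector. A quick sketch:

```shell
# 63 sectors/track x 512 bytes = 32256 bytes per legacy cylinder step;
# 32256 is not a multiple of 4096, so cylinder-aligned starts are usually off.
for cyl in 1 2 3 8; do
  start=$(( cyl * 63 ))
  echo "cylinder $cyl -> sector $start, offset into 4K sector: $(( start * 512 % 4096 )) bytes"
done
```

Only every eighth cylinder boundary happens to fall on a 4096-byte boundary, which is why the misalignment penalty shows up with the fdisk defaults rather than only in corner cases.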
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31156412</id>
	<title>Should I be concerned?</title>
	<author>busydoingnothing</author>
	<datestamp>1266342780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I just bought two of the WD Green 500GB drives to be used in a hardware RAID (Adaptec 2610SA, aka Dell CERC SATA1.5/6ch) on my Ubuntu-based server. I was going to format it in ext3. Will this problem affect me?</htmltext>
<tokentext>I just bought two of the WD Green 500GB drives to be used in a hardware RAID ( Adaptec 2610SA , aka Dell CERC SATA1.5/6ch ) on my Ubuntu-based server .
I was going to format it in ext3 .
Will this problem affect me ?</tokentext>
<sentencetext>I just bought two of the WD Green 500GB drives to be used in a hardware RAID (Adaptec 2610SA, aka Dell CERC SATA1.5/6ch) on my Ubuntu-based server.
I was going to format it in ext3.
Will this problem affect me?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136354</id>
	<title>Re:if vista/win7 really do support this correctly.</title>
	<author>earthforce_1</author>
	<datestamp>1266180900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I wouldn't be too fond of the MS development model from what I hear from those who were on the inside:<br><a href="http://www.nytimes.com/2010/02/04/opinion/04brass.html?pagewanted=all" title="nytimes.com">http://www.nytimes.com/2010/02/04/opinion/04brass.html?pagewanted=all</a> [nytimes.com]</p><p>Inside Microsoft, political infighting trumps common sense.  If you really want to hold up a closed source development model as an example of "what works" take a look at Apple.  They crank out far better products with a fraction of the resources.</p></htmltext>
<tokentext>I would n't be too fond of the MS development model from what I hear from those who were on the inside : http : //www.nytimes.com/2010/02/04/opinion/04brass.html ? pagewanted = all [ nytimes.com ] Inside Microsoft , political infighting trumps common sense .
If you really want to hold up a closed source development model as an example of " what works " take a look at Apple .
They crank out far better products with a fraction of the resources .</tokentext>
<sentencetext>I wouldn't be too fond of the MS development model from what I hear from those who were on the inside: http://www.nytimes.com/2010/02/04/opinion/04brass.html?pagewanted=all [nytimes.com] Inside Microsoft, political infighting trumps common sense.
If you really want to hold up a closed source development model as an example of "what works" take a look at Apple.
They crank out far better products with a fraction of the resources.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135630</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136018</id>
	<title>I was worried about this... and am still unclear</title>
	<author>Anonymous</author>
	<datestamp>1266178140000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>I just got one of the 1TB 64mb WD drives that is known to be 4kb sector based.</p><p>Here is how it shows up in dmesg:<br>[    3.420488] sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)</p><p>and here's what hdparm -I says:<br>ATA device, with non-removable media<br>
        Model Number:       WDC WD10EARS-00Y5B1<br>
        Serial Number:      WD-WCAV55227529<br>
        Firmware Revision:  80.00A80<br>
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6<br>Standards:<br>
        Supported: 8 7 6 5<br>
        Likely used: 8<br>Configuration:<br>
        Logical         max     current<br>
        cylinders       16383   16383<br>
        heads           16      16<br>
        sectors/track   63      63<br>
        --<br>
        CHS current addressable sectors:   16514064<br>
        LBA    user addressable sectors:  268435455<br>
        LBA48  user addressable sectors: 1953525168<br>
        Logical/Physical Sector size:           512 bytes<br>
        device size with M = 1024*1024:      953869 MBytes<br>
        device size with M = 1000*1000:     1000204 MBytes (1000 GB)<br>
        cache/buffer size  = unknown<br>Capabilities:<br>
        LBA, IORDY(can be disabled)<br>
        Queue depth: 32<br>
        Standby timer values: spec'd by Standard, with device specific minimum<br>
        R/W multiple sector transfer: Max = 16  Current = 1<br>
        Recommended acoustic management value: 128, current value: 254<br>
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6<br>
             Cycle time: min=120ns recommended=120ns<br>
        PIO: pio0 pio1 pio2 pio3 pio4<br>
             Cycle time: no flow control=120ns  IORDY flow control=120ns<br>Commands/features:<br>
        Enabled Supported:<br>
           *    SMART feature set<br>
                Security Mode feature set<br>
           *    Power Management feature set<br>
           *    Write cache<br>
           *    Look-ahead<br>
           *    Host Protected Area feature set<br>
           *    WRITE_BUFFER command<br>
           *    READ_B</p></htmltext>
<tokentext>I just got one of the 1TB 64mb WD drives that is known to be 4kb sector based . Here is how it shows up in dmesg : [ 3.420488 ] sd 1 : 0 : 0 : 0 : [ sdb ] 1953525168 512-byte logical blocks : ( 1.00 TB/931 GiB ) and here 's what hdparm -I says : ATA device , with non-removable media Model Number : WDC WD10EARS-00Y5B1 Serial Number : WD-WCAV55227529 Firmware Revision : 80.00A80 Transport : Serial , SATA 1.0a , SATA II Extensions , SATA Rev 2.5 , SATA Rev 2.6 Standards : Supported : 8 7 6 5 Likely used : 8 Configuration : Logical max current cylinders 16383 16383 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors : 16514064 LBA user addressable sectors : 268435455 LBA48 user addressable sectors : 1953525168 Logical/Physical Sector size : 512 bytes device size with M = 1024 * 1024 : 953869 MBytes device size with M = 1000 * 1000 : 1000204 MBytes ( 1000 GB ) cache/buffer size = unknown Capabilities : LBA , IORDY ( can be disabled ) Queue depth : 32 Standby timer values : spec 'd by Standard , with device specific minimum R/W multiple sector transfer : Max = 16 Current = 1 Recommended acoustic management value : 128 , current value : 254 DMA : mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 * udma6 Cycle time : min = 120ns recommended = 120ns PIO : pio0 pio1 pio2 pio3 pio4 Cycle time : no flow control = 120ns IORDY flow control = 120ns Commands/features : Enabled Supported : * SMART feature set Security Mode feature set * Power Management feature set * Write cache * Look-ahead * Host Protected Area feature set * WRITE_BUFFER command * READ_B</tokentext>
<sentencetext>I just got one of the 1TB 64mb WD drives that is known to be 4kb sector based. Here is how it shows up in dmesg: [    3.420488] sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) and here's what hdparm -I says: ATA device, with non-removable media
        Model Number:       WDC WD10EARS-00Y5B1
        Serial Number:      WD-WCAV55227529
        Firmware Revision:  80.00A80
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
Standards:
        Supported: 8 7 6 5
        Likely used: 8
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors: 1953525168
        Logical/Physical Sector size:           512 bytes
        device size with M = 1024*1024:      953869 MBytes
        device size with M = 1000*1000:     1000204 MBytes (1000 GB)
        cache/buffer size  = unknown
Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, with device specific minimum
        R/W multiple sector transfer: Max = 16  Current = 1
        Recommended acoustic management value: 128, current value: 254
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_B</sentencetext>
</comment>
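The hdparm listing above is the crux of the commenter's uncertainty: the drive reports "Logical/Physical Sector size: 512 bytes" even though the WD10EARS uses 4096-byte physical sectors. A sketch of pulling that field out of the output (here fed from a string copied from the listing; on a live system you would pipe `hdparm -I /dev/sdb` instead):

```shell
# Extract the reported sector size from an hdparm -I style line.
# The sample line below is copied from the listing above.
hdparm_line='        Logical/Physical Sector size:           512 bytes'
reported=$(printf '%s\n' "$hdparm_line" | awk -F': *' '/Sector size/ {print $2}')
echo "drive reports: $reported"
# A drive like this will report "512 bytes" here even when its physical
# sectors are 4096 bytes -- so the field cannot be trusted on its own.
```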
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135562</id>
	<title>Re:Open Source to the rescue</title>
	<author>bogaboga</author>
	<datestamp>1266173940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I guess you only read but did not understand! Key words in my piece are: <b>"Fairly soon."</b></p></htmltext>
<tokentext>I guess you only read but did not understand !
Key words in my piece are : " Fairly soon .
"</tokentext>
<sentencetext>I guess you only read but did not understand!
Key words in my piece are: "Fairly soon.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135626</id>
	<title>Fuck a 5h[it</title>
	<author>Anonymous</author>
	<datestamp>1266174360000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><A HREF="http://goat.cx/" title="goat.cx" rel="nofollow">It was fun. If I'm subscribers. Please and as BSD sinks are just way over Guests. Some people there are only in a head spinning am protesting of events today, *BSD but FreeBSD</a> [goat.cx]</htmltext>
<tokentext>It was fun .
If I 'm subscribers .
Please and as BSD sinks are just way over Guests .
Some people there are only in a head spinning am protesting of events today , * BSD but FreeBSD [ goat.cx ]</tokentext>
<sentencetext>It was fun.
If I'm subscribers.
Please and as BSD sinks are just way over Guests.
Some people there are only in a head spinning am protesting of events today, *BSD but FreeBSD [goat.cx]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31141776</id>
	<title>C/H/S Needs to go</title>
	<author>LostMyBeaver</author>
	<datestamp>1266224760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Cylinders, head, sector addressing really needs to go anyway. MBR has been hacked so many times over the years to support this method of formatting but in reality it's a total waste. Old operating systems that were forced to work with this method of addressing had to translate to and from the CHS format since sector allocation has ALWAYS been linear. Even FAT-12 used linear addressing methods.<br><br>CHS is also misleading and CHS optimizations are wasteful. Since all modern drives (starting with the first Connor IDE 20 meg drive in the early days) supported some form of intelligent sector remapping that would keep spare sectors available for relocating data after the magnetic medium of a heavily used sector elsewhere began to fail.<br><br>Sector remapping makes it so that CHS optimizations are entirely irrelevant since even brand new drives, straight off the production line ship with bad sectors that have been remapped elsewhere. For better drive performance, algorithms space out the spare sectors across the drive so that when accessing a spare sector, the head doesn't have to slam to the inner or outer rings of the disc. But still, CHS doesn't apply to absolute positions anymore.<br><br>This is 2010 now, I wrote file systems which functioned on 4096 byte sectors on ESDI drives back in the 80's. Made my drives much bigger doing it too. It's time that we move to larger sector sizes again. Modern ECC isn't that much better than 80's grade, however the processing power available to us is so much more that performing ECC on larger blocks of data is achievable. Also, using RAID-5, 5EE or RAID6 makes it so that we can depend less on single drive redundancy. SCSI and IDE should be extended so that controller can inform the drive of bad sectors it finds when performing RAID XORing.</htmltext>
<tokenext>Cylinders , head , sector addressing really needs to go anyway .
MBR has been hacked so many times over the years to support this method of formatting but in reality it 's a total waste .
Old operating systems that were forced to work with this method of addressing had to translate to and from the CHS format since sector allocation has ALWAYS been linear .
Even FAT-12 used linear addressing methods .
CHS is also misleading and CHS optimizations are wasteful .
Since all modern drives ( starting with the first Connor IDE 20 meg drive in the early days ) supported some form of intelligent sector remapping that would keep spare sectors available for relocating data after the magnetic medium of a heavily used sector elsewhere began to fail .
Sector remapping makes it so that CHS optimizations are entirely irrelevant since even brand new drives , straight off the production line ship with bad sectors that have been remapped elsewhere .
For better drive performance , algorithms space out the spare sectors across the drive so that when accessing a spare sector , the head does n't have to slam to the inner or outer rings of the disc .
But still , CHS does n't apply to absolute positions anymore .
This is 2010 now , I wrote file systems which functioned on 4096 byte sectors on ESDI drives back in the 80 's .
Made my drives much bigger doing it too .
It 's time that we move to larger sector sizes again .
Modern ECC is n't that much better than 80 's grade , however the processing power available to us is so much more that performing ECC on larger blocks of data is achievable .
Also , using RAID-5 , 5EE or RAID6 makes it so that we can depend less on single drive redundancy .
SCSI and IDE should be extended so that controller can inform the drive of bad sectors it finds when performing RAID XORing .</tokentext>
<sentencetext>Cylinders, head, sector addressing really needs to go anyway.
MBR has been hacked so many times over the years to support this method of formatting but in reality it's a total waste.
Old operating systems that were forced to work with this method of addressing had to translate to and from the CHS format since sector allocation has ALWAYS been linear.
Even FAT-12 used linear addressing methods.
CHS is also misleading and CHS optimizations are wasteful.
Since all modern drives (starting with the first Connor IDE 20 meg drive in the early days) supported some form of intelligent sector remapping that would keep spare sectors available for relocating data after the magnetic medium of a heavily used sector elsewhere began to fail.
Sector remapping makes it so that CHS optimizations are entirely irrelevant since even brand new drives, straight off the production line ship with bad sectors that have been remapped elsewhere.
For better drive performance, algorithms space out the spare sectors across the drive so that when accessing a spare sector, the head doesn't have to slam to the inner or outer rings of the disc.
But still, CHS doesn't apply to absolute positions anymore.
This is 2010 now, I wrote file systems which functioned on 4096 byte sectors on ESDI drives back in the 80's.
Made my drives much bigger doing it too.
It's time that we move to larger sector sizes again.
Modern ECC isn't that much better than 80's grade, however the processing power available to us is so much more that performing ECC on larger blocks of data is achievable.
Also, using RAID-5, 5EE or RAID6 makes it so that we can depend less on single drive redundancy.
SCSI and IDE should be extended so that controller can inform the drive of bad sectors it finds when performing RAID XORing.</sentencetext>
</comment>
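The CHS-to-LBA translation the comment above refers to is simple arithmetic. A minimal sketch, assuming an illustrative 16-head, 63-sectors-per-track geometry (real drives report whatever fiction the firmware chooses):

```python
# Sketch of the classic CHS -> LBA translation (geometry values are assumptions).
HEADS = 16     # heads per cylinder (illustrative)
SECTORS = 63   # sectors per track (illustrative; the S coordinate is 1-based)

def chs_to_lba(c, h, s):
    """Linear block address for cylinder c, head h, sector s."""
    return (c * HEADS + h) * SECTORS + (s - 1)

# With this geometry, the MBR at C/H/S 0/0/1 is LBA 0, and the traditional
# first-partition start at C/H/S 0/1/1 is LBA 63.
```

Because the mapping is linear anyway, the CHS fields in the MBR carry no information the LBA does not, which is the comment's point.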
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135416</id>
	<title>Set 32 sectors per track</title>
	<author>tchuladdiass</author>
	<datestamp>1266172620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>The simple solution is to set your Sectors per Track to 32.  This would make sure that everything is properly aligned (except the first partition, usually<nobr> <wbr></nobr>/boot, which is mis-aligned by one cylinder).</p></htmltext>
<tokenext>The simple solution is to set your Sectors per Track to 32 .
This would make sure that everything is properly aligned ( except the first partition , usually /boot , which is mis-aligned by one cylinder ) .</tokentext>
<sentencetext>The simple solution is to set your Sectors per Track to 32.
This would make sure that everything is properly aligned (except the first partition, usually /boot, which is mis-aligned by one cylinder).</sentencetext>
</comment>
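Why 32 sectors per track helps: fdisk places partition boundaries on multiples of the sectors-per-track value, and 32 logical sectors (32 × 512 = 16384 bytes) is an exact multiple of the 4096-byte physical sector, while the default 63 is not. A minimal sketch of that alignment check (sector sizes assumed, not queried from a drive):

```python
LOGICAL = 512    # logical (LBA) sector size in bytes, assumed
PHYSICAL = 4096  # physical sector size of an Advanced Format drive, assumed

def aligned(start_lba):
    """True if a partition starting at this 512-byte LBA sits on a 4 KiB boundary."""
    return (start_lba * LOGICAL) % PHYSICAL == 0

# The 63-sectors-per-track default starts the first partition at LBA 63
# (misaligned); with 32 sectors per track, starts fall on multiples of 32.
```

Any start LBA that is a multiple of 8 passes the check, which is why multiples of 32 work.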
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135388</id>
	<title>first misaligned post</title>
	<author>Anonymous</author>
	<datestamp>1266172440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>damnit, obviously since this is not technically the 'first post', my web browser must be misaligned by a post</p></htmltext>
<tokenext>damnit , obviously since this is not technically the 'first post ' , my web browser must be misaligned by a post</tokentext>
<sentencetext>damnit, obviously since this is not technically the 'first post', my web browser must be misaligned by a post</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137718</id>
	<title>RTFM please</title>
	<author>Anonymous</author>
	<datestamp>1266146520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I would be soooo embarrassed if it was me who wrote that...</p><p><div class="quote"><p>It has been suggested that WD might internally offset block addresses by 1 so that LBA 63 maps to LBA 64. This way, Windows XP partitions would not really be misaligned. I performed a test that demonstrates that WD has not done this</p></div><p>Or you could read the label on the <a href="http://public.blu.livefilestore.com/y1p\_Q9tsGIGzPYCUbPdOVzh77rUTNxhJJD6gYf24VY\_Q3mGnStnEGgKN8gGozyCF1CsEoEwaxSuERP7EOOKcUmkFw/P1020185.jpg" title="livefilestore.com" rel="nofollow">drive</a> [livefilestore.com]: "Windows XP, single partition: set jumpers 7-8 prior to installation".</p><p><div class="quote"><p>For<nobr> <wbr></nobr>/dev/sdd, I used fdisk to add a Linux (0x83) primary partition, taking up the whole disk, using fdisk defaults. By default, the partition starts at LBA 63.</p></div><p>So... you're a technical writer, you're blatantly ignoring WD's recommended practices, and you're still blaming all of Linux based on the <em>intentional misuse of one single tool</em>? The author might have had a point if <em>fdisk</em> was the only tool to create partitions. The reality is that I don't know a single graphical installer that uses fdisk. He even acknowledges this by mentioning Ubuntu, but apparently did not care enough to do a little research into other installers:<br>openSuSE: Yast partitioning<br>Fedora: anaconda (libparted?)<br>Debian: partman (libparted)<br>Slackware: cfdisk (?)</p><p>So, to sum up: most mainstream distros use a tool from this century. The only places where you might find fdisk is in text-mode installers, and those are mostly used by skilled technical people (but also by bad article writers, apparently). Of course, I'm not saying that the libparted-based installers perform any better in this respect, but neither is he. 
However, that would be an article worthy to write.</p><p><div class="quote"><p>Since this is one large file, and it can be written linearly to the disk, I expected that we would see a very slight performance hit. I think this is something that itself should be investigated. There's no reason for long contiguous writes to get hit this hard, and it's something that the kernel developers need to look into and fix.</p></div><p>How does the author know that it's a contiguous write? How full was the destination partition? What filesystem was used (extents-based, journalled)? What cache writeback mode was used? -1 for suggesting a kernel bug without giving enough detail to support that accusation.</p><p><div class="quote"><p>Timothy Miller is a Ph.D. student at The Ohio State University, specializing in Computer Architecture, and Artificial Intelligence</p></div><p>I'm not used to looking for a hidden agenda among technical people... but I wonder what his angle is here.</p></div>
	</htmltext>
<tokenext>I would be soooo embarrassed if it was me who wrote that ...
It has been suggested that WD might internally offset block addresses by 1 so that LBA 63 maps to LBA 64 .
This way , Windows XP partitions would not really be misaligned .
I performed a test that demonstrates that WD has not done this .
Or you could read the label on the drive [ livefilestore.com ] : " Windows XP , single partition : set jumpers 7-8 prior to installation " .
For /dev/sdd , I used fdisk to add a Linux ( 0x83 ) primary partition , taking up the whole disk , using fdisk defaults .
By default , the partition starts at LBA 63 .
So ... you 're a technical writer , you 're blatantly ignoring WD 's recommended practices , and you 're still blaming all of Linux based on the intentional misuse of one single tool ?
The author might have had a point if fdisk was the only tool to create partitions .
The reality is that I do n't know a single graphical installer that uses fdisk .
He even acknowledges this by mentioning Ubuntu , but apparently did not care enough to do a little research into other installers :
openSuSE : Yast partitioning
Fedora : anaconda ( libparted ? )
Debian : partman ( libparted )
Slackware : cfdisk ( ? )
So , to sum up : most mainstream distros use a tool from this century .
The only places where you might find fdisk is in text-mode installers , and those are mostly used by skilled technical people ( but also by bad article writers , apparently ) .
Of course , I 'm not saying that the libparted-based installers perform any better in this respect , but neither is he .
However , that would be an article worthy to write .
Since this is one large file , and it can be written linearly to the disk , I expected that we would see a very slight performance hit .
I think this is something that itself should be investigated .
There 's no reason for long contiguous writes to get hit this hard , and it 's something that the kernel developers need to look into and fix .
How does the author know that it 's a contiguous write ?
How full was the destination partition ?
What filesystem was used ( extents-based , journalled ) ?
What cache writeback mode was used ?
-1 for suggesting a kernel bug without giving enough detail to support that accusation .
Timothy Miller is a Ph.D. student at The Ohio State University , specializing in Computer Architecture , and Artificial Intelligence .
I 'm not used to looking for a hidden agenda among technical people ... but I wonder what his angle is here .</tokentext>
<sentencetext>I would be soooo embarrassed if it was me who wrote that...
It has been suggested that WD might internally offset block addresses by 1 so that LBA 63 maps to LBA 64.
This way, Windows XP partitions would not really be misaligned.
I performed a test that demonstrates that WD has not done this.
Or you could read the label on the drive [livefilestore.com]: "Windows XP, single partition: set jumpers 7-8 prior to installation".
For /dev/sdd, I used fdisk to add a Linux (0x83) primary partition, taking up the whole disk, using fdisk defaults.
By default, the partition starts at LBA 63.
So... you're a technical writer, you're blatantly ignoring WD's recommended practices, and you're still blaming all of Linux based on the intentional misuse of one single tool?
The author might have had a point if fdisk was the only tool to create partitions.
The reality is that I don't know a single graphical installer that uses fdisk.
He even acknowledges this by mentioning Ubuntu, but apparently did not care enough to do a little research into other installers:
openSuSE: Yast partitioning
Fedora: anaconda (libparted?)
Debian: partman (libparted)
Slackware: cfdisk (?)
So, to sum up: most mainstream distros use a tool from this century.
The only places where you might find fdisk is in text-mode installers, and those are mostly used by skilled technical people (but also by bad article writers, apparently).
Of course, I'm not saying that the libparted-based installers perform any better in this respect, but neither is he.
However, that would be an article worthy to write.
Since this is one large file, and it can be written linearly to the disk, I expected that we would see a very slight performance hit.
I think this is something that itself should be investigated.
There's no reason for long contiguous writes to get hit this hard, and it's something that the kernel developers need to look into and fix.
How does the author know that it's a contiguous write?
How full was the destination partition?
What filesystem was used (extents-based, journalled)?
What cache writeback mode was used?
-1 for suggesting a kernel bug without giving enough detail to support that accusation.
Timothy Miller is a Ph.D. student at The Ohio State University, specializing in Computer Architecture, and Artificial Intelligence.
I'm not used to looking for a hidden agenda among technical people... but I wonder what his angle is here.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135392</id>
	<title>Parted / GPT</title>
	<author>Anonymous</author>
	<datestamp>1266172440000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>I heard using parted and GPT labels instead of MSDOS will optimize it on 4096 byte sectors automatically. Any truth to it?</p></htmltext>
<tokenext>I heard using parted and GPT labels instead of MSDOS will optimize it on 4096 byte sectors automatically .
Any truth to it ?</tokentext>
<sentencetext>I heard using parted and GPT labels instead of MSDOS will optimize it on 4096 byte sectors automatically.
Any truth to it?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135864</id>
	<title>This only affects the newer 1TB+ WD Green drives</title>
	<author>HouseOfMisterE</author>
	<datestamp>1266176700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It appears that this does not affect the older 1TB+ Western Digital Green drives such as the WDC10EADS.  Those use 333GB platters and are native 512-byte sectors.  The newer (newest) Western Digital Green drives, like the WDC10EARS, use 500GB platters and have 4K sectors.  One way to tell the drives apart with a quick glance is the old Green drives had 32MB of cache and the new ones have 64MB of cache.</p></htmltext>
<tokenext>It appears that this does not affect the older 1TB + Western Digital Green drives such as the WDC10EADS .
Those use 333GB platters and are native 512-byte sectors .
The newer ( newest ) Western Digital Green drives , like the WDC10EARS , use 500GB platters and have 4K sectors .
One way to tell the drives apart with a quick glance is the old Green drives had 32MB of cache and the new ones have 64MB of cache .</tokentext>
<sentencetext>It appears that this does not affect the older 1TB+ Western Digital Green drives such as the WDC10EADS.
Those use 333GB platters and are native 512-byte sectors.
The newer (newest) Western Digital Green drives, like the WDC10EARS, use 500GB platters and have 4K sectors.
One way to tell the drives apart with a quick glance is the old Green drives had 32MB of cache and the new ones have 64MB of cache.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658</id>
	<title>Re:Open Source to the rescue</title>
	<author>buddyglass</author>
	<datestamp>1266174600000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>That's the beauty of Open Source.</p></div></blockquote><p>I guess the beauty of Closed Source, then, is that the OS supports it out of the box, without some user having to notice the problem, benchmark the performance hit, figure out (more or less) why it's happening, make a big blog post, then wait for a qualified dev to fix the problem and for the major distros to pick up the fix?</p>
	</htmltext>
<tokenext>That 's the beauty of Open Source .
I guess the beauty of Closed Source , then , is that the OS supports it out of the box , without some user having to notice the problem , benchmark the performance hit , figure out ( more or less ) why it 's happening , make a big blog post , then wait for a qualified dev to fix the problem and for the major distros to pick up the fix ?</tokentext>
<sentencetext>That's the beauty of Open Source.
I guess the beauty of Closed Source, then, is that the OS supports it out of the box, without some user having to notice the problem, benchmark the performance hit, figure out (more or less) why it's happening, make a big blog post, then wait for a qualified dev to fix the problem and for the major distros to pick up the fix?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137922</id>
	<title>linux is not ready because of fdisk?</title>
	<author>pizzap</author>
	<datestamp>1266148020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>fdisk, an elegant tool for a more civilized age... no wait. fdisk is antiquated and we only use it because we are afraid to leave the msdos partition table behind, out of the irrational fear that some other software would stop working.</htmltext>
<tokenext>fdisk , an elegant tool for a more civilized age ... no wait .
fdisk is antiquated and we only use it because we are afraid to leave the msdos partition table behind , out of the irrational fear that some other software would stop working .</tokentext>
<sentencetext>fdisk, an elegant tool for a more civilized age... no wait.
fdisk is antiquated and we only use it because we are afraid to leave the msdos partition table behind, out of the irrational fear that some other software would stop working.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31141988</id>
	<title>WD has a solution</title>
	<author>a-zA-Z0-9$\_.+!*'(),x</author>
	<datestamp>1266227400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://www.wdc.com/en/products/advancedformat/" title="wdc.com" rel="nofollow">http://www.wdc.com/en/products/advancedformat/</a> [wdc.com]<blockquote><div><p>WD Align software aligns partitions on the Advanced Format drive to ensure it provides full performance for certain configurations.</p></div>
</blockquote>
	</htmltext>
<tokenext>http : //www.wdc.com/en/products/advancedformat/ [ wdc.com ] WD Align software aligns partitions on the Advanced Format drive to ensure it provides full performance for certain configurations .</tokentext>
<sentencetext>http://www.wdc.com/en/products/advancedformat/ [wdc.com]WD Align software aligns partitions on the Advanced Format drive to ensure it provides full performance for certain configurations.

	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136542</id>
	<title>How do I align sectors on my drive</title>
	<author>Anonymous</author>
	<datestamp>1266139020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I have two WD Green drives that each have one big ext4 partition on them, and they appear to be nonaligned (the partition starts on sector 63). Is there a simple way to align the partition to the 4k sectors?</p></htmltext>
<tokenext>I have two WD Green drives that each have one big ext4 partition on them , and they appear to be nonaligned ( the partition starts on sector 63 ) .
Is there a simple way to align the partition to the 4k sectors ?</tokentext>
<sentencetext>I have two WD Green drives that each have one big ext4 partition on them, and they appear to be nonaligned (the partition starts on sector 63).
Is there a simple way to align the partition to the 4k sectors?</sentencetext>
</comment>
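The arithmetic behind the question above: a 512-byte LBA is 4 KiB-aligned when it is a multiple of 8, so a partition starting at sector 63 would need to move up to sector 64. A minimal sketch of that calculation only (actually realigning an existing partition also means shifting the data, which this does not attempt):

```python
ALIGN = 8  # 4096 / 512: number of 512-byte LBAs per 4 KiB physical sector

def next_aligned(start_lba):
    """Smallest LBA >= start_lba that begins a 4 KiB physical sector."""
    return -(-start_lba // ALIGN) * ALIGN  # ceiling division to a multiple of 8

# The misaligned default start of 63 rounds up to 64; an already aligned
# start is returned unchanged.
```

Modern partitioners typically start at LBA 2048 (1 MiB), which is also a multiple of 8 and therefore 4 KiB-aligned.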
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136042</id>
	<title>I have such a problem</title>
	<author>orkysoft</author>
	<datestamp>1266178380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have a tiny 1.8" usb harddisk with 4096-byte sectors, and the Ubuntu installer crashes when it tries to read the partitioning information. Very annoying.</p></htmltext>
<tokenext>I have a tiny 1.8 " usb harddisk with 4096-byte sectors , and the Ubuntu installer crashes when it tries to read the partitioning information .
Very annoying .</tokentext>
<sentencetext>I have a tiny 1.8" usb harddisk with 4096-byte sectors, and the Ubuntu installer crashes when it tries to read the partitioning information.
Very annoying.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135944</id>
	<title>Re:Open Source to the rescue</title>
	<author>marcansoft</author>
	<datestamp>1266177480000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>The beauty isn't Closed Source, it's Market Monopoly. You know, the one guaranteeing that device manufacturers make up for your failures by deviating from standards in order to make sure that their devices work out of the box with your broken OS.</p></htmltext>
<tokenext>The beauty is n't Closed Source , it 's Market Monopoly .
You know , the one guaranteeing that device manufacturers make up for your failures by deviating from standards in order to make sure that their devices work out of the box with your broken OS .</tokentext>
<sentencetext>The beauty isn't Closed Source, it's Market Monopoly.
You know, the one guaranteeing that device manufacturers make up for your failures by deviating from standards in order to make sure that their devices work out of the box with your broken OS.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137762
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136354
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135630
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136694
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135864
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136060
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31139808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137004
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31153460
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135984
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135416
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135704
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135492
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135922
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31144738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136018
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138312
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136120
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31142304
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136138
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135562
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135568
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136748
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135996
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135630
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137968
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135952
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135556
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135944
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_14_1541244_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138896
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136120
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138312
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135864
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136694
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135388
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136536
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136190
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31142304
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137762
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31138896
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31156412
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136018
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31144738
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135630
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135996
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135492
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135704
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135426
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135432
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135460
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136060
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135568
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137004
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31139808
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135658
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136748
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135944
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135510
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135922
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135562
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31136138
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135556
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135952
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31137968
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135416
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135984
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31153460
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135586
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135580
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31135392
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_14_1541244.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_14_1541244.31141988
</commentlist>
</conversation>
