<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_17_2222230</id>
	<title>Solid State Drives Tested With TRIM Support</title>
	<author>samzenpus</author>
	<datestamp>1245238800000</datestamp>
	<htmltext>Vigile writes <i>"Despite the rising excitement over SSDs, some of it has been tempered by <a href="//hardware.slashdot.org/article.pl?sid=09/02/13/2337258&amp;tid=198">performance degradation issues</a>.  The promised land is supposed to be the mighty TRIM command &mdash; a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files.  Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers.  A new article at PC Perspective evaluates <a href="http://www.pcper.com/article.php?aid=733">performance on a pair of Indilinx drives</a> as well as the <a href="http://www.pcper.com/article.php?aid=733&amp;type=expert&amp;pid=14">TRIM utility and its efficacy</a>."</i></htmltext>
<tokentext>Vigile writes " Despite the rising excitement over SSDs , some of it has been tempered by performance degradation issues .
The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files .
Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers .
A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy .
"</tokentext>
<sentencetext>Vigile writes "Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues.
The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files.
Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers.
A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy.
"</sentencetext>
</article>
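A note on what TRIM looks like from software: on Linux the same discard path can be driven by hand through the BLKDISCARD ioctl, the block-layer route for telling a device that a byte range is no longer needed. Below is a minimal Python sketch under that assumption; the device path and range are placeholders, and on a mounted filesystem you would normally let the filesystem (or a utility such as util-linux's fstrim) issue discards, since discarding a live range destroys its contents.

    import fcntl
    import os
    import struct

    # BLKDISCARD = _IO(0x12, 119) from <linux/fs.h>; hardcoded here because
    # the Python standard library does not export the constant.
    BLKDISCARD = 0x1277

    def discard_range(device, offset, length):
        """Tell the device that bytes [offset, offset+length) are unneeded.

        WARNING: the data in that range is gone afterwards, so only point
        this at a device or region you do not care about.
        """
        fd = os.open(device, os.O_WRONLY)
        try:
            # The ioctl argument is a pair of u64s: start and length in bytes.
            fcntl.ioctl(fd, BLKDISCARD, struct.pack('QQ', offset, length))
        finally:
            os.close(fd)

    # Hypothetical usage: discard the first 1 MiB of a scratch partition.
    # discard_range('/dev/sdb1', 0, 1024 * 1024)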
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368361</id>
	<title>Re:What I really want to know</title>
	<author>Anonymous</author>
	<datestamp>1245246960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I would think EXT4 would be the FS of choice for an SSD; if I'm wrong I wouldn't be surprised, but why those over EXT4?</p></htmltext>
<tokentext>I would think EXT4 would be the FS of choice for an SSD ; if I 'm wrong I would n't be surprised , but why those over EXT4 ?</tokentext>
<sentencetext>I would think EXT4 would be the FS of choice for an SSD; if I'm wrong I wouldn't be surprised, but why those over EXT4?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28375013</id>
	<title>I thought NILF stood for</title>
	<author>TravisO</author>
	<datestamp>1245344040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Nerds I'd Love to F#!K</p></htmltext>
<tokentext>Nerds I 'd Love to F # ! K</tokentext>
<sentencetext>Nerds I'd Love to F#!K</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368107</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28374375</id>
	<title>Re:Why Windows 7 in the summary?</title>
	<author>atamido</author>
	<datestamp>1245341460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Intel has specifically stated that the RAM cache on their drives is not used for writes.  It is used while remapping sections of the drive, condensing sections, etc.</p></htmltext>
<tokentext>Intel has specifically stated that the RAM cache on their drives is not used for writes .
It is used while remapping sections of the drive , condensing sections , etc .</tokentext>
<sentencetext>Intel has specifically stated that the RAM cache on their drives is not used for writes.
It is used while remapping sections of the drive, condensing sections, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368147</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369717</id>
	<title>Re:fragmentation?</title>
	<author>Anonymous</author>
	<datestamp>1245260460000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>browser.cache.disk.parent_directory</p></htmltext>
<tokentext>browser.cache.disk.parent_directory</tokentext>
<sentencetext>browser.cache.disk.parent_directory</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699</parent>
</comment>
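For anyone wanting to act on the pref named above: it can be changed in about:config or set from a profile's user.js. The directory shown here is an arbitrary example (e.g. a tmpfs mount), not anything from the comment, and it must exist and be writable by Firefox:

    // user.js -- relocate Firefox's disk cache (path is an example)
    user_pref("browser.cache.disk.parent_directory", "/tmp/ff-cache");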
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029</id>
	<title>Why Windows 7 in the summary?</title>
	<author>loufoque</author>
	<datestamp>1245244020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why is Windows 7 even in the summary?<br>People who buy high-end disk drives and care about Windows must be quite a minority. The point of hard disk drives with fast writing performance is for servers.</p></htmltext>
<tokentext>Why is Windows 7 even in the summary ?
People who buy high-end disk drives and care about Windows must be quite a minority .
The point of hard disk drives with fast writing performance is for servers .</tokentext>
<sentencetext>Why is Windows 7 even in the summary?
People who buy high-end disk drives and care about Windows must be quite a minority.
The point of hard disk drives with fast writing performance is for servers.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368209</id>
	<title>Re:But its the future</title>
	<author>Anonymous</author>
	<datestamp>1245245280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p><div class="quote"><p>I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.</p></div><p>Well damn, I'll just have to tell our customer that has something like a <b>30 petabyte <i>TAPE</i> archive</b> that's growing by about a terabyte or more each and every day that they're spending money on something you say is, umm, outdated and these newfangled devices that the next power surge will totally fry are the wave of the future.</p><p>Guess what?  There's a whole lot more money spent on proven rock-solid technology by large organizations then you apparently know.</p><p>Tape and hard drives are going <b>NOWHERE</b>.  For a long, long time to come.</p></div>
	</htmltext>
<tokentext>I finally got the opportunity to test out SSDs this year .
There may be the odd teething problem to get over , but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.Well damn , I 'll just have to tell our customer that has something like a 30 petabyte TAPE archive that 's growing by about a terabyte or more each and every day that they 're spending money on something you say is , umm , outdated and these newfangled devices that the next power surge will totally fry are the wave of the future.Guess what ?
There 's a whole lot more money spent on proven rock-solid technology by large organizations then you apparently know.Tape and hard drives are going NOWHERE .
For a long , long time to come .</tokentext>
<sentencetext>I finally got the opportunity to test out SSDs this year.
There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.
Well damn, I'll just have to tell our customer that has something like a 30 petabyte TAPE archive that's growing by about a terabyte or more each and every day that they're spending money on something you say is, umm, outdated and these newfangled devices that the next power surge will totally fry are the wave of the future.
Guess what?
There's a whole lot more money spent on proven rock-solid technology by large organizations than you apparently know.
Tape and hard drives are going NOWHERE.
For a long, long time to come.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369549</id>
	<title>A tip for people reading "fragmentation"</title>
	<author>Ilgaz</author>
	<datestamp>1245259140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Coriolis Systems (who produces iDefrag) jokingly referred to that issue on their blog:</p><p>"Ironically even SSDs, where you would expect the uniform access time to render fragmentation a problem of the past, still have various problems caused by exactly the same issue (1)."</p><p>Of course, they add:</p><p>"(1) For avoidance of doubt, we strongly recommend that you don't try to defragment your SSD-based volumes. The fragmentation issue on SSDs is internal to their implementation, and defragmenting the filesystem would only make matters worse."</p><p>In case you spot a good friend whom Microsoft has suggested should defrag their drive (Win7 even does it without asking), you had better tell them it is not the "magnetic disk fragmentation" issue. It is really different, and I have heard some real bad stories from people who defragmented (!) their SSD drives.</p></htmltext>
<tokentext>Coriolis Systems ( who produces iDefrag ) jokingly referred to that issue on their blog .
" Ironically even SSDs , where you would expect the uniform access time to render fragmentation a problem of the past , still have various problems caused by exactly the same issue ( 1 ) 'of course , they add : 1 For avoidance of doubt , we strongly recommend that you do n't try to defragment your SSD-based volumes .
The fragmentation issue on SSDs is internal to their implementation , and defragmenting the filesystem would only make matters worse.In case you spot a good friend who got suggested by Microsoft to defrag their drive ( Win7 does it even without asking ) , you better tell it is not the " magnetic disk fragmentation " issue .
It is really different and I heard some real bad stories from people who defragmented ( !
) their SSD drives .</tokentext>
<sentencetext>Coriolis Systems (who produces iDefrag) jokingly referred to that issue on their blog:
"Ironically even SSDs, where you would expect the uniform access time to render fragmentation a problem of the past, still have various problems caused by exactly the same issue (1)."
Of course, they add:
"(1) For avoidance of doubt, we strongly recommend that you don't try to defragment your SSD-based volumes.
The fragmentation issue on SSDs is internal to their implementation, and defragmenting the filesystem would only make matters worse."
In case you spot a good friend whom Microsoft has suggested should defrag their drive (Win7 even does it without asking), you had better tell them it is not the "magnetic disk fragmentation" issue.
It is really different, and I have heard some real bad stories from people who defragmented (!) their SSD drives.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369393</id>
	<title>ReiserFS 3</title>
	<author>bobbuck</author>
	<datestamp>1245257400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I bought a WinTec FileMate Ultra 24G from Tiger Direct that plugs into the ExpressCard Slot. I am now using that as the boot partition with reiserfs (v3), elevator=noop, and mounted noatime. This might not give the very best performance but it is much faster than the stock HD. OpenOffice loads in 2 seconds. I turned down the /sys/block/sdb/queue/read_ahead_kb but I'm not sure where it should be. I put my logs on tmpfs. Some people put the Firefox cache on tmpfs.</htmltext>
<tokentext>I bought a WinTec FileMate Ultra 24G from Tiger Direct that plugs into the ExpressCard Slot .
I am now using that as the boot partition with reiserfs ( v3 ) , elevator = noop , and mounted noatime .
This might not give the very best performance but it is much faster than the stock HD .
OpenOffice loads in 2 seconds .
I turned down the /sys/block/sdb/queue/read_ahead_kb but I 'm not sure where it should be .
I put my logs on tmpfs .
Some people put the Firefox cache on tmpfs .</tokentext>
<sentencetext>I bought a WinTec FileMate Ultra 24G from Tiger Direct that plugs into the ExpressCard Slot.
I am now using that as the boot partition with reiserfs (v3), elevator=noop, and mounted noatime.
This might not give the very best performance but it is much faster than the stock HD.
OpenOffice loads in 2 seconds.
I turned down the /sys/block/sdb/queue/read_ahead_kb but I'm not sure where it should be.
I put my logs on tmpfs.
Some people put the Firefox cache on tmpfs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891</parent>
</comment>
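For readers who want to reproduce a setup like the parent's, the pieces map onto ordinary configuration. A sketch under assumptions: the device name sdb1, the mount points, and the read-ahead value are illustrative placeholders, not details from the post.

    # /etc/fstab -- reiserfs root mounted noatime, logs kept on tmpfs
    /dev/sdb1   /          reiserfs   noatime             0 1
    tmpfs       /var/log   tmpfs      defaults,noatime    0 0

    # kernel command line (in the bootloader config): the no-op I/O scheduler
    # skips the seek-order sorting that only helps rotating disks
    elevator=noop

    # the sysfs read-ahead knob the parent mentions, value in KB
    echo 128 > /sys/block/sdb/queue/read_ahead_kb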
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368215</id>
	<title>Re:High failure rate</title>
	<author>Anonymous</author>
	<datestamp>1245245340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>That's a statistic that doesn't make any sense.</p><p>20% under what conditions, and in what timeframe? Over a long enough time period everything has a 100% failure rate.</p><p>Normal hard disks also <b>will</b> eventually fail, due to physical wear.</p><p>Also if it lasts long enough, at some point, reliability will stop being important. Even if it still works, very few people will want to use a 100MB hard disk from 15 years ago.</p></htmltext>
<tokentext>That 's a statistic that does n't make any sense .
20 % under what conditions , and in what timeframe ?
Over a long enough time period everything has a 100 % failure rate .
Normal hard disks also will eventually fail , due to physical wear .
Also if it lasts long enough , at some point , reliability will stop being important .
Even if it still works , very few people will want to use a 100MB hard disk from 15 years ago .</tokentext>
<sentencetext>That's a statistic that doesn't make any sense.
20% under what conditions, and in what timeframe?
Over a long enough time period everything has a 100% failure rate.
Normal hard disks also will eventually fail, due to physical wear.
Also if it lasts long enough, at some point, reliability will stop being important.
Even if it still works, very few people will want to use a 100MB hard disk from 15 years ago.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369301</id>
	<title>Re:Why Windows 7 in the summary?</title>
	<author>Anonymous</author>
	<datestamp>1245256260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The Intel X25 has a tiny buffer in the controller chip like every other SSD out there.  The buffer is needed for basic operation.  The DRAM inside the X25 isn't used as a cache for user data.  http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&amp;p=10</p></htmltext>
<tokentext>The Intel X25 has a tiny buffer in the controller chip like every other SSD out there .
The buffer is needed for basic operation .
The DRAM inside the X25 is n't used as a cache for user data .
http : //www.anandtech.com/cpuchipsets/intel/showdoc.aspx ? i = 3403&amp;p = 10</tokentext>
<sentencetext>The Intel X25 has a tiny buffer in the controller chip like every other SSD out there.
The buffer is needed for basic operation.
The DRAM inside the X25 isn't used as a cache for user data.
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&amp;p=10</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368147</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369715</id>
	<title>Re:What I really want to know</title>
	<author>benow</author>
	<datestamp>1245260460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yes, the speedup is dramatic.  The random access and multi-threaded speedup play a large role, and are left out of many comparisons.  MLC and a good interface make a difference, certainly, but the major speedup is from random access.
<p>
Lots of RAM and an SSD will make a box fly.</p></htmltext>
<tokentext>Yes , the speedup is dramatic .
The random access and multi-threaded speedup play a large role , and are left out of many comparisons .
MLC and a good interface make a difference , certainly , but the major speedup is from random access .
Lots of RAM and an SSD will make a box fly .</tokentext>
<sentencetext>Yes, the speedup is dramatic.
The random access and multi-threaded speedup play a large role, and are left out of many comparisons.
MLC and a good interface make a difference, certainly, but the major speedup is from random access.
Lots of RAM and an SSD will make a box fly.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368611</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199</id>
	<title>Re:fragmentation?</title>
	<author>cbhacking</author>
	<datestamp>1245245220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Disclaimer: I am not an SSD firmware author, although I've spoken to a few.*</p><p>As best I can understand it, the problem is that writes are scattered across the physical media by wear-leveling firmware on the disk. In order to do this, the firmware must have a "free list" of sorts that allows it to find an un-worn area for the next write. Of course, this unworn area also needs to not currently be storing any relevant data.</p><p>Now, consider an SSD in use. Initially, the whole disk is free, and writes can go anywhere at all. They do, too - you end up with meaningful (at some point) data covering the entirety of the physical memory cells pretty quickly (consider things like logfiles, pagefiles, hibernation data, temporary data, and so forth). Obviously, most of that data doesn't mean anything anymore - to the filesystem, only perhaps 20% of the SSD is actually used, after 6 months. However, the SSD's firmware thinks that every single part has now been used.</p><p>Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted. The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings. The other problem is that these tables get *huge* - a typical home system might have between 100K and 1M files on it after a few months of usage, but probably generates and deletes many thousands per day (consider web site cookies, for example - each time they get updated, the wear leveling will write that data to a new portion of the physical storage).</p><p>Maintaining the tables themselves is possible, and when a logical block gets overwritten to a new physical location, the old location can be freed. The problem is that this freeing comes at the same time that the SSD needs to find a new location to write to, and the only knowledge it has about physical blocks which can safely be overwritten is ones where the logical block has been overwritten already (to a different physical location). Obviously, the lookup into the table of active blocks has to be indexed by logical block, which may make it difficult to locate the oldest "free" physical blocks. This could lead to searches that, even with near-instant IO, result in noticeable slowdowns.</p><p>Enter the TRIM command, whereby an OS can tell the SSD that a given range of logical blocks (which haven't been overwritten yet) are now able to be recycled. This command allows the SSD to identify physical blocks which can safely be overwritten, and place them in its physical write queue, before the next write command comes down from the disk controller. It's unlikely to be a magic bullet, but should improve things substantially.</p><p>* As stated above, I don't personally write this stuff, so I may be mis-remembering or mis-interpreting. If anybody can explain it better, please do.</p></htmltext>
<tokentext>Disclaimer : I am not an SSD firmware author , although I 've spoken to a few .
* As best I can understand it , the problem is that writes are scattered across the physical media by wear-leveling firmware on the disk .
In order to do this , the firmware must have a " free list " of sorts that allows it to find an un-worn area for the next write .
Of course , this unworn area also needs to not currently be storing any relevant data.Now , consider a SSD in use .
Initially , the whole disk is free , and writes can go anywhere at all .
They do , too - you end up with meaningful ( at some point ) data covering the entirety of the physical memory cells pretty quickly ( consider things like logfiles , pagefiles , hibernation data , temporary data , and so forth ) .
Obviously , most of that data does n't mean anything anymore - to the filesystem , only perhaps 20 \ % of the SSD is actually used , after 6 months .
However , the SSD 's firmware thinks that every single part has now been used .
Obviously , the firmware needs to be able to detect when data on disk gets obsoleted , and can safely be deleted .
The problems with this are that this leads to * very * complicated translation tables - logical disk blocks end up having no relation at all to physical ones , and the SSD needs to track those mappings .
The other problem is that these tables get * huge * - a typical home system might have between 100K and 1M files on it after a few months of usage , but probably generates and deletes many thousands per day ( consider web site cookies , for example - each time they get updated , the wear leveling will write that data to a new portion of the physical storage ) .Maintaining the tables themselves is possible , and when a logical block gets overwritten to a new physical location , the old location can be freed .
The problem is that this freeing comes at the same time that the SSD needs to find a new location to write to , and the only knowledge it has about physical blocks which can safely be overwritten is ones where the logical block has been overwritten already ( to a different physical location ) .
Obviously , the lookup into the table of active blocks has to be indexed by logical block , which may make it difficult to locate the oldest " free " physical blocks .
This could lead to searches that , even with near-instant IO , result in noticeable slowdowns.Enter the TRIM command , whereby an OS can tell the SSD that a given range of logical blocks ( which have n't been overwritten yet ) are now able to be recycled .
This command allows the SSD to identify physical blocks which can safely be overwritten , and place them in its physical write queue , before the next write command comes down from the disk controller .
It 's unlikely to be a magic bullet , but should improve things substantially .
* As stated above , I do n't personally write this stuff , so I may be mis-remembering or mis-interpreting .
If anybody can explain it better , please do .</tokentext>
<sentencetext>Disclaimer: I am not an SSD firmware author, although I've spoken to a few.*
As best I can understand it, the problem is that writes are scattered across the physical media by wear-leveling firmware on the disk.
In order to do this, the firmware must have a "free list" of sorts that allows it to find an un-worn area for the next write.
Of course, this unworn area also needs to not currently be storing any relevant data.
Now, consider an SSD in use.
Initially, the whole disk is free, and writes can go anywhere at all.
They do, too - you end up with meaningful (at some point) data covering the entirety of the physical memory cells pretty quickly (consider things like logfiles, pagefiles, hibernation data, temporary data, and so forth).
Obviously, most of that data doesn't mean anything anymore - to the filesystem, only perhaps 20% of the SSD is actually used, after 6 months.
However, the SSD's firmware thinks that every single part has now been used.
Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted.
The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings.
The other problem is that these tables get *huge* - a typical home system might have between 100K and 1M files on it after a few months of usage, but probably generates and deletes many thousands per day (consider web site cookies, for example - each time they get updated, the wear leveling will write that data to a new portion of the physical storage).
Maintaining the tables themselves is possible, and when a logical block gets overwritten to a new physical location, the old location can be freed.
The problem is that this freeing comes at the same time that the SSD needs to find a new location to write to, and the only knowledge it has about physical blocks which can safely be overwritten is ones where the logical block has been overwritten already (to a different physical location).
Obviously, the lookup into the table of active blocks has to be indexed by logical block, which may make it difficult to locate the oldest "free" physical blocks.
This could lead to searches that, even with near-instant IO, result in noticeable slowdowns.
Enter the TRIM command, whereby an OS can tell the SSD that a given range of logical blocks (which haven't been overwritten yet) are now able to be recycled.
This command allows the SSD to identify physical blocks which can safely be overwritten, and place them in its physical write queue, before the next write command comes down from the disk controller.
It's unlikely to be a magic bullet, but should improve things substantially.
* As stated above, I don't personally write this stuff, so I may be mis-remembering or mis-interpreting.
If anybody can explain it better, please do.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
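To make the parent's free-list bookkeeping concrete, here is a toy model in Python. It is only a sketch of the idea (real FTLs work on pages inside erase blocks, persist their tables, and weigh wear counts); the class and its methods are invented for illustration, not taken from any vendor's firmware.

    import collections

    class ToyFTL:
        """Toy flash translation layer: logical->physical map plus a free list."""

        def __init__(self, physical_blocks):
            self.free = collections.deque(range(physical_blocks))  # erased blocks
            self.l2p = {}  # logical block -> physical block

        def write(self, logical):
            if not self.free:
                raise RuntimeError("no known-free block: drive must garbage-collect")
            phys = self.free.popleft()   # a real FTL would pick the least-worn block
            old = self.l2p.get(logical)
            self.l2p[logical] = phys
            if old is not None:
                self.free.append(old)    # overwriting is the only reclaim...
            return phys

        def trim(self, logicals):
            # ...unless the OS says these logical blocks are dead: reclaim them
            # now, before the next write has to go hunting for space.
            for logical in logicals:
                phys = self.l2p.pop(logical, None)
                if phys is not None:
                    self.free.append(phys)

    ftl = ToyFTL(physical_blocks=4)
    for lb in (0, 1, 2, 3):
        ftl.write(lb)          # every physical block now looks used
    ftl.trim([1, 3])           # files deleted: the OS tells the drive
    ftl.write(7)               # succeeds only because TRIM refilled the free list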
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369983</id>
	<title>Re:fragmentation?</title>
	<author>rdebath</author>
	<datestamp>1245263400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
Yes this would work very well.
</p><p>
BUT you MUST tell the drive you've done this; with the previous drives the only way is to use the drive's secure erase command to wipe the drive.
</p><p>
With these new drives you could just TRIM all the free space occasionally.</p></htmltext>
<tokentext>Yes this would work very well .
BUT you MUST tell the drive you 've done this ; with the previous drives the only way is to use the drive 's secure erase command to wipe the drive .
With these new drives you could just TRIM all the free space occasionally .</tokentext>
<sentencetext>
Yes this would work very well.
BUT you MUST tell the drive you've done this; with the previous drives the only way is to use the drive's secure erase command to wipe the drive.
With these new drives you could just TRIM all the free space occasionally.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368621</id>
	<title>Re:But its the future</title>
	<author>noidentity</author>
	<datestamp>1245249720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>As long as magnetic drives give lower effective price per bit, they will be used.</htmltext>
<tokentext>As long as magnetic drives give lower effective price per bit , they will be used .</tokentext>
<sentencetext>As long as magnetic drives give lower effective price per bit, they will be used.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857</id>
	<title>But its the future</title>
	<author>Anonymous</author>
	<datestamp>1245242640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.</p></htmltext>
<tokentext>I finally got the opportunity to test out SSDs this year .
There may be the odd teething problem to get over , but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago .</tokentext>
<sentencetext>I finally got the opportunity to test out SSDs this year.
There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368283</id>
	<title>Re:What I really want to know</title>
	<author>blitzkrieg3</author>
	<datestamp>1245246120000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>You beat me to it, but in the spirit of adding value, there's a good article <a href="http://www.linux-mag.com/id/7345/" title="linux-mag.com">here</a> [linux-mag.com].  Another benefit of nilfs2 is that you can easily snapshot and undelete files, giving it a sort of built in "time machine" technology (to use apple's terminology).
<br> <br>
I'm just surprised that none of the linux distros are talking about it yet.  You would think with the apple and ibm laptops using SSD today that there would be some option somewhere.  I think everyone is distracted by btrfs.</htmltext>
<tokentext>You beat me to it , but in the spirit of adding value , there 's a good article here [ linux-mag.com ] .
Another benefit of nilfs2 is that you can easily snapshot and undelete files , giving it a sort of built in " time machine " technology ( to use apple 's terminology ) .
I 'm just surprised that none of the linux distros are talking about it yet .
You would think with the apple and ibm laptops using SSD today that there would be some option somewhere .
I think everyone is distracted by btrfs .</tokentext>
<sentencetext>You beat me to it, but in the spirit of adding value, there's a good article here [linux-mag.com].
Another benefit of nilfs2 is that you can easily snapshot and undelete files, giving it a sort of built in "time machine" technology (to use apple's terminology).
I'm just surprised that none of the linux distros are talking about it yet.
You would think with the apple and ibm laptops using SSD today that there would be some option somewhere.
I think everyone is distracted by btrfs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28376047</id>
	<title>Re:fragmentation?</title>
	<author>sexconker</author>
	<datestamp>1245348000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A more serious and in depth description of it.</p><p><a href="http://www.anandtech.com/storage/showdoc.aspx?i=3531&amp;p=8" title="anandtech.com">http://www.anandtech.com/storage/showdoc.aspx?i=3531&amp;p=8</a> [anandtech.com]</p></htmltext>
<tokentext>A more serious and in depth description of it .
http://www.anandtech.com/storage/showdoc.aspx?i=3531&amp;p=8 [ anandtech.com ]</tokentext>
<sentencetext>A more serious and in depth description of it.
http://www.anandtech.com/storage/showdoc.aspx?i=3531&amp;p=8 [anandtech.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368653</id>
	<title>Re:But its the future</title>
	<author>Anonymous</author>
	<datestamp>1245250200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What do you mean "a few years ago".... my company still uses tape drives &gt;.</p></htmltext>
<tokentext>What do you mean " a few years ago " .... my company still uses tape drives &gt; .</tokentext>
<sentencetext>What do you mean "a few years ago" .... my company still uses tape drives &gt;.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369185</id>
	<title>Re:fragmentation?</title>
	<author>42forty-two42</author>
	<datestamp>1245255120000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>The problem isn't scanning metadata - the problem is relocating data prior to an erase. Flash memory is built into erase blocks that are quite large - 64k to 128k is typical. You can write to smaller regions, but to reset them for another write you have to pave over the neighborhood. However the OS is sending writes at the 512-byte sector granularity. So the drive has to essentially mark the old location for the data as obsolete, and place it somewhere else.<br><br>When the drive has been used enough, however, it may have trouble finding an empty, erased sector to write to. So it has to erase some erase block. But if all erase blocks still have good data (e.g., each has half used, important data and half obsolete, overwritten data), you need to relocate some of that data elsewhere.<br><br>What the trim command does is tell the drive that it need not preserve the data of a given sector - otherwise, if you were to delete a file, the drive would still have to preserve its data each time one of these relocation operations occurs, since it doesn't know anything about the filesystem's allocation maps. By using TRIM, the drive is aware of what data is deleted, and that data can thus be discarded when it's time to erase blocks. It also increases the percentage of truly unused flash sectors, increasing the probability that a write can go through without having to wait for a relocation.<br><br>Note that this is completely independent of filesystem fragmentation - indeed, a defrag can even make things worse, by making the flash drive think both old and new locations for some data need preserving.</htmltext>
<tokentext>The problem is n't scanning metadata - the problem is relocating data prior to an erase .
Flash memory is built into erase blocks that are quite large - 64k to 128k is typical .
You can write to smaller regions , but to reset them for another write you have to pave over the neighborhood .
However the OS is sending writes at the 512-byte sector granularity .
So the drive has to essentially mark the old location for the data as obsolete , and place it somewhere else.When the drive has been used enough , however , it may have trouble finding an empty , erased sector to write to .
So it has to erase some erase block .
But if all erase blocks still have good data ( eg , each has half used , important data and half obsolete , overwritten data ) , you need to relocate some of that data elsewhere.What the trim command does is tell the drive that it need not preserve the data of a given sector - otherwise , if you were to delete a file , the drive would still have to preserve its data each time one of these relocation operations occur , since it does n't know anything about the filesystem 's allocation maps .
By using TRIM , the drive is aware of what data is deleted , and can thus be discarded when it 's time to erase blocks .
It also increases the percentage of truly unused flash sectors , increasing the probability that a write can go through without having to wait for a relocation.Note that this is completely independent from filesystem fragmentation - indeed , a defrag can even make things worse , by making the flash drive think both old and new locations for some data need preserving .</tokentext>
<sentencetext>The problem isn't scanning metadata - the problem is relocating data prior to an erase.
Flash memory is built into erase blocks that are quite large - 64k to 128k is typical.
You can write to smaller regions, but to reset them for another write you have to pave over the neighborhood.
However the OS is sending writes at the 512-byte sector granularity.
So the drive has to essentially mark the old location for the data as obsolete, and place it somewhere else.
When the drive has been used enough, however, it may have trouble finding an empty, erased sector to write to.
So it has to erase some erase block.
But if all erase blocks still have good data (e.g., each has half used, important data and half obsolete, overwritten data), you need to relocate some of that data elsewhere.
What the trim command does is tell the drive that it need not preserve the data of a given sector - otherwise, if you were to delete a file, the drive would still have to preserve its data each time one of these relocation operations occurs, since it doesn't know anything about the filesystem's allocation maps.
By using TRIM, the drive is aware of what data is deleted, and that data can thus be discarded when it's time to erase blocks.
It also increases the percentage of truly unused flash sectors, increasing the probability that a write can go through without having to wait for a relocation.
Note that this is completely independent of filesystem fragmentation - indeed, a defrag can even make things worse, by making the flash drive think both old and new locations for some data need preserving.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199</parent>
</comment>
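A few lines of Python make the relocation cost described above concrete. The numbers are arbitrary stand-ins (128 sectors per erase block); the point is only that TRIM shrinks the set of sectors the drive believes it must copy out before it can erase.

    # One erase block, e.g. 64 KiB of 512-byte sectors.
    SECTORS_PER_ERASE_BLOCK = 128

    def relocation_cost(live, trimmed):
        """Sectors that must be copied elsewhere before the block can be erased."""
        return len(live - trimmed)

    live = set(range(SECTORS_PER_ERASE_BLOCK))                # drive thinks all live
    freed_by_fs = set(range(0, SECTORS_PER_ERASE_BLOCK, 2))   # FS deleted half

    print(relocation_cost(live, set()))          # without TRIM: 128 sectors to copy
    print(relocation_cost(live, freed_by_fs))    # with TRIM:     64 sectors to copy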
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699</id>
	<title>Re:fragmentation?</title>
	<author>sootman</author>
	<datestamp>1245250740000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p><i>Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted. The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings.</i></p><p>Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore? I know it's not an ideal solution but rsync or Time Machine would make it pretty painless.</p><p>Also, if I had an SSD and was browsing a lot I could see making a ramdisk for things like browser cache files. Too bad Safari and Firefox don't seem to let you specify where you want your cache to be anymore, like old browsers used to. I guess you could make a symlink or something but then you'd HAVE to have that drive mounted.</p></htmltext>
<tokentext>Obviously , the firmware needs to be able to detect when data on disk gets obsoleted , and can safely be deleted .
The problems with this are that this leads to * very * complicated translation tables - logical disk blocks end up having no relation at all to physical ones , and the SSD needs to track those mappings.Would it solve the problem ( or , I guess I should say , remove the symptoms... for a while , at least ) to do a full backup , format the SSD , and restore ?
I know it 's not an ideal solution but rsync or Time Machine would make it pretty painless.Also , if I had an SSD and was browsing a lot I could see making a ramdisk for things like browser cache files .
Too bad Safari and Firefox do n't seem to let you specify where you want your cache to be anymore , like old browsers used to .
I guess you could make a symlink or something but then you 'd HAVE to have that drive mounted .</tokentext>
<sentencetext>Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted.
The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings.
Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore?
I know it's not an ideal solution but rsync or Time Machine would make it pretty painless.
Also, if I had an SSD and was browsing a lot I could see making a ramdisk for things like browser cache files.
Too bad Safari and Firefox don't seem to let you specify where you want your cache to be anymore, like old browsers used to.
I guess you could make a symlink or something but then you'd HAVE to have that drive mounted.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951</id>
	<title>High failure rate</title>
	<author>Anonymous</author>
	<datestamp>1245243420000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext>I've heard that the failure rate on SSDs can be as high as 20%. As I am too lazy to google this or even RTFA I am wondering if this is true. If it is true then adoption rates are going to be very low and this technology may never take off before something new and better comes around. Of course even if it isn't true then there is still the perception by a lot of (ignorant) people (like me) that there is a high failure rate so adoption will still be very slow.<p>[perceived] Bottom line: SSDs don't work well, so let's just wait until something better comes along.
<br> <br>
Also doesn't one of the hardware manufacturers (Samsung I think) have a patent on SSDs so no one else can make the drives anyway? Proprietary == Dead</p></htmltext>
<tokentext>I 've heard that the failure rate on SSDs can be as high as 20 % .
As I am too lazy to google this or even RTFA I am wondering if this is true .
If it is true then adoption rates are going to be very low and this technology may never take off before something new and better comes around .
Of course even if it is n't true then there is still the perception by a lot of ( ignorant ) people ( like me ) that there is a high failure rate so adoption will still be very slow .
[ perceived ] Bottom line : SSDs do n't work well , so let 's just wait until something better comes along .
Also does n't one of the hardware manufacturers ( Samsung I think ) have a patent on SSDs so no one else can make the drives anyway ?
Proprietary == Dead</tokentext>
<sentencetext>I've heard that the failure rate on SSDs can be as high as 20%.
As I am too lazy to google this or even RTFA I am wondering if this is true.
If it is true then adoption rates are going to be very low and this technology may never take off before something new and better comes around.
Of course even if it isn't true then there is still the perception by a lot of (ignorant) people (like me) that there is a high failure rate so adoption will still be very slow.
[perceived] Bottom line: SSDs don't work well, so let's just wait until something better comes along.
Also doesn't one of the hardware manufacturers (Samsung I think) have a patent on SSDs so no one else can make the drives anyway?
Proprietary == Dead</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057</id>
	<title>Re:What I really want to know</title>
	<author>Anonymous</author>
	<datestamp>1245244200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>NILFS2 I suppose.<br>Supposedly beats the crap out of LogFS, YAFFS and JFFS2 when using SSDs.</p></htmltext>
<tokentext>NILFS2 I suppose .
Supposedly beats the crap out of LogFS , YAFFS and JFFS2 when using SSDs .</tokentext>
<sentencetext>NILFS2 I suppose.
Supposedly beats the crap out of LogFS, YAFFS and JFFS2 when using SSDs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368197</id>
	<title>Re:fragmentation?</title>
	<author>sexconker</author>
	<datestamp>1245245220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Because, basically, flash drives are laid out in levels.</p><p>When you delete, you simply map logical space as free.</p><p>If you go to use that free space later, you find that area, and drop shit into it.  It's I dunno, a 32 KB block of memory called a page.  If the page is full (to the point where you can't fit your new shit) of "deleted" files, you first need to write over those deleted files, then write your actual data.</p><p>If the logical space is full with good, fragmented (with deleted files interspersed) files, you need to read out to memory, reorder the living data and remove the deleted data, and add the full page back.</p><p>Think of it as having a notebook.<br>You can write to 1 page at a time, only.</p><p>Page 1 write</p><p>Page 2 write</p><p>Page 3 write</p><p>Page 2 delete</p><p>Page 2 write (still space)</p><p>Page 2 write (not enough space, write to page 4 instead)</p><p>Page 2 delete</p><p>Page 2 write (not enough space, no more blank pages, read page 2 and copy non-deleted shit to scratch paper, add new shit to scratch paper, cover page 2 in white out, copy scratch paper to whited-out page 2)</p></htmltext>
<tokentext>Because , basically , flash drives are laid out in levels .
When you delete , you simply map logical space as free .
If you go to use that free space later , you find that area , and drop shit into it .
It 's I dunno , a 32 KB block of memory called a page .
If the page is full ( to the point where you ca n't fit your new shit ) of " deleted " files , you first need to write over those deleted files , then write your actual data.If the logical space is full with good , fragmented ( with deleted files interspersed ) files , you need to read out to memory , reorder the living data and remove the deleted data , add in the full page back.Think of it as having a notebook.You can write to 1 page at a time , only.Page 1 writePage 2 writePage 3 writePage 2 deletePage 2 write ( still space ) Page 2 write ( not enough space , write to page 4 instead ) Page 2 deletePage 2 write ( not enough space , no more blank pages , read page 2 and copy non-deleted shit to scratch paper , add new shit to scratch paper , cover page 2 in white out , copy scratch paper to whited-out page 2 )</tokentext>
<sentencetext>Because, basically, flash drives are laid out in levels.
When you delete, you simply map logical space as free.
If you go to use that free space later, you find that area, and drop shit into it.
It's I dunno, a 32 KB block of memory called a page.
If the page is full (to the point where you can't fit your new shit) of "deleted" files, you first need to write over those deleted files, then write your actual data.
If the logical space is full with good, fragmented (with deleted files interspersed) files, you need to read out to memory, reorder the living data and remove the deleted data, and add the full page back.
Think of it as having a notebook.
You can write to 1 page at a time, only.
Page 1 write
Page 2 write
Page 3 write
Page 2 delete
Page 2 write (still space)
Page 2 write (not enough space, write to page 4 instead)
Page 2 delete
Page 2 write (not enough space, no more blank pages, read page 2 and copy non-deleted shit to scratch paper, add new shit to scratch paper, cover page 2 in white out, copy scratch paper to whited-out page 2)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28401901</id>
	<title>Re:fragmentation?</title>
	<author>badkarmadayaccount</author>
	<datestamp>1245513420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Reminds me of defragmentation techniques for NT 4. Backup to tape, restore. ROFLMAO</htmltext>
<tokentext>Reminds me of defragmentation techniques for NT 4 .
Backup to tape , restore .
ROFLMAO</tokentext>
<sentencetext>Reminds me of defragmentation techniques for NT 4.
Backup to tape, restore.
ROFLMAO</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368109</id>
	<title>Re:Why Windows 7 in the summary?</title>
	<author>Anonymous</author>
	<datestamp>1245244680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>*Must* be? You selfish, self-serving jerk. Start paying attention to things outside the idealistic server room setting. There are a lot of home users that want that speed for their own reasons.</p></htmltext>
<tokentext>* Must * be ?
You selfish , self-serving jerk .
Start paying attention to things outside the idealistic server room setting .
There are a lot of home users that want that speed for their own reasons .</tokentext>
<sentencetext>*Must* be?
You selfish, self-serving jerk.
Start paying attention to things outside the idealistic server room setting.
There are a lot of home users that want that speed for their own reasons.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371851</id>
	<title>Re:What I really want to know</title>
	<author>Hal_Porter</author>
	<datestamp>1245327120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>MILFs has an issue for long term use though. When you first use it, it is very trim and has little overhead and is highly responsive. Later on it gradually bloats to the point it is unusable and/or spends its time servicing requests from other systems and ignoring you. Then you pretty much have to waste it and replace it with Reiser.</p></htmltext>
<tokentext>MILFs has an issue for long term use though .
When you first use it is very trim and has little overhead and is highly responsive .
Later on it gradually bloats to the point it is unusable and/or spends its time servicing requests from other systems and ignoring you .
Then you pretty much have to waste it and replace it with Reiser .</tokentext>
<sentencetext>MILFs has an issue for long term use though.
When you first use it, it is very trim and has little overhead and is highly responsive.
Later on it gradually bloats to the point it is unusable and/or spends its time servicing requests from other systems and ignoring you.
Then you pretty much have to waste it and replace it with Reiser.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371715</id>
	<title>Re:fragmentation?</title>
	<author>metacell</author>
	<datestamp>1245325620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore?</p></div><p>I think it would alleviate the problems for a while, provided you do a low-level reformat on the SSD. The unused blocks would be marked as unused by the SSD until each of them had been overwritten once. (Which unfortunately happens pretty quickly, because of the wear level balancing.)</p></div>
	</htmltext>
<tokentext>Would it solve the problem ( or , I guess I should say , remove the symptoms... for a while , at least ) to do a full backup , format the SSD , and restore ?
I think it would alleviate the problems for a while , provided you do a low-level reformat on the SSD .
The unused blocks would be marked as unused by the SSD until each of them had been overwritten once .
( Which unfortunately happens pretty quickly , because of the wear level balancing . )</tokentext>
<sentencetext>Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore?
I think it would alleviate the problems for a while, provided you do a low-level reformat on the SSD.
The unused blocks would be marked as unused by the SSD until each of them had been overwritten once.
(Which unfortunately happens pretty quickly, because of the wear level balancing.)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369111</id>
	<title>Re:But its the future</title>
	<author>Anonymous</author>
	<datestamp>1245254340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The thing, you see, is that hard drives have a much better linear write speed than SSDs.</p><p>The problem (for HDs) is that the IDE protocol (regardless of whether it's over IDE or SATA), and likewise the SCSI protocol (over... whatever) try to expose a high level interface: linearly addressed blocks. And you don't know (at the OS level) where they are, how long it's going to take to get there and write to them and so on. On top of that, the OS exposes an even higher-level interface: files.</p><p>The result is an interface that's much slower than it could be.</p><p>To take a simple example, let's say you're running a database. A transaction commits (or prepares). The DB doesn't care *where* the prepare or commit is written, and it would be quite happy to reserve 4 blocks per cylinder (latency .5 ms on a 15k rpm disk) or 16 (.13 ms) to write it down ASAP, instead of the current 5-10 ms. But it can't. It'd need to know where the heads are and then say "on that same cylinder (or one next to it), platter m, sector n1, or n2, or n3, write *this* *next time you're on it*, then tell me (I'll clean up later)." Not possible.</p><p>The OS gets in the way. Theoretically it could expose a different FS access layer that would allow it, but the HD interfaces we have just can't support it.</p><p>And that's the issue. And it's the same issue with SSD. OSes are perfectly capable of handling 64k write blocks, avoiding rewriting too many times, packing writes, and so on, but we're stuck with this "nice" "addressable" "block" crap and drives trying to do too many things that the OS could do much better.</p><p>It's a really f*cked up situation.</p></htmltext>
<tokenext>The thing , you see , is that hard drives have a much better linear write speed than SSDs.The problem ( for HDs ) is that the IDE protocol ( regardless of whether it 's over IDE or SATA ) , and likewise the SCSI protocol ( over... whatever ) try to expose a high level interface : linearly addressed blocks .
And you do n't know ( at the OS level ) where they are , how long it 's going to take to get there and write to them and so on .
On top of that , the OS exposes a higher again level interface : files.The result is an interface that 's much slower than could be.To take a simple example , let 's say you 're running a database .
A transaction commits ( or prepares ) .
The DB does n't care * where * the prepare or commit is written , and it would be quite happy to reserve 4 blocks per cylinder ( latency .5 ms on a 15k rpm disk ) or 16 ( .13 ms ) to write it down ASAP , instead of the current 5-10 ms. But it ca n't .
It 'd need know where the heads are and then say " on that same cylinder ( or one next to it ) , platter m , sector n1 , or n2 , or n3 , write * this * * next time you 're on it * , then tell me ( I 'll clean up later ) .
Not possible.The OS gets in the way .
Theoretically it could expose a different FS access layer that would allow it , but the HD interfaces we have just ca n't support it.And that 's the issue .
And it 's the same issue with SSD .
OSes are perfectly capable of handling 64k write blocks , avoid rewriting too many times , packing writes , and so on , but we 're stuck with this " nice " " addressable " " block " crap and drives trying to do too many things that the OS could do much better.It 's a really f * cked up situation .</tokentext>
<sentencetext>The thing, you see, is that hard drives have a much better linear write speed than SSDs.The problem (for HDs) is that the IDE protocol (regardless of whether it's over IDE or SATA), and likewise the SCSI protocol (over... whatever) try to expose a high level interface: linearly addressed blocks.
And you don't know (at the OS level) where they are, how long it's going to take to get there and write to them and so on.
On top of that, the OS exposes a higher again level interface: files.The result is an interface that's much slower than could be.To take a simple example, let's say you're running a database.
A transaction commits (or prepares).
The DB doesn't care *where* the prepare or commit is written, and it would be quite happy to reserve 4 blocks per cylinder (latency .5 ms on a 15k rpm disk) or 16 (.13 ms) to write it down ASAP, instead of the current 5-10 ms. But it can't.
It'd need know where the heads are and then say "on that same cylinder (or one next to it), platter m, sector n1, or n2, or n3, write *this* *next time you're on it*, then tell me (I'll clean up later).
Not possible.The OS gets in the way.
Theoretically it could expose a different FS access layer that would allow it, but the HD interfaces we have just can't support it.And that's the issue.
And it's the same issue with SSD.
OSes are perfectly capable of handling 64k write blocks, avoid rewriting too many times, packing writes, and so on, but we're stuck with this "nice" "addressable" "block" crap and drives trying to do too many things that the OS could do much better.It's a really f*cked up situation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368211</id>
	<title>Re:fragmentation?</title>
	<author>Bigjeff5</author>
	<datestamp>1245245280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>In very simple terms (because I'm no expert), it's because of the way SSDs deal with wear leveling and the fact that a single write is non-sequential. When an SSD writes data, it is writing to multiple segments across multiple chips. It is very fast to do it this way; in fact the linear alternative creates heavy wear and is significantly slower (think single-chip USB flash drives) than even spinning-disk tech, so this non-sequential write is essential.</p><p>Now, to achieve this, each chip is broken down into segments, those segments are broken down into smaller segments, which are broken down into bytes, which are then broken down into bits. When the SSD writes, it writes to the next available bit in the next available segment on each of the chips in the drive. Because it keeps track of exactly where it left off, this process is extremely fast, as all new data goes to the next place in line.</p><p>The problem comes when you fill up the drive and then delete data. When you delete data, you are deleting little bits spread all over the physical drive. Unless it is a tiny file, every chip will have a little bit of the file. What's worse, unless it was a massive file, you probably won't be clearing whole sequential segments on the drive. To add to that even further, the OS doesn't actually delete anything, it just flags it! So what this means is that after you have cleared a bunch of room on your drive, your SSD is still massively fragmented, and to write new data it has to find free bits and clear them first. Think worst-case scenario for spinning-disk fragmentation and that's what you have - and you will get it every single time you fill up an SSD. You can actually re-format the drive and it won't necessarily fix the fragmentation problem, because formatting won't reset the segments on the chip to factory state and update the internal drive index in such a way that it maximizes speed again.</p><p>Now, because the SSD is sort of like a very large RAID array with very tiny disks, even in this state it is still faster than a conventional spinning-disk hard drive. But it is nowhere near as fast as it was when it was clean and new.</p><p>Thus, the TRIM functions that have been mentioned. Basically these go through and do a de-frag of the data, which requires maximizing the space at the "back" of each chip, then re-setting those free segments to the factory state. Depending on how much needs to be moved, this can have wear concerns, so you don't really want to do this all the time. The idea with SSDs is to fill them all the way up, then clear out as much room as you possibly can before trimming the drive. Once trimmed, the drive should be back to pre-fragmentation speeds, but you have also just written many more times to some bits on the drive than others, which raises wear concerns if the process has to be repeated too many times.</p>
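<p>The "the OS just flags it" point is the heart of the problem: the filesystem's free map and the drive's internal map drift apart. A minimal sketch of that bookkeeping mismatch (my own illustration with hypothetical page numbers, not a real drive's logic):</p><pre>
fs_free = set()                    # pages the filesystem considers free
drive_live = set(range(8))         # pages the drive still considers live

fs_free.update({1, 3, 4, 6})       # "delete" files: OS-side bookkeeping only

# Without TRIM the drive's view never changes, so these pages must be
# salvaged (read/erase/rewrite) before they can hold new data:
stale = fs_free.intersection(drive_live)
print(f"free to the OS, still live to the drive: {sorted(stale)}")

drive_live.difference_update(stale)   # what a TRIM for those pages would do
print(f"drive's live set after TRIM: {sorted(drive_live)}")
</pre>
	</htmltext>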
<tokenext>In very simple terms ( because I 'm no expert ) , it 's because of the way SSDs deal with wear leveling and the fact that a single write is non-sequential .
When it writes data , it is writing to multiple segments across multiple chips .
It is very fast to do it this way , in fact the linear alternative creates heavy wear and is significantly slower ( think single chip usb flash drives ) than even spinning disk tech , and so this non-sequential write is essential.Now , to achieve this , each chip is broken down into segments , and those segments are broken down into smaller segments , which are broken down into bytes , which are then broken down bits .
When the SSD writes , it writes to the next available bit in the next available segment on each of the chips in the drive .
Because it keeps track of exactly where it left off , this process is extremely fast , as all new data goes to the next place in line.The problem comes when you fill up the hard drive and then delete data .
When you delete data , you are deleting little bits spread all over the physical drive .
Unless it is a tiny file , every chip will have a little bit of the file .
What 's worse , unless it was a massive file , you probably wont be clearing whole sequential segments on the drive .
To add to that even further , the OS does n't actually delete anything , it just flags it !
So what this means is after you cleared a bunch of room on your hard drive , when writing new data your SSD is still massively fragmented , and to write new data the drive has to find free bits and clear them first .
Think worst case scenario for spinning disk fragmentation and that 's what you have - and you will get it every single time you fill up an SSD .
You can actually re-format the drive and it wo n't necessarily fix the fragmentation problem , because formating wo n't reset the segments on the chip to factory state and update the internal drive index in such a way that it maximizes speed again.Now , because the SSD is sort of like a very large RAID array with very tiny disks , even in this state is still faster than a conventional spinning-disk hard drive .
But it is nowhere near as fast it was when it was clean and new.Thus , the TRIM functions that have been mentioned .
Basically these go through and do a de-frag of the data , which requires maximising the space at the " back " of each chip , then re-setting those free segments to the factory state .
Depending on how much needs to be moved , this can have wear concerns , so you do n't really want to do this all the time .
The idea with SSDs is to fill them all the way up , then clear out as much room as you possibly can before trimming the drive .
Once trimmed the drive should be back to pre-fragmentation speeds , but you have also just written many more times to some bits on the drive than others , which raises wear concers if the process has to be repeated too many times .</tokentext>
<sentencetext>In very simple terms (because I'm no expert), it's because of the way SSDs deal with wear leveling and the fact that a single write is non-sequential.
When it writes data, it is writing to multiple segments across multiple chips.
It is very fast to do it this way, in fact the linear alternative creates heavy wear and is significantly slower (think single chip usb flash drives) than even spinning disk tech, and so this non-sequential write is essential.Now, to achieve this, each chip is broken down into segments, and those segments are broken down into smaller segments, which are broken down into bytes, which are then broken down bits.
When the SSD writes, it writes to the next available bit in the next available segment on each of the chips in the drive.
Because it keeps track of exactly where it left off, this process is extremely fast, as all new data goes to the next place in line.The problem comes when you fill up the hard drive and then delete data.
When you delete data, you are deleting little bits spread all over the physical drive.
Unless it is a tiny file, every chip will have a little bit of the file.
What's worse, unless it was a massive file, you probably wont be clearing whole sequential segments on the drive.
To add to that even further, the OS doesn't actually delete anything, it just flags it!
So what this means is after you cleared a bunch of room on your hard drive, when writing new data your SSD is still massively fragmented, and to write new data the drive has to find free bits and clear them first.
Think worst case scenario for spinning disk fragmentation and that's what you have - and you will get it every single time you fill up an SSD.
You can actually re-format the drive and it won't necessarily fix the fragmentation problem, because formating won't reset the segments on the chip to factory state and update the internal drive index in such a way that it maximizes speed again.Now, because the SSD is sort of like a very large RAID array with very tiny disks, even in this state is still faster than a conventional spinning-disk hard drive.
But it is nowhere near as fast it was when it was clean and new.Thus, the TRIM functions that have been mentioned.
Basically these go through and do a de-frag of the data, which requires maximising the space at the "back" of each chip, then re-setting those free segments to the factory state.
Depending on how much needs to be moved, this can have wear concerns, so you don't really want to do this all the time.
The idea with SSDs is to fill them all the way up, then clear out as much room as you possibly can before trimming the drive.
Once trimmed the drive should be back to pre-fragmentation speeds, but you have also just written many more times to some bits on the drive than others, which raises wear concers if the process has to be repeated too many times.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368147</id>
	<title>Re:Why Windows 7 in the summary?</title>
	<author>Robotbeat</author>
	<datestamp>1245244860000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Even the best consumer-level SSDs like the Intel X25-M/E use a volatile RAM cache to speed up the writes. In fact, with the cache disabled, random write IOPS drops to about 1200, which is only about three or four times as good as a 15k 2.5" drive. The more expensive, truly-enterprise SSDs which don't need a volatile write cache cost at LEAST $20/GB, so the $/(safe random write IOP) ratio is actually still pretty close, and cheap SATA drives may actually be even on that metric with the fast enterprise SSDs. Granted, this shouldn't be the case in a year, but that's where it is right now. (Also, the performance-per-slot is a lot higher for SSDs, which can translate into $, power, and space savings.)</p>
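<p>The $/(safe random write IOP) comparison is just price divided by IOPS. A sketch using the IOPS figures above with placeholder prices (every dollar amount and the enterprise IOPS figure below are invented for illustration, not quotes):</p><pre>
# Placeholder prices (invented); the IOPS figures echo the post above.
drives = {
    "15k rpm 2.5in SAS":        (200.0, 350),          # roughly 1200 / 3.5
    "consumer SSD (cache off)": (400.0, 1200),
    "enterprise SSD at $20/GB": (20.0 * 64, 1200 * 4), # 64 GB drive, IOPS guessed
}
for name, (price_usd, iops) in drives.items():
    print(f"{name:26s} ${price_usd / iops:.3f} per safe random write IOP")
</pre>
	</htmltext>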
<tokenext>Even the best consumer-level SSDs like the Intel x-25m/e use a volatile RAM cache to speed up the writes .
In fact , with the cache disabled , random write IOPS drops to about 1200 , which is only about three or four times as good as a 15k 2.5 " drive .
The more expensive truly-enterprise SSD drives which do n't need a volatile write cache cost at LEAST $ 20/GB , so the $ / ( safe random write iop ) ratio is actually still pretty close , and cheap SATA drives may actually be even on that metric as the fast enterprise SSDs .
Granted , this should n't be the case in a year , but that 's where it is right now .
( Also , the performance-per-slot is a lot higher for SSDs , which can translate into different $ and power and space savings .
)</tokentext>
<sentencetext>Even the best consumer-level SSDs like the Intel x-25m/e use a volatile RAM cache to speed up the writes.
In fact, with the cache disabled, random write IOPS drops to about 1200, which is only about three or four times as good as a 15k 2.5" drive.
The more expensive truly-enterprise SSD drives which don't need a volatile write cache cost at LEAST $20/GB, so the $/(safe random write iop) ratio is actually still pretty close, and cheap SATA drives may actually be even on that metric as the fast enterprise SSDs.
Granted, this shouldn't be the case in a year, but that's where it is right now.
(Also, the performance-per-slot is a lot higher for SSDs, which can translate into different $ and power and space savings.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368711</id>
	<title>Re:High failure rate</title>
	<author>fractoid</author>
	<datestamp>1245250860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I've heard that the failure rate on SSDs can be as high as 20%.</p></div><p>As Heinlein put it wonderfully in 'Tunnel in the Sky':</p><div class="quote"><p><i>The death rate is the same for us as for anybody... one person, one death, sooner or later.</i> - Cpt. Helen Walker</p></div>
	</htmltext>
<tokenext>I 've heard that the failure rate on SSD 's can be as high as 20 \ % .As Heinlein put it wonderfully in 'Tunnel in the Sky ' : The death rate is the same for us as for anybody ... one person , one death , sooner or later .
- Cpt .
Helen Walker</tokentext>
<sentencetext>I've heard that the failure rate on SSD's can be as high as 20%.
As Heinlein put it wonderfully in 'Tunnel in the Sky': The death rate is the same for us as for anybody ... one person, one death, sooner or later. - Cpt. Helen Walker
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368337</id>
	<title>Re:High failure rate</title>
	<author>Bigjeff5</author>
	<datestamp>1245246660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I've never heard of a 20% fail rate for SSDs. I've heard of wear concerns, as each little bit on the drive can only be written a set number of times (it's at 10,000 or so, if I remember correctly). However, thanks to the magic of wear leveling and the large number of separate chips in an SSD, you can fill up your drive completely and you will have only written to each bit exactly once. That means you could theoretically fill your SSD up 10,000 times before you would expect failure. Reality is a bit lower than that, maybe 3,000-5,000 times due to having to TRIM to re-arrange the bits, but it's still significant.</p><p>Of course, even with the performance hit TFA talks about after filling your SSD (which is fixed with the TRIM function TFA also talks about), the fastest spinning disks are still much, much slower than all but the very worst SSDs out there.</p><p>Anyway, the 20% fail rate may have been a specific manufacturer of SSDs; there are already some really shitty ones out there.</p><p>Lastly,</p><div class="quote"><p>Also doesn't one of the hardware manufacturers (Samsung I think) have a patent on SSD so no one else can make the drives anyway. Proprietary == Dead</p></div><p>You may need to get some more education about how patents work, because if that were true IBM would not have the fastest SSD on the market. See, they do this thing called licensing, which basically means company Y purchases an agreement from company X to use their technology to manufacture a product. It creates an incentive for company X to allow other manufacturers to use their technology, flooding the market with both quality and crap, but ultimately lowering the price and speeding innovation at the high-quality end (and improving the quality of the cheap stuff; it works both ways, usually).</p><p>It's actually the reason patents exist. We only get in a fuss when people patent stuff that either a) should never need a patent (which means the patent holder can sue for damages for infringement) or b) some company goes around buying patents from legitimate inventors for the sole purpose of hoping said patents become infringed upon by an unwitting third party. The former is a failure in the patent system, and the latter is patent trolling, which is an unethical and disgusting abuse of the process.</p>
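<p>Rough lifetime arithmetic for the figures above (the 10,000-cycle endurance and the 3,000-5,000 effective fills are this post's estimates; the drive size and daily workload below are invented):</p><pre>
PE_CYCLES = 10_000                         # per-cell write endurance (estimate)
EFFECTIVE_FILLS = int(PE_CYCLES * 0.4)     # haircut to ~4,000 full drive fills
DRIVE_GB = 80                              # hypothetical drive
DAILY_WRITES_GB = 20                       # hypothetical workload

lifetime_days = EFFECTIVE_FILLS * DRIVE_GB / DAILY_WRITES_GB
print(f"about {lifetime_days / 365:.0f} years at {DAILY_WRITES_GB} GB/day")
</pre>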
	</htmltext>
<tokenext>I 've never heard of a 20 \ % fail rate for SSDs .
I 've heard of wear concerns , as each little bit on the drive can only be written a set number of times ( it 's at 10,000 or so , if I remember correctly ) .
However , thanks to the majic of wear leveling and the large amount of separate chips in an SSD drive , you can fill up your drive completely and you will have only written to each bit exactly once .
That means you could theoretically fill your SSD up 10,000 times before you would expect failure .
Reality is a bit lower than that , maybe 3,000-5,000 times due to having to TRIM to re-arrange the bits , but it 's still significant.Of course , even with the performance hit TFA talks about after filling your SSD ( which is fixed with the TRIM function TFA also talks about ) the fastest spinning disks are still much much slower than all but the very worst SSDs out there.Anyway , the 20 \ % fail rate may have been a specific manufacturer of SSDs , there are already some really shitty ones out there.Lastly,Also does n't one of the hardware manufactures ( Samsung I think ) have a patent on SSD so no one else can make the drives any way .
Proprietary = = DeadYou may need to get some more education about how patents work , because if that were true IBM would not have the fastest SSD on the markent .
See , they do this thing called licensing , which basically means company Y purchases an agreement from company X to use their technology to manufacture a product .
It creates an incentive for company X to allow other manufacturers to use their technology , flooding with the market with both quality and crap , but ultimately lowering the price and speeding innovation regardless of the high quality stuff ( and improving the quality of the cheap stuff , it works both ways usually ) .It 's actually the reason patents exist .
We only get in a fuss when people patent stuff that either a .
) should never need a patent ( which means the patentor can sue for damages for infringement ) or b .
) some company goes around buying patents from legitimate inventors for the sole purpose of hoping said patents become infringed upon by an unwitting third party .
The former is a failure in the patent system , and the latter is patent trolling , which is an unethical and disgusting abuse of the process .</tokentext>
<sentencetext>I've never heard of a 20\% fail rate for SSDs.
I've heard of wear concerns, as each little bit on the drive can only be written a set number of times (it's at 10,000 or so, if I remember correctly).
However, thanks to the majic of wear leveling and the large amount of separate chips in an SSD drive, you can fill up your drive completely and you will have only written to each bit exactly once.
That means you could theoretically fill your SSD up 10,000 times before you would expect failure.
Reality is a bit lower than that, maybe 3,000-5,000 times due to having to TRIM to re-arrange the bits, but it's still significant.Of course, even with the performance hit TFA talks about after filling your SSD (which is fixed with the TRIM function TFA also talks about) the fastest spinning disks are still much much slower than all but the very worst SSDs out there.Anyway, the 20\% fail rate may have been a specific manufacturer of SSDs, there are already some really shitty ones out there.Lastly,Also doesn't one of the hardware manufactures (Samsung I think) have a patent on SSD so no one else can make the drives any way.
Proprietary == DeadYou may need to get some more education about how patents work, because if that were true IBM would not have the fastest SSD on the markent.
See, they do this thing called licensing, which basically means company Y purchases an agreement from company X to use their technology to manufacture a product.
It creates an incentive for company X to allow other manufacturers to use their technology, flooding with the market with both quality and crap, but ultimately lowering the price and speeding innovation regardless of the high quality stuff (and improving the quality of the cheap stuff, it works both ways usually).It's actually the reason patents exist.
We only get in a fuss when people patent stuff that either a.) should never need a patent (which means the patentor can sue for damages for infringement) or b.) some company goes around buying patents from legitimate inventors for the sole purpose of hoping said patents become infringed upon by an unwitting third party.
The former is a failure in the patent system, and the latter is patent trolling, which is an unethical and disgusting abuse of the process.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369183</id>
	<title>Re:But its the future</title>
	<author>uncqual</author>
	<datestamp>1245255120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You kids today... I've been hearing about the death of spinning platters for <i>two</i> decades.
<br> <br>
Eventually they will virtually disappear as paper tape, cards and, more recently, floppies have -- but it will take a lot longer than most expect.
<br> <br>
Now get off my lawn.</htmltext>
<tokenext>You kids today... I 've been hearing about the death of spinning platters for two decades .
Eventually they will virtually disappear as paper tape , cards and , more recently , floppies have -- but it will take a lot longer than most expect .
Now get off my lawn .</tokentext>
<sentencetext>You kids today... I've been hearing about the death of spinning platters for two decades.
Eventually they will virtually disappear as paper tape, cards and, more recently, floppies have -- but it will take a lot longer than most expect.
Now get off my lawn.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368505</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368079</id>
	<title>Re:High failure rate</title>
	<author>Anonymous</author>
	<datestamp>1245244380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Proprietary == Dead</p><p>Yes, because NOBODY is going to buy a hard drive that runs at a speed equivalent to 35,000 RPM just because it's proprietary.</p></htmltext>
<tokenext>Proprietary = = DeadYes , because NOBODY is going to buy a hard drive that runs at a speed equivalent to 35,000 RPM just because its proprietary .</tokentext>
<sentencetext>Proprietary == DeadYes, because NOBODY is going to buy a hard drive that runs at a speed  equivalent to 35,000 RPM just because its proprietary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</id>
	<title>fragmentation?</title>
	<author>convolvatron</author>
	<datestamp>1245243360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>can someone explain why fragmentation in the mapping between logical blocks and physical addresses causes performance degradation?</p><p>is it an issue with logically sequential reads being spread across multiple pages?</p><p>a multi-level lookup to perform the mapping?</p><p>?</p></htmltext>
<tokenext>can someone explain why fragmentation in the mapping between logical blocks andphysical addresses causes performance degradation ? is it an issue with logically sequential reads being spread across multiple pages ? a multi-level lookup to perform the mapping ?
?</tokentext>
<sentencetext>can someone explain why fragmentation in the mapping between logical blocks and physical addresses causes performance degradation?
is it an issue with logically sequential reads being spread across multiple pages?
a multi-level lookup to perform the mapping?
?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371053</id>
	<title>Re:What I really want to know</title>
	<author>zdzichu</author>
	<datestamp>1245318120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In Linux, TRIM commands are issued by ext4 and btrfs. Btrfs also has two SSD modes for its allocator, but it is not meant for production yet. There are probably other Linux filesystems issuing TRIM, as support was implemented a few kernel releases ago.</p>
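<p>A quick way to see whether the kernel thinks a drive supports discard/TRIM, assuming a kernel new enough to expose the discard attributes in sysfs (Linux only; paths as I understand them):</p><pre>
from pathlib import Path

# Walk the block devices and report each one's discard granularity.
for dev in sorted(Path("/sys/block").iterdir()):
    attr = dev / "queue" / "discard_granularity"
    if attr.exists():
        value = int(attr.read_text())
        verdict = "discard/TRIM supported" if value else "no discard support"
        print(f"{dev.name}: {verdict} (granularity={value} bytes)")
</pre>
	</htmltext>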
<tokenext>In Linux TRIM commands are issued by ext4 and btrfs .
Btrfs also have two SSD modes for allocator , but is not meant for production now .
There are probably other linux filesystem issuing TRIM , as it 's implemented few kernel releases ago .</tokentext>
<sentencetext>In Linux TRIM commands are issued by ext4 and btrfs.
Btrfs also have two SSD modes for allocator, but is not meant for production now.
There are probably other linux filesystem issuing TRIM, as it's implemented few kernel releases ago.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367839</id>
	<title>I love trim</title>
	<author>Anonymous</author>
	<datestamp>1245242460000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>but I love bald pussy even more!</htmltext>
<tokenext>but I love bald pussy even more !</tokentext>
<sentencetext>but I love bald pussy even more!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369397</id>
	<title>Re:fragmentation?</title>
	<author>7 digits</author>
	<datestamp>1245257460000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Once upon a time, a technical subject on /. gave insightful and informative responses that were modded up. Time changes, I guess.</p><p>The "fragmentation" that SSD drives have doesn't really come from wear leveling, or from having to find some place to write things, but from the following properties:</p><p>* Filesystems read and write 4KiB pages.<br>* An SSD can read 4KiB pages FAST, many at a time; can write a 4KiB page once FAST; but can only erase whole 512KiB blocks, SLOWLY.</p><p>When the drive is mostly empty, the SSD has no trouble finding blank areas to store the 4KiB writes from the OS (it can even cheat with wear leveling to re-locate 4K pages to blank spaces when the OS re-writes the same block). After some usage, ALL OF THE DRIVE HAS BEEN WRITTEN TO ONCE. From the point of view of the SSD, the whole disk is full. From the point of view of the filesystem, there is unallocated space (for instance, space occupied by files that have been deleted).</p><p>At this point, when the OS sends a write command to a specific page, the SSD is forced to do the following:</p><p>* read the 512KiB block that contains the page<br>* erase the block (SLOW)<br>* modify the page<br>* write back the 512KiB block</p><p>Of course, various kludges/caches are used to limit the issue, but the end result is here: writes are getting slow, and small writes are getting very slow.</p><p>The TRIM command tells the SSD that some 4KiB page can be safely erased (because it contains data from a deleted file, for instance), and the SSD stores a map of the TRIM status of each page.</p><p>Then the SSD can do one of the following two things:</p><p>* If all the pages of a block are TRIMed, it can asynchronously erase the block. So the next 4KiB write can be relocated to that block's free space, and so can the next 127 4KiB writes.<br>* If a write request comes and there is no space to write data to, the drive can READ/ERASE/MODIFY/WRITE the block with the most TRIMed space, which will speed up the next few writes.<br>(Of course, you can have more complex algorithms that pre-erase, at the cost of additional wear.)</p>
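<p>To put toy numbers on the READ/ERASE/MODIFY/WRITE penalty above (all timings invented for illustration; only the ratio matters):</p><pre>
PAGES_PER_BLOCK = 128      # 512 KiB block / 4 KiB pages
T_READ_PAGE = 0.025        # ms, invented
T_WRITE_PAGE = 0.2         # ms, invented
T_ERASE_BLOCK = 2.0        # ms, invented, but much slower, as described

fresh_write = T_WRITE_PAGE                       # program one blank page
rmw = (PAGES_PER_BLOCK * T_READ_PAGE             # read the whole block out
       + T_ERASE_BLOCK                           # erase it (the slow part)
       + PAGES_PER_BLOCK * T_WRITE_PAGE)         # write the whole block back
print(f"fresh page write: {fresh_write} ms")
print(f"read/erase/modify/write: {rmw:.1f} ms ({rmw / fresh_write:.0f}x slower)")
</pre>
	</htmltext>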
<tokenext>Once upon a time , a technical subject on / .
gave insightful and informative responses that were modded up .
Time changes , I guess.The " fragmentation " that SSD drive have do n't really come from wear leveling , or from having to find some place to write things , but from the following properties : * Filesystems read and write 4KiB pages .
* SSD can read many time 4KiB pages FAST , can write ONCE 4KiB pages FAST , but can only erase a whole 512KiB blocks SLOWLY.When the drive is mostly empty , the SSD have no trouble finding blanks area to store the 4KiB write from the OS ( he can even cheat with wear leveling to re-locate 4K pages to blank spaces when the OS re-write the same block ) .
After some usage , ALL THE DRIVE HAVE BEEN WRITTEN TO ONCE .
From the point of view of the SSD all the disk is full .
From the point of view of the filesystem , there is unallocated space ( for instance , space occupied for files that have been deleted ) .At this point , when the OS send a write command to a specific page , the SSD is forced to to the following : * read the 512KiB block that contain the page * erase the block ( SLOW ) * modify the page * write back the 512KiB blockOf course , various kludges/caches are used to limit the issue , but the end result is here : writes are getting slow , and small writes are getting very slow.The TRIM command is a command that tell the SSD drive that some 4KiB page can be safely erased ( because it contains data from a delete file , for instance ) , and the SSD stores a map of the TRIM status of each page.Then the SSD can do one of the following two things : * If all the pages of a block are TRIMed , it can asynchronously erase the block .
So , the next 4KiB write can be relocated to that block with free space , and also the 127 next 4KiB writes .
* If a write request come and there is no space to write data to , the drive can READ/ERASE/MODIFY/WRITE the block with most TRIMed space , which will speed up the next few writes .
( of course , you can have more complex algorithms to pre-erase at the cost of additional wear )</tokentext>
<sentencetext>Once upon a time, a technical subject on /. gave insightful and informative responses that were modded up.
Time changes, I guess.The "fragmentation" that SSD drive have don't really come from wear leveling, or from having to find some place to write things, but from the following properties:* Filesystems read and write 4KiB pages.
* SSD can read many time 4KiB pages FAST, can write ONCE 4KiB pages FAST, but can only erase a whole 512KiB blocks SLOWLY.When the drive is mostly empty, the SSD have no trouble finding blanks area to store the 4KiB write from the OS (he can even cheat with wear leveling to re-locate 4K pages to blank spaces when the OS re-write the same block).
After some usage, ALL THE DRIVE HAVE BEEN WRITTEN TO ONCE.
From the point of view of the SSD all the disk is full.
From the point of view of the filesystem, there is unallocated space (for instance, space occupied for files that have been deleted).At this point, when the OS send a write command to a specific page, the SSD is forced to to the following:* read the 512KiB block that contain the page* erase the block (SLOW)* modify the page* write back the 512KiB blockOf course, various kludges/caches are used to limit the issue, but the end result is here: writes are getting slow, and small writes are getting very slow.The TRIM command is a command that tell the SSD drive that some 4KiB page can be safely erased (because it contains data from a delete file, for instance), and the SSD stores a map of the TRIM status of each page.Then the SSD can do one of the following two things:* If all the pages of a block are TRIMed, it can asynchronously erase the block.
So, the next 4KiB write can be relocated to that block with free space, and also the 127 next 4KiB writes.
* If a write request come and there is no space to write data to, the drive can READ/ERASE/MODIFY/WRITE the block with most TRIMed space, which will speed up the next few writes.
(of course, you can have more complex algorithms to pre-erase at the cost of additional wear)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368211</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368769</id>
	<title>Re:But its the future</title>
	<author>morgan_greywolf</author>
	<datestamp>1245251220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Mod parent up. SSDs are a very immature technology and are not, yet, ready for the enterprise data center. Wait a few years until the technology matures. Magnetic hard drives have been around for what? 30-40 years? They're stable and proven. How many multi-petabyte enterprise data centers have you seen running SSDs as their primary storage? None. Yeah, that's what I thought.</p><p>They also have a long way to go before they compete with magnetic hard drives in terms of cost.</p></htmltext>
<tokenext>Mod parent up .
SSDs are a very immature technology and are not , yet , ready for the enterprise data center .
Wait a few years until the technology matures .
Magnetic hard drives have been around for what ?
30-40 years ?
They 're stable and proven .
How many multi-petabyte enterprise data centers have you seen running SSDs as their primary storage ?
None. Yeah , that 's what I thought.They also have a long way to go before they compete with mangetic hard drives in terms of cost .</tokentext>
<sentencetext>Mod parent up.
SSDs are a very immature technology and are not, yet, ready for the enterprise data center.
Wait a few years until the technology matures.
Magnetic hard drives have been around for what?
30-40 years?
They're stable and proven.
How many multi-petabyte enterprise data centers have you seen running SSDs as their primary storage?
None. Yeah, that's what I thought.
They also have a long way to go before they compete with magnetic hard drives in terms of cost.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368209</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891</id>
	<title>What I really want to know</title>
	<author>Anonymous</author>
	<datestamp>1245242880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Which Linux filesystem works best with SSDs?  I don't intend to touch Win7.</p></htmltext>
<tokenext>Which Linux filesystem works best with SSDs ?
I do n't intend to touch Win7 .</tokentext>
<sentencetext>Which Linux filesystem works best with SSDs?
I don't intend to touch Win7.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368107</id>
	<title>Re:What I really want to know</title>
	<author>Anonymous</author>
	<datestamp>1245244680000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>NILFS - <a href="http://www.linux-mag.com/id/7345/" title="linux-mag.com">http://www.linux-mag.com/id/7345/</a> [linux-mag.com]</p></htmltext>
<tokenext>NILFS - http : //www.linux-mag.com/id/7345/ [ linux-mag.com ]</tokentext>
<sentencetext>NILFS - http://www.linux-mag.com/id/7345/ [linux-mag.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369153</id>
	<title>Re:What I really want to know</title>
	<author>RiotingPacifist</author>
	<datestamp>1245254700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Since when has ext been the best choice for anything? ext has always been about balance; I doubt it's the best choice for an SSD. I'd put my money on a <a href="http://en.wikipedia.org/wiki/Log-structured_file_system" title="wikipedia.org">log filesystem</a> [wikipedia.org], i.e. you couldn't be more wrong and the GP is correct, because NILFS2 will write to used blocks much less often than conventional filesystems. Of course ext will be better than FAT, because the file-allocation-table block is going to be a problem, and it turns out ext4 with COW will also be good (but not as good as a log system, and the journal itself will be a problem).</p>
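<p>For anyone unfamiliar with the log-structured idea: updates append a fresh copy instead of rewriting a block in place, which suits flash because a previously written page is never overwritten. A minimal sketch (illustrative only, not NILFS2's actual format):</p><pre>
log = []                 # the device, written strictly left to right
index = {}               # file name to position of its latest version

def write(name, data):
    log.append((name, data))     # always append; never modify in place
    index[name] = len(log) - 1   # the newest copy wins

write("a.txt", "v1")
write("a.txt", "v2")     # the old copy stays put until a cleaner reclaims it
print(log[index["a.txt"]])       # ('a.txt', 'v2')
</pre>
	</htmltext>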
<tokenext>since when has ext been the best choice for anything , ext has always been about balance i doubt its the best choice for SSD , Id put my money on a log filesystem [ wikipedia.org ] , e.g you could n't be more wrong and GP is correct because NILFS2 will write to used blocks much less often than conventional systems .
OFC ext will be better than FAT because file-allocation table block is going to be a problem and it turns out ext4 with COW will also be good ( but not as good as a log system and the journal itself will be a problem )</tokentext>
<sentencetext>since when has ext been the best choice for anything, ext has always been about balance i doubt its the best choice for SSD, Id put my money on a log filesystem [wikipedia.org], e.g you couldn't be more wrong and GP is correct because NILFS2 will write to used blocks much less often than conventional systems.
OFC ext will be better than FAT because file-allocation table block is going to be a problem and it turns out ext4 with COW will also be good (but not as good as a log system and the journal itself will be a problem)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368361</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368611</id>
	<title>Re:What I really want to know</title>
	<author>onefriedrice</author>
	<datestamp>1245249660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I've got ext4 on my SSD.  It performs very well, but nilfs is a better fit for an SSD.  I'll reformat to nilfs sometime within the next few kernel release cycles.  Nevertheless, ext4 is just fine--I even have journaling and all the other bells and whistles.  I'm not afraid of the additional wear as I suspect the drive will fail by some other technical malfunction long before the flash cells wear out.<br> <br>

By the way, it's true what they say: An SSD is the one component that will provide you with the most noticeable performance boost your computer has ever had, and it's one of the cheapest, too.  I just got a 30G for ~$120 and my root filesystem fits comfortably on it (obviously my data is on a spinning disk).  Now I boot in seconds and applications (yes, even Firefox) load instantly--makes "bloat" virtually irrelevant.  Seriously, I still like platter drives for their capacity, but you don't need a lot of space to store your root filesystem and you can't beat the performance improvement for just over a hundred bucks spent.<br> <br>

In my opinion, an SSD need no longer be considered a toy for early adopters.  I certainly don't consider myself an early adopter.  It just makes sense.  Obviously SSD drives aren't as "mature" as our beloved platter drives, but they're not exactly brand new technology either.</htmltext>
<tokenext>I 've got ext4 on my SSD .
It performs very well , but nilfs is a better fit for an SSD .
I 'll reformat to nilfs sometime within the next few kernel release cycles .
Nevertheless , ext4 is just fine--I even have journaling and all the other bells and whistles .
I 'm not afraid of the additional wear as I suspect the drive will fail by some other technical malfunction long before the flash cells wear out .
By the way , it 's true what they say : An SSD is the one component that will provide you with the most noticeable performance boost your computer has ever had , and it 's one of the cheapest , too .
I just got a 30G for ~ $ 120 and my root filesystem fits comfortably on it ( obviously my data is on a spinning disk ) .
Now I boot in seconds and applications ( yes , even Firefox ) load instantly--makes " bloat " virtually irrelevant .
Seriously , I still like platter drives for their capacity , but you do n't need a lot of space to store your root filesystem and you ca n't beat the performance improvement for just over a hundred bucks spent .
In my opinion , an SSD need no longer be considered a toy for early adopters .
I certainly do n't consider myself an early adopter .
It just makes sense .
Obviously SSD drives are n't as " mature " as our beloved platter drives , but they 're not exactly brand new technology either .</tokentext>
<sentencetext>I've got ext4 on my SSD.
It performs very well, but nilfs is a better fit for an SSD.
I'll reformat to nilfs sometime within the next few kernel release cycles.
Nevertheless, ext4 is just fine--I even have journaling and all the other bells and whistles.
I'm not afraid of the additional wear as I suspect the drive will fail by some other technical malfunction long before the flash cells wear out.
By the way, it's true what they say: An SSD is the one component that will provide you with the most noticeable performance boost your computer has ever had, and it's one of the cheapest, too.
I just got a 30G for ~$120 and my root filesystem fits comfortably on it (obviously my data is on a spinning disk).
Now I boot in seconds and applications (yes, even Firefox) load instantly--makes "bloat" virtually irrelevant.
Seriously, I still like platter drives for their capacity, but you don't need a lot of space to store your root filesystem and you can't beat the performance improvement for just over a hundred bucks spent.
In my opinion, an SSD need no longer be considered a toy for early adopters.
I certainly don't consider myself an early adopter.
It just makes sense.
Obviously SSD drives aren't as "mature" as our beloved platter drives, but they're not exactly brand new technology either.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368361</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368075</id>
	<title>Re:Why Windows 7 in the summary?</title>
	<author>mrmeval</author>
	<datestamp>1245244380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Because someone got paid to do it. You don't think /. editors work for free, do you?</p></htmltext>
<tokenext>Because someone got paid to do it .
You do n't think / .
editors work for free do you ?</tokentext>
<sentencetext>Because someone got paid to do it.
You don't think /. editors work for free do you?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368661</id>
	<title>Re:fragmentation?</title>
	<author>Anonymous</author>
	<datestamp>1245250260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem is not the indexing, or the tables keeping track of where everything is, no matter how massive they may become. It's that simply deleting data in FLASH memory does not make that area available immediately.</p><p>SSDs use FLASH memory. "Empty" FLASH memory contains all 1's. When you write data to "empty" FLASH memory, you write 0's where they are needed, and leave the 1's alone.</p><p>As somebody else mentioned, when the system needs to modify data, it simply marks the old data as deleted and writes the new data to a fresh location. Very fast.</p><p>The problems start when you are out of fresh locations. You cannot write 1's to FLASH memory, you can only write 0's. So if you want to reuse a deleted location, you must first "erase" it and set it back to all 1's, so you can then write 0's where they need to be.</p><p>But FLASH memory cannot be erased one byte at a time, or even a few bytes at a time. When you erase FLASH memory, you must erase a minimum of an entire page. Pages are typically some multiple of 256 bytes.</p><p>After you have used your FLASH memory for a while, there will be no pages that are completely unused. So if you want to re-use some deleted bytes in a page, you need to read the page into RAM, erase the page, incorporate the new data into the RAM copy, then write the page back out to FLASH.</p><p>Biggest problem of all: erasing FLASH is S-L-O-W. Doing these read-erase-write gymnastics in an on-demand fashion is horribly inefficient.</p><p>New algorithms, either in the FLASH firmware or in the OS file system, can continuously scan the pages in a FLASH memory and systematically defrag the system, putting the live data together into common pages, leaving other pages completely deleted. This is a bit of a performance hit if you happen to need live data at the moment it is being moved (but the hit is orders of magnitude better than defragging a rotating medium, which by definition means that at any given time the R/W head is anywhere but where you need it to be).</p><p>The pages that are entirely deleted can be erased in the background with zero performance impact. And ideally there are then always free pages available and nobody has to wait for something to open up.</p>
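<p>A bit-level illustration of that 1's-and-0's rule (my own sketch, using 8-bit "cells" for brevity; programming flash is effectively a bitwise AND):</p><pre>
ERASED = 0b1111_1111            # an erased flash cell: all ones

def program(cell, data):
    # Programming can only pull bits from 1 to 0, i.e. bitwise AND.
    return cell &amp; data

cell = program(ERASED, 0b1010_0110)     # works fine on an erased cell
retry = program(cell, 0b1111_0000)      # needs some 0s flipped back to 1...
print(f"{retry:08b} instead of 11110000: must erase the whole page first")
</pre>
	</htmltext>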
<tokenext>The problem is not the indexing , or the tables keeping track of where everything is , no matter how massive they may become .
It 's that simply deleting data in FLASH memory does not make that area available immediately.SSD 's use FLASH memory .
" Empty " FLASH memory contains all 1 's .
When you write data to " empty " FLASH memory , you write 0 's where they are needed , and leave the 1 's alone.As somebody else mentioned , when the system needs to modify data , it simply marks the old data as deleted and writes the new data to a fresh location .
Very fast.The problems start when you are out of fresh locations .
You can not write 1 's to FLASH memory , you can only write 0 's .
So if you want to reuse a deleted location , you must first " erase " it and set it back to all 1 's , so you can then write 0 's where they need to be.But FLASH memory can not be erased one byte at a time , or even a few bytes at a time .
When you erase FLASH memory , you must erase a minimum of an entire page .
Pages are typically some multiple of 256 bytes.After you have used your FLASH memory for a while , there will be no pages that are completely unused .
So if you want to re-use some deleted bytes in a page , you need to read the page into RAM , erase the page , incorporate the new data into the RAM copy , then write the page back out to FLASH.Biggest problem of all : Erasing FLASH is S-L-O-W. Doing these read-erase-write gymnastics in an on-demand fashion is horribly inefficient.New algorithms , either in the FLASH firmware , or in the OS file system , can continuously scan the pages in a FLASH memory and systematically defrag the system , putting the live data together into common pages , leaving other pages completely deleted .
This is a bit of a performance hit if you happen to need live data at the moment it is being moved ( but the hit is orders of magnitude better than defragging a rotating media , which by definition means that at any given time the R/W head is anywhere but where you need it to be ) .The pages that are entirely deleted can be erased in the background with zero performance impact .
And ideally there are then always free pages available and nobody has to wait for something to open up .</tokentext>
<sentencetext>The problem is not the indexing, or the tables keeping track of where everything is, no matter how massive they may become.
It's that simply deleting data in FLASH memory does not make that area available immediately.SSD's use FLASH memory.
"Empty" FLASH memory contains all 1's.
When you write data to "empty" FLASH memory, you write 0's where they are needed, and leave the 1's alone.As somebody else mentioned, when the system needs to modify data, it simply marks the old data as deleted and writes the new data to a fresh location.
Very fast.The problems start when you are out of fresh locations.
You cannot write 1's to FLASH memory, you can only write 0's.
So if you want to reuse a deleted location, you must first "erase" it and set it back to all 1's, so you can then write 0's where they need to be.But FLASH memory cannot be erased one byte at a time, or even a few bytes at a time.
When you erase FLASH memory, you must erase a minimum of an entire page.
Pages are typically some multiple of 256 bytes.After you have used your FLASH memory for a while, there will be no pages that are completely unused.
So if you want to re-use some deleted bytes in a page, you need to read the page into RAM, erase the page, incorporate the new data into the RAM copy, then write the page back out to FLASH.Biggest problem of all:  Erasing FLASH is S-L-O-W.  Doing these read-erase-write gymnastics in an on-demand fashion is horribly inefficient.New algorithms, either in the FLASH firmware, or in the OS file system, can continuously scan the pages in a FLASH memory and systematically defrag the system, putting the live data together into common pages, leaving other pages completely deleted.
This is a bit of a performance hit if you happen to need live data at the moment it is being moved (but the hit is orders of magnitude better than defragging a rotating media, which by definition means that at any given time the R/W head is anywhere but where you need it to be).The pages that are entirely deleted can be erased in the background with zero performance impact.
And ideally there are then always free pages available and nobody has to wait for something to open up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368505</id>
	<title>Re:But its the future</title>
	<author>timmarhy</author>
	<datestamp>1245248520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Nope. Tape is STILL the only way to back up your data if you're serious. I've been hearing about the death of the spinning platters for a decade now and it's still just around the corner, much like fusion and peak oil.</htmltext>
<tokenext>nope .
tape is STILL the only way to backup your data if your serious .
i 've been hearing about the death of the spinning platters for a decade now and it 's still just around the corner , much like fusion and peak oil .</tokentext>
<sentencetext>nope.
tape is STILL the only way to backup your data if your serious.
i've been hearing about the death of the spinning platters for a decade now and it's still just around the corner, much like fusion and peak oil.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28373379</id>
	<title>Re:fragmentation?</title>
	<author>sjames</author>
	<datestamp>1245337140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In a nutshell, the page size of the flash is larger than the logical sector size, but flash can only erase whole pages at a time (and erase isn't a fast operation). So when blocks get re-written, the old content doesn't go away by default. Instead, the logical-to-physical mapping is changed to point to an already-blank area and the old content is marked for reclamation.</p><p>When the last logical block in a page is invalidated, the page can be scheduled to be erased and returned to the available list.</p><p>The catch in all of this is that the accounting information is ALSO stored in flash. That would be a complete showstopper except that with flash, a bit can always be written from a 1 to a zero (it's the 0-&gt;1 transition that requires an erase). Updating the accounting information doesn't require an erase-then-write sequence; you just leave a pointer as all ones to signify that this is the current data. To update, choose another page, write the data there, and then write its address into the pointer. The catch is that now, in order to locate the current status, you read the head of the chain and follow the pointer. Repeat until the pointer is all ones. All of that indirection takes time.</p>
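<p>A sketch of that pointer-chain trick (illustrative only; all-ones plays the role of an unwritten flash field, and each update only ever turns 1-bits into 0-bits):</p><pre>
ALL_ONES = 0xFFFF                       # an unwritten pointer field

chain = [{"data": "v1", "next": ALL_ONES}]

def update(new_data):
    # Append the new copy, then write its index into the old terminator.
    # Both steps only clear bits, so no erase is needed.
    chain.append({"data": new_data, "next": ALL_ONES})
    chain[-2]["next"] = len(chain) - 1

def current():
    i = 0
    while chain[i]["next"] != ALL_ONES:  # follow pointers to the newest copy
        i = chain[i]["next"]
    return chain[i]["data"]

update("v2")
update("v3")
print(current())                         # v3, found only after two hops
</pre>
	</htmltext>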
<tokenext>In a nutshell , the page size of the flash is larger than the logical sector size , but flash can only erase whole pages at a time ( and erase is n't a fast operation ) .
So when blocks get re-written , the old content does n't go away by default .
Instead , the logical-to-physical mapping is changed to point to an already blank area and the old content is marked for reclamation .
When the last logical block in a page is invalidated , the page can be scheduled to be erased and returned to the available list .
The catch in all of this is that the accounting information is ALSO stored in flash .
That would be a complete showstopper except that with flash , a bit can always be written from a 1 to a 0 ( it 's the 0- &gt; 1 transition that requires an erase ) .
Updating the accounting information does n't require an erase-then-write sequence ; you just leave a pointer as all ones to signify that this is the current data .
To update , choose another page , write the data there , and then write its address into the pointer .
The catch is that now , in order to locate the current status , you read the head of the chain and follow the pointer .
Repeat until the pointer is all ones .
All of that indirection takes time .</tokentext>
<sentencetext>In a nutshell, the page size of the flash is larger than the logical sector size, but flash can only erase whole pages at a time (and erase isn't a fast operation).
So when blocks get re-written, the old content doesn't go away by default.
Instead, the logical-to-physical mapping is changed to point to an already blank area and the old content is marked for reclamation.
When the last logical block in a page is invalidated, the page can be scheduled to be erased and returned to the available list.
The catch in all of this is that the accounting information is ALSO stored in flash.
That would be a complete showstopper except that with flash, a bit can always be written from a 1 to a 0 (it's the 0-&gt;1 transition that requires an erase).
Updating the accounting information doesn't require an erase-then-write sequence; you just leave a pointer as all ones to signify that this is the current data.
To update, choose another page, write the data there, and then write its address into the pointer.
The catch is that now, in order to locate the current status, you read the head of the chain and follow the pointer.
Repeat until the pointer is all ones.
All of that indirection takes time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935</parent>
</comment>
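Since the chain-of-pointers scheme in that comment is easy to get lost in, here is a minimal Python sketch of the idea as described; everything in it (class names, the one-block-per-page mapping, list-based queues) is an illustrative simplification, not real controller firmware:

BLANK = None  # stands in for an all-ones (erased) pointer field

class MetaRecord:
    """One accounting record; a BLANK 'next' pointer marks the current one."""
    def __init__(self, phys_page):
        self.phys_page = phys_page
        self.next = BLANK  # left all ones until superseded

class ToyFTL:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))
        self.chain_heads = {}   # logical block -> head of its metadata chain
        self.erase_queue = []   # physical pages awaiting the slow erase

    def write(self, logical_block):
        # Never overwrite in place: point the logical block at a blank page
        # and mark the previous physical page for reclamation.
        new_rec = MetaRecord(self.free_pages.pop(0))
        head = self.chain_heads.get(logical_block)
        if head is None:
            self.chain_heads[logical_block] = new_rec
            return
        tail = head
        while tail.next is not BLANK:
            tail = tail.next
        self.erase_queue.append(tail.phys_page)  # old content reclaimable
        tail.next = new_rec  # 1->0 programming only, so no erase needed

    def locate(self, logical_block):
        # Finding the current data means walking the chain until the pointer
        # is still all ones -- the indirection cost the comment describes.
        rec = self.chain_heads[logical_block]
        hops = 0
        while rec.next is not BLANK:
            rec = rec.next
            hops += 1
        return rec.phys_page, hops

A quick usage example:

ftl = ToyFTL(num_pages=8)
for _ in range(3):
    ftl.write(logical_block=0)   # each rewrite lengthens the chain
print(ftl.locate(0))             # -> (2, 2): two hops of indirection

Each rewrite adds a hop to the lookup, which is the "indirection takes time" cost; presumably garbage collection eventually erases the queued pages and rewrites the metadata to collapse the chain, though the comment does not spell that step out.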
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369397
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368211
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368215
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368611
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368361
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28375013
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368107
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368283
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368337
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368197
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368075
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369549
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371053
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28401901
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368661
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368711
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28373379
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369717
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368769
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368209
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369301
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368147
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369111
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368109
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371851
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368621
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28374375
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368147
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368653
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369183
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368505
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369153
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368361
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28376047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369185
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368079
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_17_2222230_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369983
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_17_2222230.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367951
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368079
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368337
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368711
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368215
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_17_2222230.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367857
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368621
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369111
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368505
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369183
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368209
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368769
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368653
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_17_2222230.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367839
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_17_2222230.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367891
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368057
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368283
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371851
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368361
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369153
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368611
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369715
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369393
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371053
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368107
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28375013
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_17_2222230.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368029
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368147
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369301
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28374375
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368109
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368075
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_17_2222230.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28367935
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368199
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368699
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369717
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369983
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28401901
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28371715
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369185
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368197
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368661
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369549
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28376047
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28373379
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28368211
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_17_2222230.28369397
</commentlist>
</conversation>
