<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_30_1543246</id>
	<title>EXT4, Btrfs, NILFS2 Performance Compared</title>
	<author>timothy</author>
	<datestamp>1246378020000</datestamp>
	<htmltext>An anonymous reader writes <i>"Phoronix has published <a href="http://www.phoronix.com/vr.php?view=13997">Linux filesystem benchmarks</a> comparing XFS, EXT3, EXT4, Btrfs and NILFS2 filesystems. This is the first time that the new EXT4 and Btrfs and NILFS2 filesystems have been directly compared when it comes to their disk performance though the results may surprise. For the most part, EXT4 came out on top."</i></htmltext>
<tokentext>An anonymous reader writes " Phoronix has published Linux filesystem benchmarks comparing XFS , EXT3 , EXT4 , Btrfs and NILFS2 filesystems .
This is the first time that the new EXT4 and Btrfs and NILFS2 filesystems have been directly compared when it comes to their disk performance though the results may surprise .
For the most part , EXT4 came out on top .
"</tokentext>
<sentencetext>An anonymous reader writes "Phoronix has published Linux filesystem benchmarks comparing XFS, EXT3, EXT4, Btrfs and NILFS2 filesystems.
This is the first time that the new EXT4 and Btrfs and NILFS2 filesystems have been directly compared when it comes to their disk performance though the results may surprise.
For the most part, EXT4 came out on top.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923</id>
	<title>Do these benchmarks make any sense?</title>
	<author>Ed Avis</author>
	<datestamp>1246382400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>The first benchmark on page 2 is 'Parallel BZIP2 Compression'.  They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem?  Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time.  They then say which filesystems are fastest, but 'these margins were small'.  Well, not really surprising.  Are the results statistically significant or was it just luck?  (They mention running the tests several times, but don't give variance etc.)</p><p>All benchmarks are flawed, but I think these really could be improved.  Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem.  (It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)</p></htmltext>
<tokentext>The first benchmark on page 2 is 'Parallel BZIP2 Compression' .
They are testing the speed of running bzip2 , a CPU-intensive program , and drawing conclusions about the filesystem ?
Sure , there will be some time taken to read and write the large file from disk , but it is dwarfed by the computation time .
They then say which filesystems are fastest , but 'these margins were small' .
Well , not really surprising .
Are the results statistically significant or was it just luck ?
( They mention running the tests several times , but do n't give variance etc . )
All benchmarks are flawed , but I think these really could be improved .
Surely a good filesystem benchmark is one that exercises the filesystem and the disk , but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you 're using a different filesystem .
( It 's just about possible , e.g . if the filesystem gobbles lots of memory and causes your machine to thrash , but in the real world it 's a waste of time running these things . )</tokentext>
<sentencetext>The first benchmark on page 2 is 'Parallel BZIP2 Compression'.
They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem?
Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time.
They then say which filesystems are fastest, but 'these margins were small'.
Well, not really surprising.
Are the results statistically significant or was it just luck?
(They mention running the tests several times, but don't give variance etc.)
All benchmarks are flawed, but I think these really could be improved.
Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else - unless you believe in the possibility of some magic side-effect whereby the processor is slowed down because you're using a different filesystem.
(It's just about possible, e.g. if the filesystem gobbles lots of memory and causes your machine to thrash, but in the real world it's a waste of time running these things.)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530487</id>
	<title>Re:Another lame filesystem review</title>
	<author>hardburn</author>
	<datestamp>1246383960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The SSD benchmark is coming.</p><p>But never mind that, because TFA has some problems interpreting the data. If all the numbers are coming out the same, that indicates the bottleneck is somewhere other than IO. For instance, when requesting a small static file over Apache, the file is probably being fetched right out of the cache. This test might catch a few badly implemented filesystems or hard drive electronics, but the ones in the article might as well be thrown out.</p></htmltext>
<tokentext>The SSD benchmark is coming . But never mind that , because TFA has some problems interpreting the data .
If all the numbers are coming out the same , that indicates the bottleneck is somewhere other than IO .
For instance , when requesting a small static file over Apache , the file is probably being fetched right out of the cache .
This test might catch a few badly implemented filesystems or hard drive electronics , but the ones in the article might as well be thrown out .</tokentext>
<sentencetext>The SSD benchmark is coming.
But never mind that, because TFA has some problems interpreting the data.
If all the numbers are coming out the same, that indicates the bottleneck is somewhere other than IO.
For instance, when requesting a small static file over Apache, the file is probably being fetched right out of the cache.
This test might catch a few badly implemented filesystems or hard drive electronics, but the ones in the article might as well be thrown out.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793</id>
	<title>Btrfs</title>
	<author>Anonymous</author>
	<datestamp>1246381980000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>The version of Btrfs that they used was before their performance optimizations - 0.18.  But they now have 0.19 which is supposedly a lot faster and will be in the next kernel release.  There's about 5 months of development work between them:</p><p>#  v0.19 Released (June 2009) For 2.6.31-rc<br># v0.18 Released (Jan 2009) For 2.6.29-rc2</p></htmltext>
<tokentext>The version of Btrfs that they used was before their performance optimizations - 0.18 .
But they now have 0.19 which is supposedly a lot faster and will be in the next kernel release .
There 's about 5 months of development work between them : # v0.19 Released ( June 2009 ) For 2.6.31-rc # v0.18 Released ( Jan 2009 ) For 2.6.29-rc2</tokentext>
<sentencetext>The version of Btrfs that they used was before their performance optimizations - 0.18.
But they now have 0.19 which is supposedly a lot faster and will be in the next kernel release.
There's about 5 months of development work between them: # v0.19 Released (June 2009) For 2.6.31-rc # v0.18 Released (Jan 2009) For 2.6.29-rc2</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28538177</id>
	<title>Re:Do these benchmarks make any sense?</title>
	<author>MrKaos</author>
	<datestamp>1246378680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>They then say which filesystems are fastest, but 'these margins were small'.</p></div></blockquote><p>
They also said "All mount options and file-system settings were left at their defaults", and I struggled to see what the point is of doing performance tests to find the fastest file system if you are not going to even attempt to get the best performance you can out of each filesystem.</p><p>
Why not do a test that just uses dd to do a straight read from a target hard drive to a file(s) on the target filesystem to eliminate *any* variation with the source data?
Read, write and delete times are the most important things to know and copying a large file on the same file system. What about how successive small file writes performed while a large write is under way. What about how the file system performs when it is 25%, 50% and 95% full? Why not just use the exact same shell script with different target file systems? For everything else Reiser did, what about comparisons to reiserfs, it's still a pretty good file system.</p><p>
When I put my Studio systems together I spent time doing exactly the tests I outlined above to determine which file system would do the job. I actually thought this article might have been better than the tests I did, but as you rightly mentioned, most of the tests are too CPU bound and complicated to be of any use.</p>
	</htmltext>
<tokentext>They then say which filesystems are fastest , but 'these margins were small' .
They also said " All mount options and file-system settings were left at their defaults " , and I struggled to see what the point is of doing performance tests to find the fastest file system if you are not going to even attempt to get the best performance you can out of each filesystem .
Why not do a test that just uses dd to do a straight read from a target hard drive to a file ( s ) on the target filesystem to eliminate * any * variation with the source data ?
Read , write and delete times are the most important things to know and copying a large file on the same file system .
What about how successive small file writes performed while a large write is under way .
What about how the file system performs when it is 25 % , 50 % and 95 % full ?
Why not just use the exact same shell script with different target file systems ?
For everything else Reiser did , what about comparisons to reiserfs , it 's still a pretty good file system .
When I put my Studio systems together I spent time doing exactly the tests I outlined above to determine which file system would do the job .
I actually thought this article might have been better than the tests I did , but as you rightly mentioned , most of the tests are too CPU bound and complicated to be of any use .</tokentext>
<sentencetext>They then say which filesystems are fastest, but 'these margins were small'.
They also said "All mount options and file-system settings were left at their defaults", and I struggled to see what the point is of doing performance tests to find the fastest file system if you are not going to even attempt to get the best performance you can out of each filesystem.
Why not do a test that just uses dd to do a straight read from a target hard drive to a file(s) on the target filesystem to eliminate *any* variation with the source data?
Read, write and delete times are the most important things to know and copying a large file on the same file system.
What about how successive small file writes performed while a large write is under way.
What about how the file system performs when it is 25%, 50% and 95% full?
Why not just use the exact same shell script with different target file systems?
For everything else Reiser did, what about comparisons to reiserfs, it's still a pretty good file system.
When I put my Studio systems together I spent time doing exactly the tests I outlined above to determine which file system would do the job.
I actually thought this article might have been better than the tests I did, but as you rightly mentioned, most of the tests are too CPU bound and complicated to be of any use.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530539</id>
	<title>Re:Do these benchmarks make any sense?</title>
	<author>compro01</author>
	<datestamp>1246384080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A processor-intensive test will show which filesystem has the most overhead WRT the processor.  And as the test shows, they're all pretty much the same in that regard.</p></htmltext>
<tokentext>A processor-intensive test will show which filesystem has the most overhead WRT the processor .
And as the test shows , they 're all pretty much the same in that regard .</tokentext>
<sentencetext>A processor-intensive test will show which filesystem has the most overhead WRT the processor.
And as the test shows, they're all pretty much the same in that regard.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533095</id>
	<title>Re:JFS?</title>
	<author>Anonymous</author>
	<datestamp>1246392420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>I have been running XFS on mission critical systems for years and not lost any data.</p></div></blockquote><p>For the rest of us not running mission critical systems with battery back-up and no faulty graphics drivers... XFS was a source of head-aches, truncating open files on every single crash.<br>It wasn't even a bug, but a design decision. Maybe they've fixed it by now, but they definitely lost my trust.</p>
	</htmltext>
<tokentext>I have been running XFS on mission critical systems for years and not lost any data . For the rest of us not running mission critical systems with battery back-up and no faulty graphics drivers... XFS was a source of head-aches , truncating open files on every single crash . It was n't even a bug , but a design decision .
Maybe they 've fixed it by now , but they definitely lost my trust .</tokentext>
<sentencetext>I have been running XFS on mission critical systems for years and not lost any data.
For the rest of us not running mission critical systems with battery back-up and no faulty graphics drivers... XFS was a source of head-aches, truncating open files on every single crash.
It wasn't even a bug, but a design decision.
Maybe they've fixed it by now, but they definitely lost my trust.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531917</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529969</id>
	<title>Another lame filesystem comment</title>
	<author>greg1104</author>
	<datestamp>1246382580000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Btrfs includes support for TRIM on SSD, but that's a secondary addition.  The main purpose of Btrfs is to compete against Sun's ZFS in the area of robust fault tolerance.  If you look at the <a href="http://lkml.org/lkml/2007/6/12/242" title="lkml.org">original announcement</a> [lkml.org], you can see SSD support wasn't on the radar at all; that's strictly been an afterthought in the design.  Btrfs is absolutely designed to work on SATA drives and to compete head to head against ext3/ext4.</p></htmltext>
<tokentext>Btrfs includes support for TRIM on SSD , but that 's a secondary addition .
The main purpose of Btrfs is to compete against Sun 's ZFS in the area of robust fault tolerance .
If you look at the original announcement [ lkml.org ] , you can see SSD support was n't on the radar at all ; that 's strictly been an afterthought in the design .
Btrfs is absolutely designed to work on SATA drives and to compete head to head against ext3/ext4 .</tokentext>
<sentencetext>Btrfs includes support for TRIM on SSD, but that's a secondary addition.
The main purpose of Btrfs is to compete against Sun's ZFS in the area of robust fault tolerance.
If you look at the original announcement [lkml.org], you can see SSD support wasn't on the radar at all; that's strictly been an afterthought in the design.
Btrfs is absolutely designed to work on SATA drives and to compete head to head against ext3/ext4.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531315</id>
	<title>Dubious</title>
	<author>grotgrot</author>
	<datestamp>1246386060000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>I suspect their test methodology isn't very good, in particular the SQLite tests.  SQLite performance is largely based on when commits happen as at that point fsync is called at least twice and sometimes more (the database, journals and containing directory need to be consistent).  The disk has to rotate to the relevant point and write outstanding data to the platters before returning.  This takes a considerable amount of time relative to normal disk writing which is cached and write behind.  If you don't use the same partition for testing then the differing amount of sectors per physical track will affect performance.  Similarly a drive that lies about data being on the platters will seem to be faster, but is not safe should there be a power failure or similar abrupt stop.</p><p>Someone did file a <a href="http://www.sqlite.org/cvstrac/tktview?tn=3934" title="sqlite.org">ticket</a> [sqlite.org] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible.</p></htmltext>
<tokentext>I suspect their test methodology is n't very good , in particular the SQLite tests .
SQLite performance is largely based on when commits happen as at that point fsync is called at least twice and sometimes more ( the database , journals and containing directory need to be consistent ) .
The disk has to rotate to the relevant point and write outstanding data to the platters before returning .
This takes a considerable amount of time relative to normal disk writing which is cached and write behind .
If you do n't use the same partition for testing then the differing amount of sectors per physical track will affect performance .
Similarly a drive that lies about data being on the platters will seem to be faster , but is not safe should there be a power failure or similar abrupt stop . Someone did file a ticket [ sqlite.org ] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible .</tokentext>
<sentencetext>I suspect their test methodology isn't very good, in particular the SQLite tests.
SQLite performance is largely based on when commits happen as at that point fsync is called at least twice and sometimes more (the database, journals and containing directory need to be consistent).
The disk has to rotate to the relevant point and write outstanding data to the platters before returning.
This takes a considerable amount of time relative to normal disk writing which is cached and write behind.
If you don't use the same partition for testing then the differing amount of sectors per physical track will affect performance.
Similarly a drive that lies about data being on the platters will seem to be faster, but is not safe should there be a power failure or similar abrupt stop.
Someone did file a ticket [sqlite.org] at SQLite but from the comments in there you can see that what Phoronix did is not reproducible.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529855</id>
	<title>Comparing Apples and Oranges</title>
	<author>mpapet</author>
	<datestamp>1246382160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>All of the file systems are designed for specific tasks/circumstances.  I'm too lazy to dig up what's special about each, but they are most useful in specific niches.  Not that you _can't_ generalize, but calling ext4 the best of the bunch misses the whole point of the other file systems.</p></htmltext>
<tokentext>All of the file systems are designed for specific tasks/circumstances .
I 'm too lazy to dig up what 's special about each , but they are most useful in specific niches .
Not that you _ca n't _ generalize , but calling ext4 the best of the bunch misses the whole point of the other file systems .</tokentext>
<sentencetext>All of the file systems are designed for specific tasks/circumstances.
I'm too lazy to dig up what's special about each, but they are most useful in specific niches.
Not that you _can't_ generalize, but calling ext4 the best of the bunch misses the whole point of the other file systems.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532513</id>
	<title>Sexier technology</title>
	<author>kheldan</author>
	<datestamp>1246390140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Personally I'm holding out for the initial release of the MILFS2 filesystem. XD</htmltext>
<tokentext>Personally I 'm holding out for the initial release of the MILFS2 filesystem .
XD</tokentext>
<sentencetext>Personally I'm holding out for the initial release of the MILFS2 filesystem.
XD</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530101</id>
	<title>These all had slow benchmarks</title>
	<author>Anonymous</author>
	<datestamp>1246383000000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I need a filesystem with <i>killer</i> performance. Any suggestions?</p></htmltext>
<tokentext>I need a filesystem with killer performance .
Any suggestions ?</tokentext>
<sentencetext>I need a filesystem with killer performance.
Any suggestions?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530237</id>
	<title>ext4 on top</title>
	<author>Anonymous</author>
	<datestamp>1246383360000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>If the system crashes, any of ext4's files that were modified recently will be truncated to 0 bytes, but I guess that's okay because it's <i>fast</i>!</p></htmltext>
<tokentext>If the system crashes , any of ext4 's files that were modified recently will be truncated to 0 bytes , but I guess that 's okay because it 's fast !</tokentext>
<sentencetext>If the system crashes, any of ext4's files that were modified recently will be truncated to 0 bytes, but I guess that's okay because it's fast!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28534783</id>
	<title>Performance should not be determinant!</title>
	<author>MilesNaismith</author>
	<datestamp>1246356360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It doesn't matter how fast it is, if it isn't correct!

We as IT professionals should focus more on CORRECTNESS of
the terabytes of data we store not how many IO/s as long as it does
the job we need.  Ensuring correctness should be job #1.

Right now in production for me safe means ZFS.  When Linux delivers
a comparable stable tested filesystem I'll be all over it.  Right now it
still seems like the 1980's where 99% of people are obsessed over how
FAST they can make things.  I cringe every time I watch an admin
start "tuning" a filesystem to make it faster by flipping off sync and
other safety features.</htmltext>
<tokentext>It does n't matter how fast it is , if it is n't correct !
We as IT professionals should focus more on CORRECTNESS of the terabytes of data we store not how many IO/s as long as it does the job we need .
Ensuring correctness should be job # 1 .
Right now in production for me safe means ZFS .
When Linux delivers a comparable stable tested filesystem I 'll be all over it .
Right now it still seems like the 1980 's where 99 % of people are obsessed over how FAST they can make things .
I cringe every time I watch an admin start " tuning " a filesystem to make it faster by flipping off sync and other safety features .</tokentext>
<sentencetext>It doesn't matter how fast it is, if it isn't correct!
We as IT professionals should focus more on CORRECTNESS of
the terabytes of data we store not how many IO/s as long as it does
the job we need.
Ensuring correctness should be job #1.
Right now in production for me safe means ZFS.
When Linux delivers
a comparable stable tested filesystem I'll be all over it.
Right now it
still seems like the 1980's where 99% of people are obsessed over how
FAST they can make things.
I cringe every time I watch an admin
start "tuning" a filesystem to make it faster by flipping off sync and
other safety features.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28538223</id>
	<title>NILFS2 is great for write-heavy workloads</title>
	<author>Jacques Chester</author>
	<datestamp>1246378980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At least according to some rough microbenchmarking <a href="http://lists.luaforge.net/pipermail/kepler-project/2009-June/003452.html" title="luaforge.net">I've done myself</a> [luaforge.net]. My workload is to write raw CSV to disk as fast as possible. In testing, NILFS2 was nearly 20% faster than ext3 on a spinning disk.</p><p>It was also smoother. Under very heavy load ext3 seemingly batched up writes then flushed them all at once, causing my server process to drop from 99% to 70% utilisation. NILFS seemed to consume a roughly constant percentage of CPU the whole time, which is much more in line with what I want.</p><p>NILFS2 is not for everyone or for every purpose. But it suits my purpose. As usual, you should do the engineering thing: consider your needs, test the alternatives.</p></htmltext>
<tokentext>At least according to some rough microbenchmarking I 've done myself [ luaforge.net ] .
My workload is to write raw CSV to disk as fast as possible .
In testing , NILFS2 was nearly 20 % faster than ext3 on a spinning disk . It was also smoother .
Under very heavy load ext3 seemingly batched up writes then flushed them all at once , causing my server process to drop from 99 % to 70 % utilisation .
NILFS seemed to consume a roughly constant percentage of CPU the whole time , which is much more in line with what I want . NILFS2 is not for everyone or for every purpose .
But it suits my purpose .
As usual , you should do the engineering thing : consider your needs , test the alternatives .</tokentext>
<sentencetext>At least according to some rough microbenchmarking I've done myself [luaforge.net].
My workload is to write raw CSV to disk as fast as possible.
In testing, NILFS2 was nearly 20% faster than ext3 on a spinning disk.
It was also smoother.
Under very heavy load ext3 seemingly batched up writes then flushed them all at once, causing my server process to drop from 99% to 70% utilisation.
NILFS seemed to consume a roughly constant percentage of CPU the whole time, which is much more in line with what I want.
NILFS2 is not for everyone or for every purpose.
But it suits my purpose.
As usual, you should do the engineering thing: consider your needs, test the alternatives.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530731</id>
	<title>NILFS2 is pretty interesting</title>
	<author>Anonymous</author>
	<datestamp>1246384620000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>NILFS2 (http://www.nilfs.org/en/) is actually a pretty interesting filesystem.  It's a log-structured filesystem, meaning that it treats your disk as a big circular logging device.</p><p>Log structured filesystems were originally developed by the research community (e.g. see the paper on Sprite LFS here, which is the first example that I'm aware of: <a href="http://www.citeulike.org/user/Wombat/article/208320" title="citeulike.org">http://www.citeulike.org/user/Wombat/article/208320</a> [citeulike.org]) to improve disk performance.  The original assumption behind Sprite LFS was that you'll have lots of memory, so you'll be able to mostly service data reads from your cache rather than needing to go to disk; however, writes to files are still awkward as you typically need to seek around to the right locations on the disk.  Sprite LFS took the approach of buffering writes in memory for a time and then squirting a big batch of them onto the disk sequentially at once, in the form of a "log" - doing a big sequential write of all the changes onto the same part of the disk maximised the available write bandwidth.  This approach implies that data was not being altered in place, so it was also necessary to write - also into the log - new copies of the inodes whose contents were altered.  The new inode would point to the original blocks for unmodified areas of the file and include pointers to the new blocks for any parts of the file that got altered.  
You can find out the most recent state of a file by finding the inode for that file that has most recently been written to the log.</p><p>This design has a load of nice properties, such as:<br>* You get good write bandwidth, even when modifying small files, since you don't have to keep seeking the disk head to make in-place changes.<br>* The filesystem doesn't need a lengthy fsck to recover from crash (although it's not "journaled" like other filesystems, effectively the whole filesystem *is* one big journal and that gives you similar properties)<br>* Because you're not repeatedly modifying the same bit of disk it could potentially perform better and cause less wear on an appropriately-chosen flash device (don't know how much it helps on an SSD that's doing its own block remapping / wear levelling...).  One of the existing flash filesystems for Linux (JFFS2, I *think*) is log structured.</p><p>In the case of NILFS2 they've exploited the fact that inodes are rewritten when their contents are modified to give you historical snapshots that should be essentially "free" as part of the filesystem's normal operation.  They have the filesystem frequently make automatic checkpoints of the entire filesystem's state.  These will normally be deleted after a time but you have the option of making any of them permanent.  Obviously if you just keep logging all changes to a disk it'll get filled up, so there's typically a garbage collector daemon of some kind that "repacks" old data, deletes stuff that's no longer needed, frees disk space and potentially optimises file layout.  This is necessary for long term operation of a log structured filesystem, though not necessary if running read-only.</p><p>Another modern log structured FS is DragonflyBSD's HAMMER (http://www.dragonflybsd.org/hammer/), which is being ported to Linux as a SoC project, I think (http://hammerfs-ftw.blogspot.com/)</p></htmltext>
<tokenext>NILFS2 ( http : //www.nilfs.org/en/ ) is actually a pretty interesting filesystem .
It 's a log-structured filesystem , meaning that it treats your disk as a big circular logging device.Log structured filesystems were originally developed by the research community ( e.g .
see the paper on Sprite LFS here , which is the first example that I 'm aware of : http : //www.citeulike.org/user/Wombat/article/208320 [ citeulike.org ] ) to improve disk performance .
The original assumption behind Sprite LFS was that you 'll have lots of memory , so you 'll be able to mostly service data reads from your cache rather than needing to go to disk ; however , writes to files are still awkward as you typically need to seek around to the right locations on the disk .
Sprite LFS took the approach of buffering writes in memory for a time and then squirting a big batch of them onto the disk sequentially at once , in the form of a " log " - doing a big sequential write of all the changes onto the same part of the disk maximised the available write bandwidth .
This approach implies that data was not being altered in place , so it was also necessary to write - also into the log - new copies of the inodes whose contents were altered .
The new inode would point to the original blocks for unmodified areas of the file and include pointers to the new blocks for any parts of the file that got altered .
You can find out the most recent state of a file by finding the inode for that file that has most recently been written to the log.This design has a load of nice properties , such as : * You get good write bandwidth , even when modifying small files , since you do n't have to keep seeking the disk head to make in-place changes .
* The filesystem does n't need a lengthy fsck to recover from crash ( although it 's not " journaled " like other filesystems , effectively the whole filesystem * is * one big journal and that gives you similar properties ) * Because you 're not repeatedly modifying the same bit of disk it could potentially perform better and cause less wear on an appropriately-chosen flash device ( do n't know how much it helps on an SSD that 's doing its own block remapping / wear levelling... ) .
One of the existing flash filesystems for Linux ( JFFS2 , I * think * ) is log structured.In the case of NILFS2 they 've exploited the fact that inodes are rewritten when their contents are modified to give you historical snapshots that should be essentially " free " as part of the filesystem 's normal operation .
They have the filesystem frequently make automatic checkpoints of the entire filesystem 's state .
These will normally be deleted after a time but you have the option of making any of them permanent .
Obviously if you just keep logging all changes to a disk it 'll get filled up , so there 's typically a garbage collector daemon of some kind that " repacks " old data , deletes stuff that 's no longer needed , frees disk space and potentially optimises file layout .
This is necessary for long term operation of a log structured filesystem , though not necessary if running read-only.Another modern log structured FS is DragonflyBSD 's HAMMER ( http : //www.dragonflybsd.org/hammer/ ) , which is being ported to Linux as a SoC project , I think ( http : //hammerfs-ftw.blogspot.com/ )</tokentext>
<sentencetext>NILFS2 (http://www.nilfs.org/en/) is actually a pretty interesting filesystem.
It's a log-structured filesystem, meaning that it treats your disk as a big circular logging device.Log structured filesystems were originally developed by the research community (e.g.
see the paper on Sprite LFS here, which is the first example that I'm aware of: http://www.citeulike.org/user/Wombat/article/208320 [citeulike.org]) to improve disk performance.
The original assumption behind Sprite LFS was that you'll have lots of memory, so you'll be able to mostly service data reads from your cache rather than needing to go to disk; however, writes to files are still awkward as you typically need to seek around to the right locations on the disk.
Sprite LFS took the approach of buffering writes in memory for a time and then squirting a big batch of them onto the disk sequentially at once, in the form of a "log" - doing a big sequential write of all the changes onto the same part of the disk maximised the available write bandwidth.
This approach implies that data was not being altered in place, so it was also necessary to write - also into the log - new copies of the inodes whose contents were altered.
The new inode would point to the original blocks for unmodified areas of the file and include pointers to the new blocks for any parts of the file that got altered.
You can find out the most recent state of a file by finding the inode for that file that has most recently been written to the log.This design has a load of nice properties, such as:* You get good write bandwidth, even when modifying small files, since you don't have to keep seeking the disk head to make in-place changes.
* The filesystem doesn't need a lengthy fsck to recover from crash (although it's not "journaled" like other filesystems, effectively the whole filesystem *is* one big journal and that gives you similar properties)* Because you're not repeatedly modifying the same bit of disk it could potentially perform better and cause less wear on an appropriately-chosen flash device (don't know how much it helps on an SSD that's doing its own block remapping / wear levelling...).
One of the existing flash filesystems for Linux (JFFS2, I *think*) is log structured.In the case of NILFS2 they've exploited the fact that inodes are rewritten when their contents are modified to give you historical snapshots that should be essentially "free" as part of the filesystem's normal operation.
They have the filesystem frequently make automatic checkpoints of the entire filesystem's state.
These will normally be deleted after a time but you have the option of making any of them permanent.
Obviously if you just keep logging all changes to a disk it'll get filled up, so there's typically a garbage collector daemon of some kind that "repacks" old data, deletes stuff that's no longer needed, frees disk space and potentially optimises file layout.
This is necessary for long term operation of a log structured filesystem, though not necessary if running read-only.Another modern log structured FS is DragonflyBSD's HAMMER (http://www.dragonflybsd.org/hammer/), which is being ported to Linux as a SoC project, I think (http://hammerfs-ftw.blogspot.com/)</sentencetext>
</comment>
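The copy-on-write inode chain described in the comment above can be sketched in a few lines: every update appends data blocks plus a fresh inode to the log tail, and remembering an old inode's log position is what makes snapshots essentially free. This is a toy illustration of the general log-structured idea, not NILFS2's actual on-disk format; all names here (`LogFS`, `write`, `read`) are invented for the sketch.

```python
# Toy log-structured store: updates append data blocks and a new "inode"
# record to the tail of one log; nothing is ever modified in place.
class LogFS:
    def __init__(self):
        self.log = []     # append-only log of ("data", bytes) or ("inode", name, ptrs)
        self.latest = {}  # file name -> log index of its most recent inode

    def write(self, name, blocks):
        """Replace a file's contents by appending new blocks plus a new inode."""
        ptrs = []
        for b in blocks:
            self.log.append(("data", b))        # data goes to the log tail...
            ptrs.append(len(self.log) - 1)
        self.log.append(("inode", name, ptrs))  # ...followed by the new inode
        self.latest[name] = len(self.log) - 1

    def read(self, name, at=None):
        """Read via the most recent inode, or via a remembered checkpoint."""
        idx = self.latest[name] if at is None else at
        _, _, ptrs = self.log[idx]
        return [self.log[p][1] for p in ptrs]

fs = LogFS()
fs.write("a.txt", [b"v1"])
ckpt = fs.latest["a.txt"]             # a snapshot is just an old inode's position
fs.write("a.txt", [b"v2a", b"v2b"])   # old blocks stay until a GC pass reclaims them
```

A real implementation also needs the garbage-collector pass the comment mentions, since the log above only ever grows.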
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530359</id>
	<title>Re:Another lame filesystem review</title>
	<author>Directrix1</author>
	<datestamp>1246383660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>NILFS2 is made for SSDs, but Btrfs isn't.  NILFS2, because of how it stores files, should have a good read performance advantage due to there being no penalty for random access on an SSD, and if I'm not mistaken its write speed should be fast on just about anything.</p></htmltext>
<tokenext>NILFS2 is made for SSD , but Btrfs is n't .
NILFS2 , because of how it stores files , should have a good read performance advantage due to their being no penalty for random access on SSD , and if I 'm not mistaken its write speed should be fast on just about anything .</tokentext>
<sentencetext>NILFS2 is made for SSD, but Btrfs isn't.
NILFS2, because of how it stores files, should have a good read performance advantage due to their being no penalty for random access on SSD, and if I'm not mistaken its write speed should be fast on just about anything.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536921</id>
	<title>I'm surprised the filesystem is tested at all</title>
	<author>Otterley</author>
	<datestamp>1246368120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB).  Are they testing the filesystems or the buffer cache?  I don't see any indication that any of these filesystems are mounted with the "sync" flag.</p></htmltext>
<tokenext>Almost all of their tests involve working sets smaller than RAM ( the installed RAM size is 4GB , but the working sets are 2GB ) .
Are they testing the filesystems or the buffer cache ?
I do n't see any indication that any of these filesystems are mounted with the " sync " flag .</tokentext>
<sentencetext>Almost all of their tests involve working sets smaller than RAM (the installed RAM size is 4GB, but the working sets are 2GB).
Are they testing the filesystems or the buffer cache?
I don't see any indication that any of these filesystems are mounted with the "sync" flag.</sentencetext>
</comment>
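The cache-vs-filesystem concern above is easy to address in a benchmark harness: force writes through the page cache with fsync() (or a "sync" mount / O_SYNC open), so the measured time includes the device. A rough sketch of the idea; `timed_write` and the sizes are invented for illustration, not taken from the Phoronix test suite:

```python
# Without fsync, a write() that fits in RAM may only time the page cache;
# fsync blocks until the kernel has pushed the data toward the device.
import os
import tempfile
import time

def timed_write(path, data, sync):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    t0 = time.perf_counter()
    os.write(fd, data)
    if sync:
        os.fsync(fd)  # include the real write path in the measurement
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return elapsed

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "bench.dat")
    cached = timed_write(path, b"x" * (1 << 20), sync=False)  # likely cache only
    synced = timed_write(path, b"x" * (1 << 20), sync=True)   # includes device I/O
```

On a typical disk the synced number is much larger, which is exactly the gap between benchmarking the buffer cache and benchmarking the filesystem.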
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28697155</id>
	<title>btrfs is reiser4</title>
	<author>Anonymous</author>
	<datestamp>1247569860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I have conclusively proven that btrfs is actually a blatant repackaging of reiser4 in a cover up to avoid the political disaster of supporting the code of a convicted murderer. btrfs is 81.56% similar to reiser4. Here are the steps to reproduce. Please spread the word. http://pastebin.com/ff42272d http://pastebin.com/f27912488</p></htmltext>
<tokenext>I have conclusively proven that btrfs is actually a blatant repackaging of reiser4 in a cover up to avoid the political disaster of supporting the code of a convicted murderer .
btrfs is 81.56 % similar to reiser4 .
Here are the steps to reproduce .
Please spread the word .
http : //pastebin.com/ff42272d http : //pastebin.com/f27912488</tokentext>
<sentencetext>I have conclusively proven that btrfs is actually a blatant repackaging of reiser4 in a cover up to avoid the political disaster of supporting the code of a convicted murderer.
btrfs is 81.56% similar to reiser4.
Here are the steps to reproduce.
Please spread the word.
http://pastebin.com/ff42272d http://pastebin.com/f27912488</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531501</id>
	<title>So what - speed is not all in a file system</title>
	<author>krischik</author>
	<datestamp>1246386600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So what - when I was still using Linux, a working backup (incl. ACLs, xattrs, etc.) was the most important criterion, and XFS came out on top. xfsdump / xfsrestore has saved the day more than once.</p></htmltext>
<tokenext>So what - when was still using Linux a working backup ( incl .
ACL , Xattib etc .
pp ) was the most important criteria and XFS came up on top .
xfsdump / xfsrestore has save the day more then once .</tokentext>
<sentencetext>So what - when was still using Linux a working backup (incl.
ACL, Xattib etc.
pp) was the most important criteria and XFS came up on top.
xfsdump / xfsrestore has save the day more then once.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28535157</id>
	<title>Re:buttfs?</title>
	<author>larry bagina</author>
	<datestamp>1246358280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>does pussyfs have TRIM support?</htmltext>
<tokenext>does pussyfs have TRIM support ?</tokentext>
<sentencetext>does pussyfs have TRIM support?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530067</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809</id>
	<title>JFS?</title>
	<author>chrylis</author>
	<datestamp>1246382040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Kinda disappointed the article didn't discuss JFS.  After running into the fragility of XFS, I tried it out, and it's highly robust, fast, and easy on the CPU.</p></htmltext>
<tokenext>Kinda disappointed the article did n't discuss JFS .
After running into the fragility of XFS , I tried it out , and it 's highly robust , fast , and easy on the CPU .</tokentext>
<sentencetext>Kinda disappointed the article didn't discuss JFS.
After running into the fragility of XFS, I tried it out, and it's highly robust, fast, and easy on the CPU.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531917</id>
	<title>Re:JFS?</title>
	<author>Anonymous</author>
	<datestamp>1246388040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Fragility of XFS?  Compared to JFS?  LOL.  I have been running XFS on mission-critical systems for years and not lost any data.  I can't say that about <i>any</i> other filesystem.  JFS is relatively stable in the kernel these days, but it used to cause kernel faults <i>all</i> the time.  Problem is, there is no longer anyone maintaining JFS.</p><p>These benchmarks are very poorly done.  The results are all over the place, with sometimes a 100- or 1000-fold difference in speed between filesystems that perform remarkably similarly in other tests.  That indicates a problem with their testing methods.</p></htmltext>
<tokenext>Fragility of XFS ?
Compared to JFS ?
LOL. I have been running XFS on mission critical systems for years and not lost any data .
I ca n't say that about any other filesystem .
JFS is relatively stable in the kernel these days but it used to cause kernel faults all the time .
Problem is , there is no longer anyone maintaining JFS.These benchmarks are very poorly done .
The results are all over the place with sometimes a 100 or 1000 times difference in speed between filesystems that perform remarkably similar in other tests .
That indicates a problem with their testing methods .</tokentext>
<sentencetext>Fragility of XFS?
Compared to JFS?
LOL.  I have been running XFS on mission critical systems for years and not lost any data.
I can't say that about any other filesystem.
JFS is relatively stable in the kernel these days but it used to cause kernel faults all the time.
Problem is, there is no longer anyone maintaining JFS.These benchmarks are very poorly done.
The results are all over the place with sometimes a 100 or 1000 times difference in speed between filesystems that perform remarkably similar in other tests.
That indicates a problem with their testing methods.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809</parent>
</comment>
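The variance complaint in this thread (and in the parent about "running the tests several times") is easy to act on: publish per-run numbers and a spread, and only call two filesystems different when the gap exceeds the noise. A minimal sketch; the run times below are made up purely for illustration, and the 2-sigma cutoff is a crude rule of thumb, not a proper significance test:

```python
# Report mean and standard deviation per filesystem, and treat two results
# as distinguishable only if their means differ by more than the combined
# spread. Numbers are invented for the example.
import statistics

def summarize(runs):
    return statistics.mean(runs), statistics.stdev(runs)

def clearly_different(a_runs, b_runs):
    ma, sa = summarize(a_runs)
    mb, sb = summarize(b_runs)
    return abs(ma - mb) > 2 * (sa + sb)  # crude 2-sigma rule of thumb

ext4  = [41.2, 40.8, 41.5]   # seconds per run, purely illustrative
btrfs = [55.0, 54.1, 56.2]
```

With data like this a 100x gap would be unmistakable, while the "small margins" the article reports might well vanish into the run-to-run noise.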
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28539767</id>
	<title>BTRFS or ZFS or ....</title>
	<author>bagsta</author>
	<datestamp>1246441200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>As far as I can see from the comparison of these FSes, BTRFS is a promising file system for Linux and is still under development. Some say that it will be the ZFS of Linux or even better. I think time will tell.
<br>
<a href="http://storagemojo.com/2009/05/20/btrfs-vs-zfs-omg/" title="storagemojo.com" rel="nofollow">Others say</a> [storagemojo.com] that, now that Oracle owns Sun, Oracle can change the license of ZFS from <a href="http://www.sun.com/cddl/cddl.html" title="sun.com" rel="nofollow">CDDL</a> [sun.com] to <a href="http://www.gnu.org/licenses/gpl-2.0.html" title="gnu.org" rel="nofollow">GPL2</a> [gnu.org] and port it to Linux. But porting ZFS to Linux is <a href="http://blogs.sun.com/bonwick/entry/rampant_layering_violation" title="sun.com" rel="nofollow">another story</a> [sun.com]...</htmltext>
<tokenext>As far as I can see from the comparison of these FSes , BTRFS is a promising file system for Linux and is under development .
Some say that it will be the ZFS of Linux or even better .
I think time will say .
Some other say [ storagemojo.com ] , now that Oracle owns Sun , Oracle can change the license of ZFS from CDDL [ sun.com ] to GPL2 [ gnu.org ] and port to Linux .
But porting ZFS to Linux it 's another story [ sun.com ] .. .</tokentext>
<sentencetext>As far as I can see from the comparison of these FSes, BTRFS is a promising file system for Linux and is under development.
Some say that it will be the ZFS of Linux or even better.
I think time will say.
Some other say [storagemojo.com], now that Oracle owns Sun, Oracle can change the license of ZFS from CDDL [sun.com] to GPL2 [gnu.org] and port to Linux.
But porting ZFS to Linux it's another story [sun.com]...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530005</id>
	<title>Yeah but...</title>
	<author>Anonymous</author>
	<datestamp>1246382700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...Does it run Linux?</p></htmltext>
<tokenext>...Does it run Linux ?</tokentext>
<sentencetext>...Does it run Linux?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530379</id>
	<title>Who's stripping?</title>
	<author>clarkn0va</author>
	<datestamp>1246383660000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Yeah, I know I'm behind the times, but when did striping become stripping?</htmltext>
<tokenext>Yeah , I know I 'm behind the times , but when did striping become stripping ?</tokentext>
<sentencetext>Yeah, I know I'm behind the times, but when did striping become stripping?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28535165</id>
	<title>"Phoronix benchmark" is an oxymoron</title>
	<author>Anonymous</author>
	<datestamp>1246358340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Phoronix - conflation of "phoenix" and "moron".  I.e., a moron that rises from the ashes, refusing to die.</p></htmltext>
<tokenext>Phoronix - conflation of " phoenix " and " moron " .
I.e. , a moron that rises from the ashes , refusing to die .</tokentext>
<sentencetext>Phoronix - conflation of "phoenix" and "moron".
I.e., a moron that rises from the ashes, refusing to die.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530563</id>
	<title>Re:Btrfs</title>
	<author>Anonymous</author>
	<datestamp>1246384200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Btrfs tends to perform best at Bennigan's.</p></htmltext>
<tokenext>Btrfs tends to perform best at Bennigan 's .</tokentext>
<sentencetext>Btrfs tends to perform best at Bennigan's.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536955</id>
	<title>Re:Btrfs</title>
	<author>fatp</author>
	<datestamp>1246368360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Then 0.19 is not actually released (no one uses rc kernels, right?). We can only say it was not born at the right time.<br><br>BTW, since btrfs came from Oracle, and it performs so poorly with sqlite and postgresql, I would be interested in its performance with Oracle's own databases... Oracle, Berkeley DB, MySQL... It would be interesting to see it run well with Oracle RDBMS, but funny if it takes months to create the database (until 0.20 is out??)</htmltext>
<tokenext>Then 0.19 is not actually released ( no one use rc kernel , right ? ) .
We can only say it was not born in the right time.BTW , since btrfs came from oracle , and it performs so poorly with sqlite and postgresql , I would be interested its performance with Oracle 's own databases... oracle , Berkeley db , mysql... It would be interesting to see it runs well with Oracle RDBMS , but funny if it takes months to create the database ( unitl 0.20 is out ? ?
)</tokentext>
<sentencetext>Then 0.19 is not actually released (no one use rc kernel, right?).
We can only say it was not born in the right time.BTW, since btrfs came from oracle, and it performs so poorly with sqlite and postgresql, I would be interested its performance with Oracle's own databases... oracle, Berkeley db, mysql... It would be interesting to see it runs well with Oracle RDBMS, but funny if it takes months to create the database (unitl 0.20 is out??
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533733</id>
	<title>Re:JFS?</title>
	<author>Anonymous</author>
	<datestamp>1246395300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>I have been running XFS on mission critical systems for years and not lost any data.</p></div></blockquote><p>Even if anecdotes were evidence -- which they aren't -- this tells us nothing about how robust XFS is.  What if the reason you haven't lost data is simply that the hardware has operated perfectly?  You need to tell us how many times XFS has <i>saved</i> data.</p>
	</htmltext>
<tokenext>I have been running XFS on mission critical systems for years and not lost any data.Even if anecdotes were evidence -- which they are n't -- this tells us nothing about how robust XFS is .
What if the reason you have n't lost data is simply that the hardware has operated perfectly ?
You need to tell us how many times XFS has saved data .</tokentext>
<sentencetext>I have been running XFS on mission critical systems for years and not lost any data.Even if anecdotes were evidence -- which they aren't -- this tells us nothing about how robust XFS is.
What if the reason you haven't lost data is simply that the hardware has operated perfectly?
You need to tell us how many times XFS has saved data.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531917</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530037</id>
	<title>Re:Another lame filesystem review</title>
	<author>Freetardo Jones</author>
	<datestamp>1246382820000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>NILFS2 and Btrfs are both TRIM file systems optimized for SSD media. Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive. Any more than comparing NILFS2 or Btrfs to eXT3 on a SSD would be.</p></div><p>This statement doesn't make any sense since SSDs can use both the original SATA and SATA II interfaces.</p>
	</htmltext>
<tokenext>NILFS2 and Btrfs are both TRIM file systems optimized for SSD media .
Comparing them to other file systems on a SATA drive is borderline stupidity , because you would never use them on a SATA drive .
Any more than comparing NILFS2 or Btrfs to eXT3 on a SSD would be.This statement does n't make any sense since SSDs can use both the original SATA and SATA II interfaces .</tokentext>
<sentencetext>NILFS2 and Btrfs are both TRIM file systems optimized for SSD media.
Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive.
Any more than comparing NILFS2 or Btrfs to eXT3 on a SSD would be.This statement doesn't make any sense since SSDs can use both the original SATA and SATA II interfaces.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28537725</id>
	<title>Re:Why is JFS the red-headed stepchild?</title>
	<author>david.given</author>
	<datestamp>1246374360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?</p></div><p>I think because it's just not sexy.</p><p>But, as you say, if you look into it, it supports all the buzzwords. I use it for everything, and IME it's an excellent, lightweight, unobtrusive filesystem that gets the job done while staying out of my way (which is exactly what I want from a filesystem). It would be nice if it supported things like filesystem shrinking, which is very useful when rearranging partitions, and some of the new features like multiple roots in a single volume are <i>really</i> useful and I'd like JFS to support this, but I can live without them.</p><p>JFS also has one really compelling feature for me: it's <i>cheap</i>. CPU-wise, that is. Every benchmark I've seen shows that it's only a little slower than filesystems like XFS but it also uses way less CPU. (Plus it's much less code. Have you seen the <i>size</i> of XFS?) Given that I tend to use low-end machines, frequently embedded, this is good news for me. It's also good if you have lots of RAM --- an expensive filesystem is very noticeable if all your data is in cache and you're no longer I/O bound.</p><p>I hope it sees more love in the future. I'd be gutted if it bit-rotted and got removed from the kernel.</p>
	</htmltext>
<tokenext>Maybe it 's just a case of , " it 's a fine filesystem , but did n't really bring any compelling new features or performance gains to the table , so why bother " ? I think because it 's just not sexy .
But , as you say , if you look into it it supports all the buzzwords .
I use it for everything , and IME it 's an excellent , lightweight , unobtrusive filesystem that gets the job done while staying out of my way ( which is exactly what I want from a filesystem ) .
It would be nice if it supported things like filesystem shrinking , which is very useful when rearranging partitions , and some of the new features like multiple roots in a single volume are really useful and I 'd like JFS to support this , but I can live without them .
JFS also has one really compelling feature for me : it 's cheap .
CPU-wise , that is .
Every benchmark I 've seen show that it 's only a little slower than filesystems like XFS but it also uses way less CPU .
( Plus it 's much less code .
Have you seen the size of XFS ?
) Given that I tend to use low-end machines , frequently embedded , this is good news for me .
It 's also good if you have lots of RAM --- an expensive filesystem is very noticeable if all your data is in cache and you 're no longer I/O bound .
I hope it sees more love in the future .
I 'd be gutted if it bit-rotted and got removed from the kernel .</tokentext>
<sentencetext>Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?I think because it's just not sexy.
But, as you say, if you look into it it supports all the buzzwords.
I use it for everything, and IME it's an excellent, lightweight, unobtrusive filesystem that gets the job done while staying out of my way (which is exactly what I want from a filesystem).
It would be nice if it supported things like filesystem shrinking, which is very useful when rearranging partitions, and some of the new features like multiple roots in a single volume are really useful and I'd like JFS to support this, but I can live without them.
JFS also has one really compelling feature for me: it's cheap.
CPU-wise, that is.
Every benchmark I've seen show that it's only a little slower than filesystems like XFS but it also uses way less CPU.
(Plus it's much less code.
Have you seen the size of XFS?
) Given that I tend to use low-end machines, frequently embedded, this is good news for me.
It's also good if you have lots of RAM --- an expensive filesystem is very noticeable if all your data is in cache and you're no longer I/O bound.
I hope it sees more love in the future.
I'd be gutted if it bit-rotted and got removed from the kernel.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532191</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530393</id>
	<title>Re:Do these benchmarks make any sense?</title>
	<author>js_sebastian</author>
	<datestamp>1246383720000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>The first benchmark on page 2 is 'Parallel BZIP2 Compression'.  They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem?  Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time.  (...)  Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else.</p></div><p>That's one type of benchmark. But you also want a benchmark that shows the performance of CPU-intensive applications while the filesystem is under heavy use. Why? Because the filesystem code itself uses CPU, and you want to make sure it doesn't use too much of it.</p>
	</htmltext>
<tokenext>The first benchmark on page 2 is 'Parallel BZIP2 Compression' .
They are testing the speed of running bzip2 , a CPU-intensive program , and drawing conclusions about the filesystem ?
Sure , there will be some time taken to read and write the large file from disk , but it is dwarfed by the computation time .
( ... ) Surely a good filesystem benchmark is one that exercises the filesystem and the disk , but little else .
That 's one type of benchmark .
But you also want a benchmark that shows the performance of CPU-intensive appliations while the file system is under heavy use .
Why ? because the filesystem code itself uses CPU , and you want to make sure it does n't use too much of it .</tokentext>
<sentencetext>The first benchmark on page 2 is 'Parallel BZIP2 Compression'.
They are testing the speed of running bzip2, a CPU-intensive program, and drawing conclusions about the filesystem?
Sure, there will be some time taken to read and write the large file from disk, but it is dwarfed by the computation time.
(...)  Surely a good filesystem benchmark is one that exercises the filesystem and the disk, but little else.
That's one type of benchmark.
But you also want a benchmark that shows the performance of CPU-intensive appliations while the file system is under heavy use.
Why? because the filesystem code itself uses CPU, and you want to make sure it doesn't use too much of it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923</parent>
</comment>
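The point about filesystem code burning CPU can be made concrete by splitting wall-clock time from process CPU time for a compress-and-write job. A rough sketch with zlib standing in for bzip2 (all names here are invented for the illustration): if CPU time accounts for nearly all of the wall time, the benchmark is measuring the compressor, not the filesystem.

```python
# Compare process CPU time against wall-clock time for the same workload to
# see how much of a "disk" benchmark is actually computation.
import os
import tempfile
import time
import zlib

def profile(workload):
    w0, c0 = time.perf_counter(), time.process_time()
    workload()
    return time.perf_counter() - w0, time.process_time() - c0

def compress_job():
    # a CPU-heavy job with a little file I/O, like the parallel bzip2 test
    data = os.urandom(1 << 20)
    blob = zlib.compress(data, level=9)
    with tempfile.TemporaryFile() as f:
        f.write(blob)

wall, cpu = profile(compress_job)
# cpu / wall near 1.0 => the benchmark is CPU-bound, not filesystem-bound
```

Reporting that ratio alongside each result would make clear which of the article's tests actually exercise the filesystem.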
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530591</id>
	<title>so this means</title>
	<author>Anonymous</author>
	<datestamp>1246384260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>ext4 DOES or DOES NOT outperform the reiserfs?<br> <br> <br>what?</htmltext>
<tokenext>ext4 DOES or DOES NOT outperform the reiserfs ?
what ?</tokentext>
<sentencetext>ext4 DOES or DOES NOT outperform the reiserfs?
what?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529815</id>
	<title>Lots of formats?</title>
	<author>jabjoe</author>
	<datestamp>1246382040000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Surely an OS only needs a handful of formats, all closed and patent encumbered so no one else can read them?
Oh wait, that sucks.....</htmltext>
<tokenext>Surely an OS only needs a handful of formats , all closed and patent encumbered so no one else can read them ?
Oh wait , that sucks.... .</tokentext>
<sentencetext>Surely an OS only needs a handful of formats, all closed and patent encumbered so no one else can read them?
Oh wait, that sucks.....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28547763</id>
	<title>Re:JFS?</title>
	<author>Wolfrider</author>
	<datestamp>1246480380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Word - I use JFS for all my major filesystems, even USB/Firewire drives.  Works very well with VMware, and has a very fast FSCK as well.</p></htmltext>
<tokenext>Word - I use JFS for all my major filesystems , even USB/Firewire drives .
Works very well with VMware , and has a very fast FSCK as well .</tokentext>
<sentencetext>Word - I use JFS for all my major filesystems, even USB/Firewire drives.
Works very well with VMware, and has a very fast FSCK as well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</id>
	<title>Another lame filesystem review</title>
	<author>brunes69</author>
	<datestamp>1246381980000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>NILFS2 and Btrfs are both TRIM file systems optimized for SSD media. Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive. Any more than comparing NILFS2 or Btrfs to ext3 on a SSD would be.</p><p>It's like comparing the performance of motor oil and sewing machine oil to lubricate an engine or a sewing machine. They're not the same thing just because they are both "oil".</p></htmltext>
<tokenext>NILFS2 and Btrfs are both TRIM file systems optimized for SSD media .
Comparing them to other file systems on a SATA drive is borderline stupidity , because you would never use them on a SATA drive .
Any more than comparing NILFS2 or Btrfs to ext3 on a SSD would be .
It 's like comparing the performance of motor oil and sewing machine oil to lubricate an engine or a sewing machine .
They 're not the same thing just because they are both " oil " .</tokentext>
<sentencetext>NILFS2 and Btrfs are both TRIM file systems optimized for SSD media.
Comparing them to other file systems on a SATA drive is borderline stupidity, because you would never use them on a SATA drive.
Any more than comparing NILFS2 or Btrfs to ext3 on a SSD would be.
It's like comparing the performance of motor oil and sewing machine oil to lubricate an engine or a sewing machine.
They're not the same thing just because they are both "oil".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530079</id>
	<title>I'll be interested when ..</title>
	<author>Anonymous</author>
	<datestamp>1246382940000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>When these filesystems actually have matured enough to NOT have at least a dozen bugfix changesets in each revision of the kernel Changelog. Even ext3fs has received a few rather interesting corner-case fixes this year, so maybe ext4 will be reliable in 5 years or so.</p></htmltext>
<tokenext>When these filesystems actually have matured enough to NOT have at least a dozen bugfix changesets in each revision of the kernel Changelog .
Even ext3fs has received a few rather interesting corner-case fixes this year , so maybe ext4 will be reliable in 5 years or so .</tokentext>
<sentencetext>When these filesystems actually have matured enough to NOT have at least a dozen bugfix changesets in each revision of the kernel Changelog.
Even ext3fs has received a few rather interesting corner-case fixes this year, so maybe ext4 will be reliable in 5 years or so.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531927</id>
	<title>Re:JFS?</title>
	<author>diegocgteleline.es</author>
	<datestamp>1246388100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>JFS has been in "bugfix mode" for some time.</p></htmltext>
<tokenext>JFS has been in " bugfix mode " for some time .</tokentext>
<sentencetext>JFS has been in "bugfix mode" for some time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529729</id>
	<title>What, no ReiserFS?</title>
	<author>Anonymous</author>
	<datestamp>1246381800000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>you folks are killing me</p></htmltext>
<tokenext>you folks are killing me</tokentext>
<sentencetext>you folks are killing me</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533657</id>
	<title>Re:Another lame filesystem review</title>
	<author>Anonymous</author>
	<datestamp>1246394940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So just because something was made for SSD media it is of no interest how it behaves in other cases? We don't even want to know?</p><p>What if it actually behaves quite well? Wouldn't that be some interesting result?</p><p>If everyone had followed your advice we would have missed some great opportunities. Like the internet.</p></htmltext>
<tokenext>So just because something was made for SSD media it is of no interest how it behaves in other cases ?
We do n't even want to know ?
What if it actually behaves quite well ?
Would n't that be some interesting result ?
If everyone had followed your advice we would have missed some great opportunities .
Like the internet .</tokentext>
<sentencetext>So just because something was made for SSD media it is of no interest how it behaves in other cases?
We don't even want to know?
What if it actually behaves quite well?
Wouldn't that be some interesting result?
If everyone had followed your advice we would have missed some great opportunities.
Like the internet.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531015</id>
	<title>Re:Do these benchmarks make any sense?</title>
	<author>\_32nHz</author>
	<datestamp>1246385220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You need benchmarks to reflect your real world use. If you always run your benchmarks on idling systems then filesystems with on-the-fly compression would usually win. However they are not popular because this isn't a good trade-off for most people.

Parallel BZIP2 compression sounds like a good choice as it should stress memory and CPU, whilst giving a common IO pattern, and a fairly low inherent performance variance.

Obviously you are looking for a fairly small variance in performance, and there are a lot of other factors that must be accounted for before the results have any significance. Not publishing their data pretty much guarantees they don't know what they are doing.</htmltext>
<tokenext>You need benchmarks to reflect your real world use .
If you always run your benchmarks on idling systems then filesystems with on the fly compression would usually win .
However they are not popular because this is n't a good trade off for most people .
Parallel BZIP2 compression sounds like a good choice as it should stress memory and CPU , whilst giving a common IO pattern , and a fairly low inherent performance variance .
Obviously you are looking for a fairly small variance in performance , and there are a lot of other factors that must be accounted for before the results have any significance .
Not publishing their data pretty much guarantees they do n't know what they are doing .</tokentext>
<sentencetext>You need benchmarks to reflect your real world use.
If you always run your benchmarks on idling systems then filesystems with on the fly compression would usually win.
However they are not popular because this isn't a good trade off for most people.
Parallel BZIP2 compression sounds like a good choice as it should stress memory and CPU, whilst giving a common IO pattern, and a fairly low inherent performance variance.
Obviously you are looking for a fairly small variance in performance, and there are a lot of other factors that must be accounted for before the results have any significance.
Not publishing their data pretty much guarantees they don't know what they are doing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530661</id>
	<title>Protection against corruption?</title>
	<author>Anonymous</author>
	<datestamp>1246384380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What are the default mount options?</p><p>Are the Ubuntu default options sane*? I remember Linus ranting about stupid defaults for ext4, but couldn't find it anymore.</p><p>*sane being defined as: power outage doesn't leave you with a corrupt fs</p></htmltext>
<tokenext>What are the default mount options ?
Are the Ubuntu default options sane * ?
I remember Linus ranting about stupid defaults for ext4 , but could n't find it anymore .
* sane being defined as : power outage does n't leave you with a corrupt fs</tokentext>
<sentencetext>What are the default mount options?
Are the Ubuntu default options sane*?
I remember Linus ranting about stupid defaults for ext4, but couldn't find it anymore.
*sane being defined as: power outage doesn't leave you with a corrupt fs</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532223</id>
	<title>Re:Do these benchmarks make any sense?</title>
	<author>ckaminski</author>
	<datestamp>1246389120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&lt;quote&gt;All benchmarks are flawed&lt;/quote&gt;<br><br>I'd argue that this is true only if they don't disclose their biases and limitations of testing methodology.
	</htmltext>
<tokenext>All benchmarks are flawed
I 'd argue that this is true only if they do n't disclose their biases and limitations of testing methodology .</tokentext>
<sentencetext>All benchmarks are flawed
I'd argue that this is true only if they don't disclose their biases and limitations of testing methodology.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530583</id>
	<title>Re:Another lame filesystem review</title>
	<author>Anonymous</author>
	<datestamp>1246384200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>Others have pulled you up on the SATA/SSD remark so I won't cover that again, but you are also a little confused about those filesystems being optimised for SSD.</p><p>If you read the NILFS page (http://www.nilfs.org/en/about_nilfs.html), it says nothing about SSD.  It has features you might want on any storage; any benefits to SSD media are just a side effect.</p></htmltext>
<tokenext>Others have pulled you up on the SATA/SSD remark so I wo n't cover that again , but you are also a little confused about those filesystems being optimised for SSD .
If you read the NILFS page ( http://www.nilfs.org/en/about_nilfs.html ) , it says nothing about SSD .
It has features you might want on any storage ; any benefits to SSD media are just a side effect .</tokentext>
<sentencetext>Others have pulled you up on the SATA/SSD remark so I won't cover that again, but you are also a little confused about those filesystems being optimised for SSD.
If you read the NILFS page (http://www.nilfs.org/en/about_nilfs.html), it says nothing about SSD.
It has features you might want on any storage; any benefits to SSD media are just a side effect.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532191</id>
	<title>Why is JFS the red-headed stepchild?</title>
	<author>JSBiff</author>
	<datestamp>1246389000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Ok, I've been wondering this for a long time. IBM contributed JFS to Linux years ago, but no one ever seems to give it a thought as to using it. I used it on my computer for a while, and I can't say that I had any complaints (of course, one person's experience doesn't necessarily mean anything). When I looked into the technical features, it seemed to support lots of great things like journaling, Unicode filenames, large files, large volumes (although, granted, some of the newer filesystems *are* supporting larger files/volumes).</p><p>Don't get me wrong - some of the newer filesystems (ZFS, Btrfs, NILFS2) do have interesting features that aren't in JFS, and which are great reasons to use the newer systems, but still, it always seems like JFS is left out in the cold. Are there technical reasons people have found it lacking or something? Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?</p></htmltext>
<tokenext>Ok , I 've been wondering this for a long time .
IBM contributed JFS to Linux years ago , but no one ever seems to give it a thought as to using it .
I used it on my computer for a while , and I ca n't say that I had any complaints ( of course , one person 's experience does n't necessarily mean anything ) .
When I looked into the technical features , it seemed to support lots of great things like journaling , Unicode filenames , large files , large volumes ( although , granted , some of the newer filesystems * are * supporting larger files/volumes ) .
Do n't get me wrong - some of the newer filesystems ( ZFS , Btrfs , NILFS2 ) do have interesting features that are n't in JFS , and which are great reasons to use the newer systems , but still , it always seems like JFS is left out in the cold .
Are there technical reasons people have found it lacking or something ?
Maybe it 's just a case of , " it 's a fine filesystem , but did n't really bring any compelling new features or performance gains to the table , so why bother " ?</tokentext>
<sentencetext>Ok, I've been wondering this for a long time.
IBM contributed JFS to Linux years ago, but no one ever seems to give it a thought as to using it.
I used it on my computer for a while, and I can't say that I had any complaints (of course, one person's experience doesn't necessarily mean anything).
When I looked into the technical features, it seemed to support lots of great things like journaling, Unicode filenames, large files, large volumes (although, granted, some of the newer filesystems *are* supporting larger files/volumes).
Don't get me wrong - some of the newer filesystems (ZFS, Btrfs, NILFS2) do have interesting features that aren't in JFS, and which are great reasons to use the newer systems, but still, it always seems like JFS is left out in the cold.
Are there technical reasons people have found it lacking or something?
Maybe it's just a case of, "it's a fine filesystem, but didn't really bring any compelling new features or performance gains to the table, so why bother"?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533371</id>
	<title>Re:Comparing Apples and Oranges</title>
	<author>buchner.johannes</author>
	<datestamp>1246393680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Could you elaborate what the niches are for each?</p><p>Would it be technically possible to compare benchmarks with the Windows implementation of NTFS and FAT? Despite having a different underlying kernel?</p></htmltext>
<tokenext>Could you elaborate what the niches are for each ?
Would it be technically possible to compare benchmarks with the Windows implementation of NTFS and FAT ?
Despite having a different underlying kernel ?</tokentext>
<sentencetext>Could you elaborate what the niches are for each?
Would it be technically possible to compare benchmarks with the Windows implementation of NTFS and FAT?
Despite having a different underlying kernel?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529855</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536379</id>
	<title>Re:Why is JFS the red-headed stepchild?</title>
	<author>jabuzz</author>
	<datestamp>1246364760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Because as far as IBM are concerned JFS is not very interesting. I would point out the fact that the DMAPI implementation on JFS has bit rotted, and IBM don't even support HSM on it on Linux. For that you need to buy GPFS, which makes ZFS look completely ordinary.</p></htmltext>
<tokenext>Because as far as IBM are concerned JFS is not very interesting .
I would point out the fact that the DMAPI implementation on JFS has bit rotted , and IBM do n't even support HSM on it on Linux .
For that you need to buy GPFS , which makes ZFS look completely ordinary .</tokentext>
<sentencetext>Because as far as IBM are concerned JFS is not very interesting.
I would point out the fact that the DMAPI implementation on JFS has bit rotted, and IBM don't even support HSM on it on Linux.
For that you need to buy GPFS, which makes ZFS look completely ordinary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532191</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530067</id>
	<title>buttfs?</title>
	<author>Anonymous</author>
	<datestamp>1246382880000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>im more of a pussyfs person thank you.</htmltext>
<tokenext>im more of a pussyfs person thank you .</tokentext>
<sentencetext>im more of a pussyfs person thank you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531587</id>
	<title>Yet another content-free Phoronix fluff article</title>
	<author>Ant P.</author>
	<datestamp>1246386900000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Skip TFA - the conclusion is that these benchmarks are invalid.</p><p>At least they've improved since last time - they no longer benchmark <em>filesystems</em> using a Quake 3 timedemo.</p></htmltext>
<tokenext>Skip TFA - the conclusion is that these benchmarks are invalid .
At least they 've improved since last time - they no longer benchmark filesystems using a Quake 3 timedemo .</tokentext>
<sentencetext>Skip TFA - the conclusion is that these benchmarks are invalid.
At least they've improved since last time - they no longer benchmark filesystems using a Quake 3 timedemo.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533175</id>
	<title>Wait a second, What's up with SQL-lite test</title>
	<author>goombah99</author>
	<datestamp>1246392720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Talk about optimization or lack of it.  Take a look at the SQLite test.  EXT3 is something like 80 times faster than EXT4 or BTRFS.</p><p>What the heck is going on?!  PostgreSQL does not seem to show this performance enhancement.</p><p>Really, this is an insanely different score, to the effect that if it's real no one in their right mind would run SQL on anything but EXT3.</p><p>Something must be wrong with this test.</p></htmltext>
<tokenext>Talk about optimization or lack of it .
Take a look at the SQLite test .
EXT3 is something like 80 times faster than EXT4 or BTRFS .
What the heck is going on ? !
PostgreSQL does not seem to show this performance enhancement .
Really , this is an insanely different score , to the effect that if it 's real no one in their right mind would run SQL on anything but EXT3 .
Something must be wrong with this test .</tokentext>
<sentencetext>Talk about optimization or lack of it.
Take a look at the SQLite test.
EXT3 is something like 80 times faster than EXT4 or BTRFS.
What the heck is going on?!
PostgreSQL does not seem to show this performance enhancement.
Really, this is an insanely different score, to the effect that if it's real no one in their right mind would run SQL on anything but EXT3.
Something must be wrong with this test.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532223
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529969
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530539
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28535157
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530067
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531927
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536379
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532191
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530005
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530487
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531015
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530037
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533175
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28537725
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532191
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533657
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533733
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531917
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530583
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28547763
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530563
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533371
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529855
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530359
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533095
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531917
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_30_1543246_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28538177
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529923
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530393
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532223
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531015
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28538177
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530539
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530067
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28535157
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532513
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529797
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533657
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530583
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530487
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530005
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529969
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530037
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530359
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536921
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529809
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28532191
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536379
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28537725
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531917
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533733
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533095
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28547763
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531927
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531315
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529729
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28531587
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530079
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529793
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28536955
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530563
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533175
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28530731
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_30_1543246.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28529855
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_30_1543246.28533371
</commentlist>
</conversation>
