<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_14_2027255</id>
	<title>Google Switching To EXT4 Filesystem</title>
	<author>timothy</author>
	<datestamp>1263459000000</datestamp>
	<htmltext>An anonymous reader writes <i>"Google is in the process of <a href="http://digitizor.com/2010/01/14/google-switching-to-ext4-filesystem/">upgrading their existing EXT2 filesystem</a> to the new and improved EXT4 filesystem. Google has benchmarked three different filesystems &mdash; XFS, EXT4 and JFS. In their benchmarking, EXT4 and XFS performed equally well. However, in view of the easier upgrade path from EXT2 to EXT4, Google has <a href="http://lists.openwall.net/linux-ext4/2010/01/04/8">decided to go ahead with EXT4</a>."</i></htmltext>
<tokentext>An anonymous reader writes " Google is in the process of upgrading their existing EXT2 filesystem to the new and improved EXT4 filesystem .
Google has benchmarked three different filesystems    XFS , EXT4 and JFS .
In their benchmarking , EXT4 and XFS performed equally well .
However , in view of the easier upgrade path from EXT2 to EXT4 , Google has decided to go ahead with EXT4 .
"</tokentext>
<sentencetext>An anonymous reader writes "Google is in the process of upgrading their existing EXT2 filesystem to the new and improved EXT4 filesystem.
Google has benchmarked three different filesystems — XFS, EXT4 and JFS.
In their benchmarking, EXT4 and XFS performed equally well.
However, in view of the easier upgrade path from EXT2 to EXT4, Google has decided to go ahead with EXT4.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714</id>
	<title>As impressively as each other?! WTF?!</title>
	<author>Anonymous</author>
	<datestamp>1263463440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>From TFA:</p><div class="quote"><p>In their benchmarking, EXT4 and XFS performed, as impressively as each other.</p></div><p>WTF kind of retarded sentence is that?! Did Rob Smith help you write that article?!</p><p>In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.</p>
	</htmltext>
<tokentext>From TFA : In their benchmarking , EXT4 and XFS performed , as impressively as each other.WTF kind of retarded sentence is that ? !
Did Rob Smith help you write that article ?
! In their benchmarking of EXT4 and XFS , EACH performed as impressively as THE OTHER .</tokentext>
<sentencetext>From TFA:In their benchmarking, EXT4 and XFS performed, as impressively as each other.WTF kind of retarded sentence is that?!
Did Rob Smith help you write that article?
!In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776670</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>DrXym</author>
	<datestamp>1263551640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker. It's interesting to note that Google never felt it needed that functionality.</i>
<p>
I wouldn't be surprised if most of the data is transient, so why bother to recover it? If necessary, reimage the base OS - the transient stuff is going to get overwritten anyway.</p></htmltext>
<tokentext>The main advantage of EXT3 over EXT2 is that , with journaling , if you ever need to fsck the data , it goes a LOT quicker .
It 's interesting to note that Google never felt it needed that functionality .
I would n't be surprised if most of the data is transient , so why bother to recover it ?
If necessary , reimage the base OS - the transient stuff is going to get overwritten anyway .</tokentext>
<sentencetext>The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker.
It's interesting to note that Google never felt it needed that functionality.
I wouldn't be surprised if most of the data is transient, so why bother to recover it?
If necessary, reimage the base OS - the transient stuff is going to get overwritten anyway.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776030</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263586380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>it is creepy to me that I took this exact same upgrade path. I had all my files on ext2, I tested xfs, jfs, and ext4, I found that xfs and ext4 were about the same, but I went with ext4 because it was easiest to upgrade to.</p></htmltext>
<tokentext>it is creepy to me that I took this exact same upgrade path .
I had all my files on ext2 , I tested xfs , jfs , and ext4 , I found that xfs and ext4 were about the same , but I went with ext4 because it was easiest to upgrade to .</tokentext>
<sentencetext>it is creepy to me that I took this exact same upgrade path.
I had all my files on ext2, I tested xfs, jfs, and ext4, I found that xfs and ext4 were about the same, but I went with ext4 because it was easiest to upgrade to.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263463260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Oh fuck off. It's not like Google is going to upgrade their entire multiply-redundant infrastructure all at once. And ext4 is a very conservative and stable FS. The "upgrade" process is to simply mount your old ext3 volume as ext4, and let new writes take advantage of ext4 features. If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable. Not as good as XFS for preserving data integrity, but better than ext2.</htmltext>
<tokentext>Oh fuck off .
It 's not like Google is going to upgrade their entire multiply-redundant infrastructure all at once .
And ext4 is a very conservative and stable FS .
The " upgrade " process is to simply mount your old ext3 volume as ext4 , and let new writes take advantage of ext4 features .
If Google is actually still using ext2 rather than ext3 , ext4 will be significantly * more * reliable .
Not as good as XFS for preserving data integrity , but better than ext2 .</tokentext>
<sentencetext>Oh fuck off.
It's not like Google is going to upgrade their entire multiply-redundant infrastructure all at once.
And ext4 is a very conservative and stable FS.
The "upgrade" process is to simply mount your old ext3 volume as ext4, and let new writes take advantage of ext4 features.
If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable.
Not as good as XFS for preserving data integrity, but better than ext2.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772430</id>
	<title>NEXT UP</title>
	<author>kuzb</author>
	<datestamp>1263470760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>BREAKING NEWS:</p><p>Google switches to new softer 2-ply toilet paper to reduce employee chafing.</p></htmltext>
<tokentext>BREAKING NEWS : Google switches to new softer 2-ply toilet paper to reduce employee chafing .</tokentext>
<sentencetext>BREAKING NEWS:Google switches to new softer 2-ply toilet paper to reduce employee chafing.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771400</id>
	<title>Re:Time for a backup?</title>
	<author>nemmi</author>
	<datestamp>1263465900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No. No need to back it up. Google already has a backup. It is called the Dept. Of Justice (DOJ) . They are actually in the same building, but they just want to make sure the "terrorists" haven't made any "illegal searches" before you can have it back.</p></htmltext>
<tokentext>No .
No need to back it up .
Google already has a backup .
It is called the Dept .
Of Justice ( DOJ ) .
They are actually in the same building , but they just want to make sure the " terrorists " have n't made any " illegal searches " before you can have it back .</tokentext>
<sentencetext>No.
No need to back it up.
Google already has a backup.
It is called the Dept.
Of Justice (DOJ) .
They are actually in the same building, but they just want to make sure the "terrorists" haven't made any "illegal searches" before you can have it back.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770868</id>
	<title>Re:No ReiserFS?</title>
	<author>Icarium</author>
	<datestamp>1263463920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'd imagine contacting a prison for tech support could be a bit awkward.<br>(Yes, I know it's lame)</p></htmltext>
<tokentext>I 'd imagine contacting a prison for tech support could be a bit awkward .
( Yes , I know it 's lame )</tokentext>
<sentencetext>I'd imagine contacting a prison for tech support could be a bit awkward.
(Yes, I know it's lame)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770648</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263463260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yes because google will do an in-place upgrade of terabytes of data without taking their own backups. Retard.</p></htmltext>
<tokentext>Yes because google will do an in-place upgrade of terabytes of data without taking their own backups .
Retard .</tokentext>
<sentencetext>Yes because google will do an in-place upgrade of terabytes of data without taking their own backups.
Retard.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770844</id>
	<title>Re:No ReiserFS?</title>
	<author>Anonymous</author>
	<datestamp>1263463860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It's not that the Creator was convicted of a crime, per se.</p><p>All the Namesys people are working for other employers now, on other things.</p><p>Who's interested in maintaining or enhancing it? Nobody as far as I can tell.</p></htmltext>
<tokentext>It 's not that the Creator was convicted of a crime , per se.All the Namesys people are working for other employers now , on other things.Who 's interested in maintaining or enhancing it ?
Nobody as far as I can tell .</tokentext>
<sentencetext>It's not that the Creator was convicted of a crime, per se.All the Namesys people are working for other employers now, on other things.Who's interested in maintaining or enhancing it?
Nobody as far as I can tell.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774594</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>D Ninja</author>
	<datestamp>1263483840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>if you ever need to fsck the data</p></div><p>My my!  The things they're doing with porn these days!</p>
	</htmltext>
<tokentext>if you ever need to fsck the dataMy my !
The things they 're doing with porn these days !</tokentext>
<sentencetext>if you ever need to fsck the dataMy my!
The things they're doing with porn these days!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770948</id>
	<title>Windows Driver</title>
	<author>Anonymous</author>
	<datestamp>1263464220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Might this prompt someone at Google to make an installable file system driver for Windows for EXT4? Right now, there is none, because of differing inode sizes and some extra features over EXT2 that EXT4 demands I think.</htmltext>
<tokentext>Might this prompt someone at Google to make an installable file system driver for Windows for EXT4 ?
Right now , there is none , because of differing inode sizes and some extra features over EXT2 that EXT4 demands I think .</tokentext>
<sentencetext>Might this prompt someone at Google to make an installable file system driver for Windows for EXT4?
Right now, there is none, because of differing inode sizes and some extra features over EXT2 that EXT4 demands I think.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773618</id>
	<title>Re:Btrfs?</title>
	<author>complete loony</author>
	<datestamp>1263476820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>From google's point of view it only has to be stable enough. They don't care that much if a node goes down or a single copy of a block of data becomes unavailable. What they care about is aggregate throughput for the entire cluster.</htmltext>
<tokentext>From google 's point of view it only has to be stable enough .
They do n't care that much if a node goes down or a single copy of a block of data becomes unavailable .
What they care about is aggregate throughput for the entire cluster .</tokentext>
<sentencetext>From google's point of view it only has to be stable enough.
They don't care that much if a node goes down or a single copy of a block of data becomes unavailable.
What they care about is aggregate throughput for the entire cluster.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771068</id>
	<title>where are the benchmarks?</title>
	<author>Alvaro Martinez</author>
	<datestamp>1263464760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>i di'dnt read the funky article because it's been slashdoted, but i'd like to see properly the benchmarks</htmltext>
<tokentext>i di'dnt read the funky article because it 's been slashdoted , but i 'd like to see properly the benchmarks</tokentext>
<sentencetext>i di'dnt read the funky article because it's been slashdoted, but i'd like to see properly the benchmarks</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772262</id>
	<title>Re:Well</title>
	<author>Anonymous</author>
	<datestamp>1263469980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I assume you mean <b>Increase</b> the signal-to-noise ratio.  Did you mean reduce the noise floor?</htmltext>
<tokentext>I assume you mean Increase the signal-to-noise ratio .
Did you mean reduce the noise floor ?</tokentext>
<sentencetext>I assume you mean Increase the signal-to-noise ratio.
Did you mean reduce the noise floor?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777126</id>
	<title>Re:Has Ted Cooked the Benchmarks Again?</title>
	<author>Lisandro</author>
	<datestamp>1263557400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Great post. Thank you for your insight!</htmltext>
<tokentext>Great post .
Thank you for your insight !</tokentext>
<sentencetext>Great post.
Thank you for your insight!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773226</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772392</id>
	<title>Re:Btrfs?</title>
	<author>shish</author>
	<datestamp>1263470640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Btrfs does have a giant list of really cool features; but from what I've seen of google's needs, they're at the complete opposite end of the spectrum (I'm surprised that they're using a filesystem at all, when they could just dump their data structure on the raw disk)</htmltext>
<tokentext>Btrfs does have a giant list of really cool features ; but from what I 've seen of google 's needs , they 're at the complete opposite end of the spectrum ( I 'm surprised that they 're using a filesystem at all , when they could just dump their data structure on the raw disk )</tokentext>
<sentencetext>Btrfs does have a giant list of really cool features; but from what I've seen of google's needs, they're at the complete opposite end of the spectrum (I'm surprised that they're using a filesystem at all, when they could just dump their data structure on the raw disk)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772006</id>
	<title>Re:Well</title>
	<author>icebraining</author>
	<datestamp>1263468840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You can configure an higher threshold; 1 should be enough to filter most ACs.</p></htmltext>
<tokentext>You can configure an higher threshold ; 1 should be enough to filter most ACs .</tokentext>
<sentencetext>You can configure an higher threshold; 1 should be enough to filter most ACs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771748</id>
	<title>Re:Btrfs?</title>
	<author>Lennie</author>
	<datestamp>1263467460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If they choose for the ext-family upgrade path, btrfs is also still possible in the future. You can even do an inplace upgrade from ext2, 3 (and probably 4, but I didn't see it in the text where I read about this feature) to btrfs.<br><br>Not that it matters, I'm fairly sure they don't do inplace upgrades. Atleast with ext4, if you want to benefit the most from performance and features, if I remember correctly, you should do a new filesystem, not an inplace upgrade.</htmltext>
<tokentext>If they choose for the ext-family upgrade path , btrfs is also still possible in the future .
You can even do an inplace upgrade from ext2 , 3 ( and probably 4 , but I did n't see it in the text where I read about this feature ) to btrfs.Not that it matters , I 'm fairly sure they do n't do inplace upgrades .
Atleast with ext4 , if you want to benefit the most from performance and features , if I remember correctly , you should do a new filesystem , not an inplace upgrade .</tokentext>
<sentencetext>If they choose for the ext-family upgrade path, btrfs is also still possible in the future.
You can even do an inplace upgrade from ext2, 3 (and probably 4, but I didn't see it in the text where I read about this feature) to btrfs.Not that it matters, I'm fairly sure they don't do inplace upgrades.
Atleast with ext4, if you want to benefit the most from performance and features, if I remember correctly, you should do a new filesystem, not an inplace upgrade.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771198</id>
	<title>Re:Use of commas.</title>
	<author>AvitarX</author>
	<datestamp>1263465180000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>There is no hard rule on this, and both can be ambiguous in different circumstances.</p><p><a href="http://en.wikipedia.org/wiki/Serial\_comma" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Serial\_comma</a> [wikipedia.org]</p></htmltext>
<tokentext>There is no hard rule on this , and both can be ambiguous in different circumstances.http : //en.wikipedia.org/wiki/Serial \ _comma [ wikipedia.org ]</tokentext>
<sentencetext>There is no hard rule on this, and both can be ambiguous in different circumstances.http://en.wikipedia.org/wiki/Serial\_comma [wikipedia.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771296</id>
	<title>Re:Windows Driver</title>
	<author>Anonymous</author>
	<datestamp>1263465480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>I can't imagine why it would.<br> <br>

To the best of my knowledge, Google uses pretty much no Windows servers themselves(at least not for any of their public facing products, they almost certainly have some kicking around) and "a vast number of instances of custom in-house server applications" is among the least plausible environments for a Windows server deployment, so that is unlikely to change.<br> <br>

On the desktop side, Google has a bunch of stuff that runs on Windows; but it all communicates with Google's servers over various ordinary web protocols and stores local files with the OS provided filesystem. The benefits of EXT4 on Windows would have to be pretty damn compelling for them to start requiring a kernel driver install and a spare unformatted partition.<br> <br>

I suppose it is conceivable that some Google employee might decide to do it, for more or less inscrutable reasons; but it would have no connection at all to Google's broader operation or strategy.</htmltext>
<tokentext>I ca n't imagine why it would .
To the best of my knowledge , Google uses pretty much no Windows servers themselves ( at least not for any of their public facing products , they almost certainly have some kicking around ) and " a vast number of instances of custom in-house server applications " is among the least plausible environments for a Windows server deployment , so that is unlikely to change .
On the desktop side , Google has a bunch of stuff that runs on Windows ; but it all communicates with Google 's servers over various ordinary web protocols and stores local files with the OS provided filesystem .
The benefits of EXT4 on Windows would have to be pretty damn compelling for them to start requiring a kernel driver install and a spare unformatted partition .
I suppose it is conceivable that some Google employee might decide to do it , for more or less inscrutable reasons ; but it would have no connection at all to Google 's broader operation or strategy .</tokentext>
<sentencetext>I can't imagine why it would.
To the best of my knowledge, Google uses pretty much no Windows servers themselves(at least not for any of their public facing products, they almost certainly have some kicking around) and "a vast number of instances of custom in-house server applications" is among the least plausible environments for a Windows server deployment, so that is unlikely to change.
On the desktop side, Google has a bunch of stuff that runs on Windows; but it all communicates with Google's servers over various ordinary web protocols and stores local files with the OS provided filesystem.
The benefits of EXT4 on Windows would have to be pretty damn compelling for them to start requiring a kernel driver install and a spare unformatted partition.
I suppose it is conceivable that some Google employee might decide to do it, for more or less inscrutable reasons; but it would have no connection at all to Google's broader operation or strategy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30791054</id>
	<title>Re:Windows Driver</title>
	<author>Anonymous</author>
	<datestamp>1263663960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I loaded the Win32 ext2 driver from SourceForge when I needed to share a USB HDD between WinXP, FreeBSD and various Linicies.  The drive was as large enough to use for backup and transfers for several desktops.  The USB2 bandwidth was better than the actual LAN throughput at the time as well.  It was the only filesystem that all OSes had drivers for and could be *counted on* to read/write at the time.  It can be a good tactical solution - ka5vjl</p></htmltext>
<tokentext>I loaded the Win32 ext2 driver from SourceForge when I needed to share a USB HDD between WinXP , FreeBSD and various Linicies .
The drive was as large enough to use for backup and transfers for several desktops .
The USB2 bandwidth was better than the actual LAN throughput at the time as well .
It was the only filesystem that all OSes had drivers for and could be * counted on * to read/write at the time .
It can be a good tactical solution - ka5vjl</tokentext>
<sentencetext>I loaded the Win32 ext2 driver from SourceForge when I needed to share a USB HDD between WinXP, FreeBSD and various Linicies.
The drive was as large enough to use for backup and transfers for several desktops.
The USB2 bandwidth was better than the actual LAN throughput at the time as well.
It was the only filesystem that all OSes had drivers for and could be *counted on* to read/write at the time.
It can be a good tactical solution - ka5vjl</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771296</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772346</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>Anonymous</author>
	<datestamp>1263470460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I never understood why people are so into journaling. My computer doesn't crash that often. And in the rare occassions when it does I really wouldn't mind a long fsck time that much.</p><p>A journal doesn't really improve the performance of the file system. It doesn't protect against data loss or corruption. It doesn't speed up normal usage. It doesn't even improve the integrity. All it does is a faster integrity check.</p><p>Now take a look at Soft Updates. That is interesting stuff. Soft Updates means the operations are performed in such a way that the file system is always in a valid state. With that you don't need to fsck at all. It is mostly about ordering the write operations, like write the metadata before you write the pointer to it. And there is also an implementation of it with UFS+Soft Updates in FreeBSD. As far as I know the only thing that doesn't work atomic is freeing of space. So after a crash disk space may be marked as used that isn't referenced anywhere. You can still work without any problems, you just have less space than you should have. After a crash you are ready to go and you can fsck to regain free space while the system is up and running. That is awesome.<br>Why is there no Linux filesystem that works this way? Because it is hard? What kind of lame excuse is that?</p><p>And then there is copy-on-write like ZFS has. Instead of overwriting in place a new block is allocated and then when everything is written the metadata is changed to point to the new block. That accomplishes the same as Soft Updates but in a much simpler way.</p><p>Why would anyone want a journal? If you care about integrity Soft Updates or copy-on-write is the correct solution. Journaling is just changing the problem instead of solving it.</p></htmltext>
<tokentext>I never understood why people are so into journaling .
My computer does n't crash that often .
And in the rare occassions when it does I really would n't mind a long fsck time that much.A journal does n't really improve the performance of the file system .
It does n't protect against data loss or corruption .
It does n't speed up normal usage .
It does n't even improve the integrity .
All it does is a faster integrity check.Now take a look at Soft Updates .
That is interesting stuff .
Soft Updates means the operations are performed in such a way that the file system is always in a valid state .
With that you do n't need to fsck at all .
It is mostly about ordering the write operations , like write the metadata before you write the pointer to it .
And there is also an implementation of it with UFS + Soft Updates in FreeBSD .
As far as I know the only thing that does n't work atomic is freeing of space .
So after a crash disk space may be marked as used that is n't referenced anywhere .
You can still work without any problems , you just have less space than you should have .
After a crash you are ready to go and you can fsck to regain free space while the system is up and running .
That is awesome.Why is there no Linux filesystem that works this way ?
Because it is hard ?
What kind of lame excuse is that ? And then there is copy-on-write like ZFS has .
Instead of overwriting in place a new block is allocated and then when everything is written the metadata is changed to point to the new block .
That accomplishes the same as Soft Updates but in a much simpler way.Why would anyone want a journal ?
If you care about integrity Soft Updates or copy-on-write is the correct solution .
Journaling is just changing the problem instead of solving it .</tokentext>
<sentencetext>I never understood why people are so into journaling.
My computer doesn't crash that often.
And in the rare occassions when it does I really wouldn't mind a long fsck time that much.A journal doesn't really improve the performance of the file system.
It doesn't protect against data loss or corruption.
It doesn't speed up normal usage.
It doesn't even improve the integrity.
All it does is a faster integrity check.Now take a look at Soft Updates.
That is interesting stuff.
Soft Updates means the operations are performed in such a way that the file system is always in a valid state.
With that you don't need to fsck at all.
It is mostly about ordering the write operations, like write the metadata before you write the pointer to it.
And there is also an implementation of it with UFS+Soft Updates in FreeBSD.
As far as I know the only thing that doesn't work atomic is freeing of space.
So after a crash disk space may be marked as used that isn't referenced anywhere.
You can still work without any problems, you just have less space than you should have.
After a crash you are ready to go and you can fsck to regain free space while the system is up and running.
That is awesome.
Why is there no Linux filesystem that works this way?
Because it is hard?
What kind of lame excuse is that?
And then there is copy-on-write like ZFS has.
Instead of overwriting in place a new block is allocated and then when everything is written the metadata is changed to point to the new block.
That accomplishes the same as Soft Updates but in a much simpler way.
Why would anyone want a journal?
If you care about integrity, Soft Updates or copy-on-write is the correct solution.
Journaling is just changing the problem instead of solving it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771674</id>
	<title>Re:As impressively as each other?! WTF?!</title>
	<author>Itninja</author>
	<datestamp>1263467040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Sorry but my internal lameness filter automatically ignores any sentence beginning with 'WTF'.</htmltext>
<tokentext>Sorry but my internal lameness filter automatically ignores any sentence beginning with 'WTF' .</tokentext>
<sentencetext>Sorry but my internal lameness filter automatically ignores any sentence beginning with 'WTF'.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780702</id>
	<title>Re:Time for a backup?</title>
	<author>Simetrical</author>
	<datestamp>1263579720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable.</p></div><p>They don't care.  I can't remember where I read it, but I read that they were using ext2 since they have no reason to use journaling &ndash; if a machine crashes, they just reimage it.  GFS ensures that everything is copied to multiple nodes, maybe even in physically disparate locations, so there's no need for recovery (such as via journaling) of individual nodes that have failed.  ext2 is just ext3 with journaling disabled.

</p><p>What Google wants, as the summary suggests, is performance, and ext4 will certainly provide that compared to ext2/3.</p></div>
	</htmltext>
<tokentext>If Google is actually still using ext2 rather than ext3 , ext4 will be significantly * more * reliable .
They do n't care .
I ca n't remember where I read it , but I read that they were using ext2 since they have no reason to use journaling    if a machine crashes , they just reimage it .
GFS ensures that everything is copied to multiple nodes , maybe even in physically disparate locations , so there 's no need for recovery ( such as via journaling ) of individual nodes that have failed .
ext2 is just ext3 with journaling disabled .
What Google wants , as the summary suggests , is performance , and ext4 will certainly provide that compared to ext2/3 .</tokentext>
<sentencetext>If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable.They don't care.
I can't remember where I read it, but I read that they were using ext2 since they have no reason to use journaling – if a machine crashes, they just reimage it.
GFS ensures that everything is copied to multiple nodes, maybe even in physically disparate locations, so there's no need for recovery (such as via journaling) of individual nodes that have failed.
ext2 is just ext3 with journaling disabled.
What Google wants, as the summary suggests, is performance, and ext4 will certainly provide that compared to ext2/3.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772322</id>
	<title>Re:I upgraded from ext3 to ext4 and</title>
	<author>Jake Griffin</author>
	<datestamp>1263470340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>(See first post)</htmltext>
<tokentext>( See first post )</tokentext>
<sentencetext>(See first post)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771118</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777208</id>
	<title>What about reliability of EXT4</title>
	<author>jassuncao</author>
	<datestamp>1263558240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>I live in a zone where power failures are very common. While I was using EXT3 I lost data several times due to power failures, and there was even a time a disk got corrupted.
After I switched to JFS the data loss has been minimal and I have never had a corrupted disk. Another thing I enjoy in JFS is that it's really quick to fsck a disk after a power failure.
So is it safe to switch to EXT4?</htmltext>
<tokentext>I live in a zone where power failures are very common .
While I was using EXT3 I lost data several times due to power failures , and there was even a time a disk got corrupted .
After I switched to JFS the data loss has been minimal and I have never had a corrupted disk .
Another thing I enjoy in JFS is that it 's really quick to fsck a disk after a power failure .
So is it safe to switch to EXT4 ?</tokentext>
<sentencetext>I live in a zone where power failures are very common.
While I was using EXT3 I lost data several times due to power failures, and there was even a time a disk got corrupted.
After I switched to JFS the data loss has been minimal and I have never had a corrupted disk.
Another thing I enjoy in JFS is that it's really quick to fsck a disk after a power failure.
So is it safe to switch to EXT4?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774204</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263480900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I usually let the bit-gods decide what data I have that is important enough to save.  Over the years the bit-gods have taught me that:</p><p>snip&gt;<br>Photos of my children: Not important.  If I need more baby photos, I can just have more babies.</p></div><p>Let me know how well "sudo make me a baby" (<a href="http://xkcd.com/149/" title="xkcd.com">xkcd</a> [xkcd.com] style) works out for you?</p></div>
	</htmltext>
<tokenext>I usually let the bit-gods decide what data I have that is important enough to save .
Over the years the bit-gods have taught me that : snip &gt;
Photos of my children : Not important .
If I need more baby photos , I can just have more babies .
Let me know how well " sudo make me a baby " ( xkcd [ xkcd.com ] style ) works out for you ?</tokentext>
<sentencetext>I usually let the bit-gods decide what data I have that is important enough to save.
Over the years the bit-gods have taught me that: snip&gt;
Photos of my children: Not important.
If I need more baby photos, I can just have more babies.
Let me know how well "sudo make me a baby" (xkcd [xkcd.com] style) works out for you?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770942</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771884</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>Anonymous</author>
	<datestamp>1263468240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
<htmltext><p>From the bug comments, this could be linked to a latent kernel bug in journal checksums, which went unnoticed until they were enabled by default after 2.6.31 and reverted in 2.6.32-rc6. If Ubuntu picked up that patch for their kernel, that would have caused corruptions.</p><p>http://bugzilla.kernel.org/show_bug.cgi?id=14354</p></htmltext>
<tokentext>From the bug comments , this could be linked to a latent kernel bug in journal checksums , which went unnoticed until they were enabled by default after 2.6.31 and reverted in 2.6.32-rc6 .
If Ubuntu picked up that patch for their kernel , that would have caused corruptions .
http://bugzilla.kernel.org/show_bug.cgi?id=14354</tokentext>
<sentencetext>From the bug comments, this could be linked to a latent kernel bug in journal checksums, which went unnoticed until they were enabled by default after 2.6.31 and reverted in 2.6.32-rc6.
If Ubuntu picked up that patch for their kernel, that would have caused corruptions.
http://bugzilla.kernel.org/show_bug.cgi?id=14354</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770592</id>
	<title>Digitizor link useless</title>
	<author>autocracy</author>
	<datestamp>1263463080000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
<htmltext><p>I managed to ease a pageview out of it. That said, the<nobr> <wbr></nobr>/. summary says all they say, and you're all better served by the source they point to, which is what SHOULD have been in the article summary instead of the Digitizor site.</p><p>See <a href="http://lists.openwall.net/linux-ext4/2010/01/04/8" title="openwall.net">http://lists.openwall.net/linux-ext4/2010/01/04/8</a> [openwall.net]</p></htmltext>
<tokentext>I managed to ease a pageview out of it .
That said , the /. summary says all they say , and you 're all better served by the source they point to , which is what SHOULD have been in the article summary instead of the Digitizor site .
See http://lists.openwall.net/linux-ext4/2010/01/04/8 [ openwall.net ]</tokentext>
<sentencetext>I managed to ease a pageview out of it.
That said, the /. summary says all they say, and you're all better served by the source they point to, which is what SHOULD have been in the article summary instead of the Digitizor site.
See http://lists.openwall.net/linux-ext4/2010/01/04/8 [openwall.net]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777292</id>
	<title>Re:Windows Driver</title>
	<author>drinkypoo</author>
	<datestamp>1263559320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Installing the ext2 IFS on Windows XP leads to frequent lockups and crashes. I've tried it on several machines now, and over several versions. Why would you want one anyway? Work in Linux.</p></htmltext>
<tokentext>Installing the ext2 IFS on Windows XP leads to frequent lockups and crashes .
I 've tried it on several machines now , and over several versions .
Why would you want one anyway ?
Work in Linux .</tokentext>
<sentencetext>Installing the ext2 IFS on Windows XP leads to frequent lockups and crashes.
I've tried it on several machines now, and over several versions.
Why would you want one anyway?
Work in Linux.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770948</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771648</id>
	<title>Has Ted Cooked the Benchmarks Again?</title>
	<author>segedunum</author>
	<datestamp>1263466980000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
<htmltext>I just hope Ted Ts'o hasn't been cooking the ext4 benchmarks again by making data notoriously less safe with a lot of retarded default settings. With data integrity restored ext4 should perform on a par with ext3, but should do far better on filesystems of hundreds of gigabytes or many terabytes. XFS has reigned there for many years so I take the article with a pinch of salt.</htmltext>
<tokentext>I just hope Ted Ts'o has n't been cooking the ext4 benchmarks again by making data notoriously less safe with a lot of retarded default settings .
With data integrity restored ext4 should perform on a par with ext3 , but should do far better on filesystems of hundreds of gigabytes or many terabytes .
XFS has reigned there for many years so I take the article with a pinch of salt .</tokentext>
<sentencetext>I just hope Ted T'so hasn't been cooking the ext4 benchmarks again by making data notoriously less safe with a lot of retarded default settings.
With data integrity restored ext4 should perform on a par with ext3, but should do far better in filesystem in hundreds of gigabytes or many terabytes.
XFS has reigned there for many years so I take the article with a pinch of salt.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771878</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>Anonymous</author>
	<datestamp>1263468120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>They employ the main developer of ext2, ext3 and ext4.<br><br>He probably knows a lot about it.</htmltext>
<tokentext>They employ the main developer of ext2 , ext3 and ext4 .
He probably knows a lot about it .</tokentext>
<sentencetext>They employ the main developer of ext2, ext3 and ext4.
He probably knows a lot about it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771122</id>
	<title>Re:No ReiserFS?</title>
	<author>Anonymous</author>
	<datestamp>1263464880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The association is too close in this case because a murderer's name is part of the file system name. If the product had been named something else the association wouldn't be there. Might as well stock the shelves with Bernardo Bath Oil and Dahmer Doodads. How well do you think that would go in the eyes of the corporate world?
So it's not because the creator of the filesystem committed a crime, it's because the product has an unsavoury name - those are two distinct and unrelated issues.</htmltext>
<tokentext>The association is too close in this case because a murderer 's name is part of the file system name .
If the product had been named something else the association would n't be there .
Might as well stock the shelves with Bernardo Bath Oil and Dahmer Doodads .
How well do you think that would go in the eyes of the corporate world ?
So it 's not because the creator of the filesystem committed a crime , it 's because the product has an unsavoury name - those are two distinct and unrelated issues .</tokentext>
<sentencetext>The association is too close in this case because a murderer's name is part of the file system name.
If the product had been named something else the association wouldn't be there.
Might as well stock the shelves with Bernardo Bath Oil and Dahmer Doodads.
How well do you think that would go in the eyes of the corporate world?
So it's not because the creator of the filesystem committed a crime, it's because the product has an unsavoury name - those are two distinct and unrelated issues.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771746</id>
	<title>Re:Use of commas.</title>
	<author>dloose</author>
	<datestamp>1263467400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext>Who gives a fuck about an Oxford comma?</htmltext>
<tokentext>Who gives a fuck about an Oxford comma ?</tokentext>
<sentencetext>Who gives a fuck about an Oxford comma?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770528</id>
	<title>Slashdotted already?</title>
	<author>ccandreva</author>
	<datestamp>1263462840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Looks like Digitizor already melted.</p></htmltext>
<tokentext>Looks like Digitizor already melted .</tokentext>
<sentencetext>Looks like Digitizor already melted.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774438</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263482760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>I usually let the bit-gods decide what data I have that is important enough to save.  Over the years the bit-gods have taught me that:</p><p>Music files: not important, Styx crossed the Styx to<nobr> <wbr></nobr>/dev/null in 2002<br>Essay written for sophomore year high school english: Important, I assume to haunt me in some future political race.<br>Porn collection: Like the subject matter within, it swells impressively, explodes, then enters a refractory period until it's ready to build up again.<br>C++ program that graphs the Mandelbrot set: Important.  I like feeling like an explorer navigating the cardioid's canyons.<br>Photos of my children: Not important.  If I need more baby photos, I can just have more babies.</p></div><p>one of the best posts I read in a while!!!</p></div>
	</htmltext>
<tokentext>I usually let the bit-gods decide what data I have that is important enough to save .
Over the years the bit-gods have taught me that :
Music files : not important , Styx crossed the Styx to /dev/null in 2002
Essay written for sophomore year high school English : Important , I assume to haunt me in some future political race .
Porn collection : Like the subject matter within , it swells impressively , explodes , then enters a refractory period until it 's ready to build up again .
C++ program that graphs the Mandelbrot set : Important .
I like feeling like an explorer navigating the cardioid 's canyons .
Photos of my children : Not important .
If I need more baby photos , I can just have more babies .
One of the best posts I read in a while ! ! !</tokentext>
<sentencetext>I usually let the bit-gods decide what data I have that is important enough to save.
Over the years the bit-gods have taught me that:
Music files: not important, Styx crossed the Styx to /dev/null in 2002
Essay written for sophomore year high school English: Important, I assume to haunt me in some future political race.
Porn collection: Like the subject matter within, it swells impressively, explodes, then enters a refractory period until it's ready to build up again.
C++ program that graphs the Mandelbrot set: Important.
I like feeling like an explorer navigating the cardioid's canyons.
Photos of my children: Not important.
If I need more baby photos, I can just have more babies.
One of the best posts I read in a while!!!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770942</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773658</id>
	<title>True journaling in EXT4?</title>
	<author>Anonymous</author>
	<datestamp>1263477060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Wasn't there an article here recently regarding ext3 vs ext4 and power failures? ext4, while compliant with a white paper, was not doing due diligence on the journaling stuff (as ext3 did) and should not be considered a true, hardcore journaling file-system. I recall a tiny uproar regarding ext4 not being an upgrade path for ext3 users if you care about your data. (I realise Google is going ext2 &gt; ext4.)<br>
&nbsp; The article claims the project leader/developers think they are in the right by sticking to the flawed white paper.</p></htmltext>
<tokentext>Was n't there an article here recently regarding ext3 vs ext4 and power failures ?
ext4 , while compliant with a white paper , was not doing due diligence on the journaling stuff ( as ext3 did ) and should not be considered a true , hardcore journaling file-system .
I recall a tiny uproar regarding ext4 not being an upgrade path for ext3 users if you care about your data .
( I realise Google is going ext2 &gt; ext4 . )
The article claims the project leader/developers think they are in the right by sticking to the flawed white paper .</tokentext>
<sentencetext>Wasn't there an article here recently regarding ext3 vs ext4 and power failures?
ext4, while compliant with a white paper, was not doing due diligence on the journaling stuff (as ext3 did) and should not be considered a true, hardcore journaling file-system.
I recall a tiny uproar regarding ext4 not being an upgrade path for ext3 users if you care about your data.
(I realise Google is going ext2 &gt; ext4.)
The article claims the project leader/developers think they are in the right by sticking to the flawed white paper.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771222</id>
	<title>Hey, ReiserFS</title>
	<author>zoomshorts</author>
	<datestamp>1263465240000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I hear it's killer!</p></htmltext>
<tokentext>I hear it 's killer !</tokentext>
<sentencetext>I hear it's killer!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774766</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>Anonymous</author>
	<datestamp>1263485280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you think that fsck time is a compelling reason for using a journal, you need to rethink.  Journals cost in both disk space (to store the journal) and code path/complexity (to run the journal code).  Is it worth the cost just to speed up fsck?  (By the way, ext4 fsck runs in a fraction of the time of ext2 fsck due to its better layout, even without a journal.)</p><p>You want journaling if you're interested in surviving power failures and the like, not because you want fast fsck.</p></htmltext>
<tokentext>If you think that fsck time is a compelling reason for using a journal , you need to rethink .
Journals cost in both disk space ( to store the journal ) and code path/complexity ( to run the journal code ) .
Is it worth the cost just to speed up fsck ?
( By the way , ext4 fsck runs in a fraction of the time of ext2 fsck due to its better layout , even without a journal . )
You want journaling if you 're interested in surviving power failures and the like , not because you want fast fsck .</tokentext>
<sentencetext>If you think that fsck time is a compelling reason for using a journal, you need to rethink.
Journals cost in both disk space (to store the journal) and code path/complexity (to run the journal code).
Is it worth the cost just to speed up fsck?
(By the way, ext4 fsck runs in a fraction of the time of ext2 fsck due to its better layout, even without a journal.)
You want journaling if you're interested in surviving power failures and the like, not because you want fast fsck.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30779580</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>Anonymous</author>
	<datestamp>1263574560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>The damn bug is STILL not fixed apparently. Some people get the corruption, and some don't. Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.</i></p><p>I've been following this bug quite closely and there's no really convincing evidence that it's actually an ext4 bug as opposed to a variety of other hardware and/or software problems which are showing up *because* people are doing md5sums etc. to check the new filesystem.</p><p>There was a serious data corruption bug in ext4 recently (can't remember the ref but it is pointed to in the Ubuntu bug report), but when it was suggested the Ubuntu problem related to this (confirmed) ext4 bug, the maintainers said that this was only in a RC kernel which never got anywhere near Ubuntu.</p><p>Anyhow, lots of Ubuntu home users are at least running their root fs as ext4 for the extra performance even if they are holding off on upgrading their<nobr> <wbr></nobr>/home or data partitions, and the experience generally seems positive.</p></htmltext>
<tokentext>The damn bug is STILL not fixed apparently .
Some people get the corruption , and some do n't .
Scares me enough to not even try using ext4 just yet , and I 'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic .
I 've been following this bug quite closely and there 's no really convincing evidence that it 's actually an ext4 bug as opposed to a variety of other hardware and/or software problems which are showing up * because * people are doing md5sums etc. to check the new filesystem .
There was a serious data corruption bug in ext4 recently ( ca n't remember the ref but it is pointed to in the Ubuntu bug report ) , but when it was suggested the Ubuntu problem related to this ( confirmed ) ext4 bug , the maintainers said that this was only in an RC kernel which never got anywhere near Ubuntu .
Anyhow , lots of Ubuntu home users are at least running their root fs as ext4 for the extra performance even if they are holding off on upgrading their /home or data partitions , and the experience generally seems positive .</tokentext>
<sentencetext>The damn bug is STILL not fixed apparently.
Some people get the corruption, and some don't.
Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.
I've been following this bug quite closely and there's no really convincing evidence that it's actually an ext4 bug as opposed to a variety of other hardware and/or software problems which are showing up *because* people are doing md5sums etc. to check the new filesystem.
There was a serious data corruption bug in ext4 recently (can't remember the ref but it is pointed to in the Ubuntu bug report), but when it was suggested the Ubuntu problem related to this (confirmed) ext4 bug, the maintainers said that this was only in an RC kernel which never got anywhere near Ubuntu.
Anyhow, lots of Ubuntu home users are at least running their root fs as ext4 for the extra performance even if they are holding off on upgrading their /home or data partitions, and the experience generally seems positive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774532</id>
	<title>Re:Time for a backup?</title>
	<author>symbolset</author>
	<datestamp>1263483420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Even Microsoft wouldn't do that.  They would be in danger of losing the data.
</p><p>/ducks, runs.</p></htmltext>
<tokentext>Even Microsoft would n't do that .
They would be in danger of losing the data .
/ducks , runs .</tokentext>
<sentencetext>Even Microsoft wouldn't do that.
They would be in danger of losing the data.
/ducks, runs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771708</id>
	<title>Re:Well</title>
	<author>Captain Splendid</author>
	<datestamp>1263467220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>Or, you could stop being lazy and go tweak your preferences, thereby saving the rest of us from your whining.</htmltext>
<tokentext>Or , you could stop being lazy and go tweak your preferences , thereby saving the rest of us from your whining .</tokentext>
<sentencetext>Or, you could stop being lazy and go tweak your preferences, thereby saving the rest of us from your whining.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572</id>
	<title>Use of commas.</title>
	<author>Anonymous</author>
	<datestamp>1263463020000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>Eats, shoots and leaves. Read it.</p></htmltext>
<tokentext>Eats , shoots and leaves .
Read it .</tokentext>
<sentencetext>Eats, shoots and leaves.
Read it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770954</id>
	<title>Re:Btrfs?</title>
	<author>Tubal-Cain</author>
	<datestamp>1263464220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>The chances of them using it would be pretty much nil. They are switching from <em>ext2</em>, and ext4's been "done" for over a year now. I'm sure they have a few benchmarks of btrfs, just not on as large of a scale as these tests were.</htmltext>
<tokentext>The chances of them using it would be pretty much nil .
They are switching from ext2 , and ext4 's been " done " for over a year now .
I 'm sure they have a few benchmarks of btrfs , just not on as large of a scale as these tests were .</tokentext>
<sentencetext>The chances of them using it would be pretty much nil.
They are switching from ext2, and ext4's been "done" for over a year now.
I'm sure they have a few benchmarks of btrfs, just not on as large of a scale as these tests were.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773060</id>
	<title>Re:As impressively as each other?! WTF?!</title>
	<author>mqduck</author>
	<datestamp>1263473700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Simply removing the second comma would make the sentence entirely correct:</p><p>"In their benchmarking, EXT4 and XFS performed as impressively as each other."</p><p>Adding "each" would make it a bit clearer, but the meaning is already obvious. I don't know why you think it has to be "THE other".</p></htmltext>
<tokentext>Simply removing the second comma would make the sentence entirely correct : " In their benchmarking , EXT4 and XFS performed as impressively as each other . "
Adding " each " would make it a bit clearer , but the meaning is already obvious .
I do n't know why you think it has to be " THE other " .</tokentext>
<sentencetext>Simply removing the second comma would make the sentence entirely correct: "In their benchmarking, EXT4 and XFS performed as impressively as each other."
Adding "each" would make it a bit clearer, but the meaning is already obvious.
I don't know why you think it has to be "THE other".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30783214</id>
	<title>Re:Time for a backup?</title>
	<author>tool462</author>
	<datestamp>1263547140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This one (http://xkcd.com/387/) was involved in the discussion when we decided to have kids.</p></htmltext>
<tokenext>This one ( http : //xkcd.com/387/ ) was involved in the discussion when we decided to have kids .</tokentext>
<sentencetext>This one (http://xkcd.com/387/) was involved in the discussion when we decided to have kids.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774204</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771038</id>
	<title>four?</title>
	<author>Anonymous</author>
	<datestamp>1263464580000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Last time I had a system to work with ext3 was still considered experimental.</p><p>*sniff*</p><p>I miss LFS.</p></htmltext>
<tokenext>Last time I had a system to work with ext3 was still considered experimental .
* sniff * I miss LFS .</tokentext>
<sentencetext>Last time I had a system to work with ext3 was still considered experimental.
*sniff*I miss LFS.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775922</id>
	<title>Re:As impressively as each other?! WTF?!</title>
	<author>Anonymous</author>
	<datestamp>1263498720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>From TFA:</p><div class="quote"><p>In their benchmarking, EXT4 and XFS performed, as impressively as each other.</p></div><p>WTF kind of retarded sentence is that?! Did Rob Smith help you write that article?!</p><p>In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.</p></div><p>What are you talking about? 'As impressively as each other' sounds totally natural to me.</p>
	</htmltext>
<tokenext>From TFA : In their benchmarking , EXT4 and XFS performed , as impressively as each other.WTF kind of retarded sentence is that ? !
Did Rob Smith help you write that article ?
! In their benchmarking of EXT4 and XFS , EACH performed as impressively as THE OTHER.What are you talking about ?
'As impressively as each other ' sounds totally natural to me .</tokentext>
<sentencetext>From TFA:In their benchmarking, EXT4 and XFS performed, as impressively as each other.WTF kind of retarded sentence is that?!
Did Rob Smith help you write that article?
!In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.What are you talking about?
'As impressively as each other' sounds totally natural to me.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780772</id>
	<title>Re:Btrfs?</title>
	<author>Simetrical</author>
	<datestamp>1263579960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I guess they didn't consider btrfs ready enough for benchmarking yet.</p></div><p>Aside from btrfs not being ready for production according to anyone, including its developers, it's probably not useful to Google. It has tons of awesome features, but they mostly make administration easier. Google already administers everything through their own user-space cross-computer filesystem, which can handle all their integrity/backup/live-upgrade/etc. requirements much better than btrfs probably could. What they want is raw performance, and when btrfs is ready for prime time, it will probably beat ext4 on some benchmarks (especially if you have, e.g., a "file copy" benchmark and let btrfs use COW) but lose on others.</p>
	</htmltext>
<tokenext>I guess they did n't consider btrfs ready enough for benchmarking yet.Aside from btrfs not being ready for production according to anyone , including its developers , it 's probably not useful to Google .
It has tons of awesome features , but they mostly make administration easier .
Google already administers everything through their own user-space cross-computer filesystem , which can handle all their integrity/backup/live upgrade/etc .
requirements much better than btrfs probably could .
What they want is raw performance , and when btrfs is ready for prime time , it will probably beat ext4 on some benchmarks ( especially if you have , e.g. , a " file copy " benchmark and let btrfs use COW ) but lose on others .</tokentext>
<sentencetext>I guess they didn't consider btrfs ready enough for benchmarking yet.Aside from btrfs not being ready for production according to anyone, including its developers, it's probably not useful to Google.
It has tons of awesome features, but they mostly make administration easier.
Google already administers everything through their own user-space cross-computer filesystem, which can handle all their integrity/backup/live upgrade/etc.
requirements much better than btrfs probably could.
What they want is raw performance, and when btrfs is ready for prime time, it will probably beat ext4 on some benchmarks (especially if you have, e.g., a "file copy" benchmark and let btrfs use COW) but lose on others.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771014</id>
	<title>Re:Time for a backup?</title>
	<author>at_slashdot</author>
	<datestamp>1263464460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><i>"backups are never a bad idea."</i></p><p>Depends; for example, you reduce the security of your data with the number of backups you keep (you could encrypt them, but that has its own problems).</p></htmltext>
<tokenext>" backups are never a bad idea .
" Depends , for example you reduce the security of data with the number of backups you keep ( you could encrypt them but that has it 's own problems ) .</tokentext>
<sentencetext>"backups are never a bad idea.
"Depends, for example you reduce the security of data with the number of backups you keep (you could encrypt them but that has it's own problems).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770756</id>
	<title>Re:No ReiserFS?</title>
	<author>pdbaby</author>
	<datestamp>1263463560000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>...or maybe the fact that he's no longer involved brings up questions about its future direction. I'm sure they took a look at reiserfs previously.</htmltext>
<tokenext>...or maybe the fact that he 's no longer involved brings up questions about its future direction .
I 'm sure they took a look at reiserfs previously</tokentext>
<sentencetext>...or maybe the fact that he's no longer involved brings up questions about its future direction.
I'm sure they took a look at reiserfs previously</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774620</id>
	<title>Re:No ReiserFS?</title>
	<author>Xabraxas</author>
	<datestamp>1263484200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The problem with ReiserFS is that Reiser3 is old and lacking features compared to other filesystems like XFS and EXT4.  Reiser4 isn't part of the kernel and probably never will be, so that could end up being quite problematic, especially in the future.</htmltext>
<tokenext>The problem with ReiserFS is that Reiser3 is old and lacking features compared to other filesystems like XFS and EXT4 .
Rieser4 is n't a part of the kernel and probably never will be so that could end up being quite problematic , especially in the future .</tokentext>
<sentencetext>The problem with ReiserFS is that Reiser3 is old and lacking features compared to other filesystems like XFS and EXT4.
Rieser4 isn't a part of the kernel and probably never will be so that could end up being quite problematic, especially in the future.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</id>
	<title>Google doesn't need journaling?</title>
	<author>Anonymous</author>
	<datestamp>1263463200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker.  It's interesting to note that Google never felt it needed that functionality.<br> <br>

Additionally, I was under the impression that Google uses massive numbers of commodity consumer-grade hard drives, as opposed to high-grade stuff, which I presume is less likely to err.  Couple this fact with the massive amount of data Google is working with, and there have got to be a lot of filesystem errors, no?<br> <br>

Can anyone else with experience with big database stuff hint as to why Google would not need to fsck their data (often enough for EXT3 to be worthwhile)?  Is it cheaper just to overwrite the data from some backup elsewhere at this scale?  How do they know the backup is clean without fscking that?</htmltext>
<tokenext>The main advantage of EXT3 over EXT2 is that , with journaling , if you ever need to fsck the data , it goes a LOT quicker .
It 's interesting to note that Google never felt it needed that functionality .
Additionally , I was under the impression that Google used massive numbers of commodity consumer-grade harddrives , as opposed to high-grade stuff which I presume is less likely to err .
Couple this fact with the massive amount of data Google is working with and there has got to be a lot of filesystem errors , no ?
Can anyone else with experience with big database stuff hint as to why Google would not need to fsck their data ( often enough for EXT3 to be worthwhile ) ?
Is it cheaper just to overwrite the data from some backup elsewhere at this scale ?
How do they know the backup is clean without fscking that ?</tokentext>
<sentencetext>The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker.
It's interesting to note that Google never felt it needed that functionality.
Additionally, I was under the impression that Google used massive numbers of commodity consumer-grade harddrives, as opposed to high-grade stuff which I presume is less likely to err.
Couple this fact with the massive amount of data Google is working with and there has got to be a lot of filesystem errors, no?
Can anyone else with experience with big database stuff hint as to why Google would not need to fsck their data (often enough for EXT3 to be worthwhile)?
Is it cheaper just to overwrite the data from some backup elsewhere at this scale?
How do they know the backup is clean without fscking that?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770660</id>
	<title>Re:Time for a backup?</title>
	<author>Monkeedude1212</author>
	<datestamp>1263463320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It sounds like EXT4 is fully compatible with 2 and 3, so even an EXT2 drive can be mounted as EXT4, which means the chances for failure are seriously reduced.</p><p>But I totally hear what you're saying. Whenever you upgrade Anything, nothing is SUPPOSED to go wrong.</p><p>However, it always does.</p></htmltext>
<tokenext>It sounds like EXT4 is fully compatible with 2 and 3 , so even an EXT2 drive can be mounted as EXT4 , which means the chances for failure are seriously reduced.But I totally hear what you 're saying .
Whenever you upgrade Anything , nothing is SUPPOSED to go wrong.However , It always does .</tokentext>
<sentencetext>It sounds like EXT4 is fully compatible with 2 and 3, so even an EXT2 drive can be mounted as EXT4, which means the chances for failure are seriously reduced.But I totally hear what you're saying.
Whenever you upgrade Anything, nothing is SUPPOSED to go wrong.However, It always does.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
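The "easier upgrade path" the parent alludes to is the standard e2fsprogs one; a rough sketch, assuming a hypothetical unmounted partition /dev/sdb1 (and a backup first):

```shell
# An ext2/ext3 volume can simply be mounted with the ext4 driver
# (no on-disk change; files keep the old block-map layout):
mount -t ext4 /dev/sdb1 /mnt

# Or convert in place by enabling the ext4 feature flags
# (run on an unmounted filesystem; only newly written files use extents):
umount /mnt
tune2fs -O extents,uninit_bg,dir_index /dev/sdb1

# A full fsck is mandatory after flipping the feature bits:
e2fsck -fDp /dev/sdb1
mount -t ext4 /dev/sdb1 /mnt
```

This is why ext2 to ext4 is "easier" than a move to XFS: no reformat or bulk data copy is required.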
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772192</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>Randle_Revar</author>
	<datestamp>1263469680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ubuntu makes no sense for a company with Google's size, resources, and needs</p></htmltext>
<tokenext>Ubuntu makes no sense for a company with Google 's size , resources , and needs</tokentext>
<sentencetext>Ubuntu makes no sense for a company with Google's size, resources, and needs</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771742</id>
	<title>Re:No ReiserFS?</title>
	<author>gmuslera</author>
	<datestamp>1263467400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>To make the move to this new filesystem, they hired Ted Ts'o (the current maintainer of ext4). Hans wasn't available at the moment, and it would be bad to have a famous employee that, well, did evil.</htmltext>
<tokenext>To make the move to this new filesystem , they hired Ted T'so ( actual maintainer of ext4 ) .
Hans was n't available for the moment , and would be bad to have a famous employee that , well , did evil .</tokentext>
<sentencetext>To make the move to this new filesystem, they hired Ted T'so (actual maintainer of ext4).
Hans wasn't available for the moment, and would be bad to have a famous employee that, well, did evil.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771460</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>crazyvas</author>
	<datestamp>1263466200000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>They use fast replication techniques to restore disk servers (chunkservers in GFS terminology) when they fail.

<p>The failure could be due to a component failure, disk corruption, or even simply the killing of the process. The detection is done via checksumming (as opposed to fscking), which also takes care of detecting higher-level issues that fscking might miss.

</p><p> Yes, it is much cheaper for them to overwrite data from another replica (3 replicas for all chunkservers is the default) using their fast re-replication techniques rather than trying to fsck.

</p><p> Check this paper out (see pdf link at bottom of page) under "Section 5: Fault Tolerance and Diagnosis" for more info:
<br>
<a href="http://labs.google.com/papers/gfs.html" title="google.com" rel="nofollow">http://labs.google.com/papers/gfs.html</a> [google.com]</p></htmltext>
<tokenext>They use fast replication techniques to restore disk servers ( chunkservers in GFS terminology ) when they fail .
The failure could be because of a component failure , disk corruption , or even a simply killing of the process .
The detection is done via checksumming ( as opposed to fscking ) , which also takes care of detecting higher-level issues that fscking might miss .
Yes , it is much cheaper for them to overwrite data from another replica ( 3 replicas for all chunkservers is the default ) using their fast re-replication techniques rather than trying to fsck .
Check this paper out ( see pdf link at bottom of page ) under " Section 5 : Fault Tolerance and Diagnosis " for more info : http : //labs.google.com/papers/gfs.html [ google.com ]</tokentext>
<sentencetext>They use fast replication techniques to restore disk servers (chunkservers in GFS terminology) when they fail.
The failure could be because of a component failure, disk corruption, or even a simply killing of the process.
The detection is done via checksumming (as opposed to fscking), which also takes care of detecting higher-level issues that fscking might miss.
Yes, it is much cheaper for them to overwrite data from another replica (3 replicas for all chunkservers is the default) using their fast re-replication techniques rather than trying to fsck.
Check this paper out (see pdf link at bottom of page) under "Section 5: Fault Tolerance and Diagnosis" for more info:

http://labs.google.com/papers/gfs.html [google.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
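The checksumming-instead-of-fsck idea above can be sketched in a few lines. This is illustrative only, not Google's code: the GFS paper describes 32-bit checksums over 64 KB blocks, so the SHA-256 here is just a stand-in hash.

```python
import hashlib

CHUNK = 64 * 1024  # checksum granularity, echoing the GFS paper's 64 KB blocks

def checksums(data: bytes) -> list[str]:
    """Checksum each fixed-size block so corruption is localized to a block."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def corrupted_blocks(data: bytes, expected: list[str]) -> list[int]:
    """Return indices of blocks whose checksum no longer matches."""
    return [i for i, (got, want) in enumerate(zip(checksums(data), expected))
            if got != want]

# A replica detects silent corruption without any fsck:
original = bytes(200 * 1024)                              # 200 KB -> 4 blocks
stored = checksums(original)
damaged = original[:70000] + b"\x01" + original[70001:]   # flip one byte
print(corrupted_blocks(damaged, stored))                  # -> [1]
```

Only the failed block needs re-replicating from another chunkserver, which is why this beats a whole-volume fsck at their scale.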
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771918</id>
	<title>Re:GFS</title>
	<author>joib</author>
	<datestamp>1263468360000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>I believe GFS uses a local fs on each node to take care of, well, all the stuff that a normal local fs like ext3 does. GFS only does the distributed stuff on top of that.</p></htmltext>
<tokenext>I believe GFS uses a local fs on each node to take care of , well , all the stuff that a normal local fs like ext3 does .
GFS only does the distributed stuff on top of that .</tokentext>
<sentencetext>I believe GFS uses a local fs on each node to take care of, well, all the stuff that a normal local fs like ext3 does.
GFS only does the distributed stuff on top of that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770926</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771488</id>
	<title>well, duh</title>
	<author>Dan Yocum</author>
	<datestamp>1263466320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"In their benchmarking, EXT4 and <strong>XFS performed</strong>, as <strong>impressively</strong> as each other."</p><p>Welcome to 2001, subby.  Glad you could make it this decade.</p><p>I completely understand them not jumping to XFS, though.  I'd never want to convert exabytes of data from one FS to another.</p></htmltext>
<tokenext>" In their benchmarking , EXT4 and XFS performed , as impressively as each other .
" Welcome to 2001 , subby .
Glad you could make it this decade.I completely understand them not jumping to XFS , though .
I 'd never want to convert exabytes of data from one FS to another .</tokentext>
<sentencetext>"In their benchmarking, EXT4 and XFS performed, as impressively as each other.
"Welcome to 2001, subby.
Glad you could make it this decade.I completely understand them not jumping to XFS, though.
I'd never want to convert exabytes of data from one FS to another.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777678</id>
	<title>Re:Btrfs?</title>
	<author>cgenman</author>
	<datestamp>1263563400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It has been a while since I built a Linux system.  Can someone comment on the specific advantages of EXT4 over EXT2?</p></htmltext>
<tokenext>It has been a while since I built a Linux system .
Can someone comment on the specific advantages of EXT4 over EXT2 ?</tokentext>
<sentencetext>It has been a while since I built a Linux system.
Can someone comment on the specific advantages of EXT4 over EXT2?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771118</id>
	<title>I upgraded from ext3 to ext4 and</title>
	<author>Anonymous</author>
	<datestamp>1263464880000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Do I get the front page of slashdot?  No, just a comment!  How much is Google paying you for this publicity, Mr. Malda?</p></htmltext>
<tokenext>Do I get the front page of slashdot ?
No just a comment !
How much is Google paying you for this publicity Mr. Malda ?</tokentext>
<sentencetext>Do I get the front page of slashdot?
No just a comment!
How much is Google paying you for this publicity Mr. Malda?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770828</id>
	<title>Re:Time for a backup?</title>
	<author>berashith</author>
	<datestamp>1263463740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is the beta over yet? I don't give good SLAs on retention and recovery to dev systems.</p></htmltext>
<tokenext>is the beta over yet ?
I dont give good SLAs on retention and recovery to dev systems .</tokentext>
<sentencetext>is the beta over yet?
I dont give good SLAs on retention and recovery to dev systems .</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780942</id>
	<title>Re:No ReiserFS?</title>
	<author>bluefoxlucid</author>
	<datestamp>1263580680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Creating ReiserFS was a huge offense and it's appropriate to banish both Hans and the file system itself to the void.</htmltext>
<tokenext>Creating ReiserFS was a huge offense and it 's appropriate to banish both Hans and the file system itself to the void .</tokentext>
<sentencetext>Creating ReiserFS was a huge offense and it's appropriate to banish both Hans and the file system itself to the void.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771732</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>the_other_chewey</author>
	<datestamp>1263467400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker. It's interesting to note that Google never felt it needed that functionality.</p></div><p>
Doing fsck runs is just not worth it for them. One of the first contributions from Google to the ext4 driver was the ability to run ext4 volumes without journaling: all the performance benefits of ext4, none of the performance penalties of journaling.</p><p>
If there is (possible) FS corruption, they just rebuild it from scratch from another copy of the data.</p><p>
This comes from Ted Ts'o's FOSDEM09 keynote, BTW. Very interesting talk.</p>
	</htmltext>
<tokenext>The main advantage of EXT3 over EXT2 is that , with journaling , if you ever need to fsck the data , it goes a LOT quicker .
It 's interesting to note that Google never felt it needed that functionality .
Doing fsck runs is just not worth it for them .
One of the first contributions from google to the ext4 driver was the possibility to run ext4 volumes without journaling : All the performance benefits of ext4 , none of the performance penalties of journaling .
If there is ( possible ) FS corruption , they just rebuild it from scratch from another copy of the data .
This comes from Ted T'so 's FOSDEM09 keynote BTW .
Very interesting talk .</tokentext>
<sentencetext>The main advantage of EXT3 over EXT2 is that, with journaling, if you ever need to fsck the data, it goes a LOT quicker.
It's interesting to note that Google never felt it needed that functionality.
Doing fsck runs is just not worth it for them.
One of the first contributions from google to the ext4
driver was the possibility to run ext4 volumes without journaling: All the performance benefits of ext4,
none of the performance penalties of journaling.
If there is (possible) FS corruption, they just rebuild it from scratch from another copy of the data.
This comes from Ted T'so's FOSDEM09 keynote BTW.
Very interesting talk.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
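The journal-less mode described above shipped as ext4's no-journal feature; a sketch of how it is toggled with standard e2fsprogs tools, assuming a hypothetical spare partition /dev/sdc1:

```shell
# Create an ext4 filesystem with no journal at all:
mkfs.ext4 -O ^has_journal /dev/sdc1

# Or strip the journal from an existing, unmounted ext4 volume:
tune2fs -O ^has_journal /dev/sdc1
e2fsck -f /dev/sdc1

# Verify: "has_journal" should be absent from the feature list
dumpe2fs -h /dev/sdc1 | grep -i features
```

You keep extents, delayed allocation, and the other ext4 improvements but lose crash consistency, a trade-off that only makes sense when, as in the comment above, corruption is handled by rebuilding from another replica.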
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770886</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263463980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Freeloaders don't get to choose.</p></htmltext>
<tokenext>Free loaders do n't get to choose .</tokentext>
<sentencetext>Free loaders don't get to choose.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30778442</id>
	<title>WTF? GFS DUDE</title>
	<author>hesaigo999ca</author>
	<datestamp>1263568500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Google has their own proprietary file system called GFS (and now GFS2); who came up with this rubbish?<br>They have a special file system because of their design demands and the inherent flaws in most file systems when you cluster vast numbers of computers together.</p><p>What does the writer of this post think he will accomplish by sending out this garbage, is what I want to know!</p></htmltext>
<tokenext>Google has their own proprietary file system called gfs ( and now gfs2 ) , who came up with this rubbish ? They have special file system because of their design demands and the inherent flaws   in most file systems when you cluster vast amounts of computers together.What does the writer of this post think he will accomplish by sending out this garbage is what I want to know !</tokentext>
<sentencetext>Google has their own proprietary file system called gfs (and now gfs2), who came up with this rubbish?They have special file system because of their design demands and the inherent flaws
  in most file systems when you cluster vast amounts of computers together.What does the writer of this post think he will accomplish by sending out this garbage is what I want to know!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771292</id>
	<title>Give us a +-0 Counterbalance</title>
	<author>itomato</author>
	<datestamp>1263465480000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>When does black become white?<br>#CCCCCC or #888888</p><p>Is there overlap with Flamebait?</p><p>When does an otherwise 'troll' moderation-worthy comment lose out on status that could validate 19 responses, with 50% scoring +2?</p><p>Sometimes a troll is a troll, but sometimes it's just a shadow.</p></htmltext>
<tokenext>When does black become white ? # CCCCCC or # 888888Is there overlap with Flamebait ? When does an otherwise 'troll ' moderation-worthy comment lose out on status that could validate 19 responses , with 50 % scoring + 2 ? Sometimes a troll is a troll , but sometimes its just a shadow .</tokenext>
<sentencetext>When does black become white?#CCCCCC or #888888Is there overlap with Flamebait?When does an otherwise 'troll' moderation-worthy comment lose out on status that could validate 19 responses, with 50% scoring +2?Sometimes a troll is a troll, but sometimes its just a shadow.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</id>
	<title>Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263462720000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>I guess now is as good a time as any to go through my Gmail and Google Docs and make local backups. I'm sure my info is safe, but I have been through these types of 'upgrades' at work before, and every once in a while... well, let's just say backups are never a bad idea.</htmltext>
<tokenext>I guess now is as good as any to go through my Gmail and Google Docs and make local backups .
I 'm sure my info is safe , but I have been through these types of 'upgrades ' at work before and every once in a while....well , let 's just say backups are never a bad idea .</tokentext>
<sentencetext>I guess now is as good as any to go through my Gmail and Google Docs and make local backups.
I'm sure my info is safe, but I have been through these types of 'upgrades' at work before and every once in a while....well, let's just say backups are never a bad idea.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775002</id>
	<title>Re:Time for a backup?</title>
	<author>bennomatic</author>
	<datestamp>1263487260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Did you AC troll yourself for the sake of a punch line?</htmltext>
<tokenext>Did you AC troll yourself for the sake of a punch line ?</tokentext>
<sentencetext>Did you AC troll yourself for the sake of a punch line?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774276</id>
	<title>Re:Use of commas.</title>
	<author>Anonymous</author>
	<datestamp>1263481560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Red, Green, Blue and White: this list contains three colour schemes<br>Red, Green, Blue, and White: this list contains four colour schemes</p><p>You are correct to put the comma before the "and".</p></htmltext>
<tokenext>Red , Green , Blue and White : this list contain three colour schemesRed , Green , Blue , and White : this list contains four colour schemesYou are correct to put the comma before the and .</tokentext>
<sentencetext>Red, Green, Blue and White : this list contain three colour schemesRed, Green, Blue, and White : this list contains four colour schemesYou are correct to put the comma before the and.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770942</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263464220000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>I usually let the bit-gods decide what data I have that is important enough to save.  Over the years the bit-gods have taught me that:</p><p>Music files: not important, Styx crossed the Styx to<nobr> <wbr></nobr>/dev/null in 2002<br>Essay written for sophomore year high school English: Important, I assume to haunt me in some future political race.<br>Porn collection: Like the subject matter within, it swells impressively, explodes, then enters a refractory period until it's ready to build up again.<br>C++ program that graphs the Mandelbrot set: Important.  I like feeling like an explorer navigating the cardioid's canyons.<br>Photos of my children: Not important.  If I need more baby photos, I can just have more babies.</p></htmltext>
<tokenext>I usually let the bit-gods decide what data I have that is important enough to save .
Over the years the bit-gods have taught me that : Music files : not important , Styx crossed the Styx to /dev/null in 2002Essay written for sophomore year high school english : Important , I assume to haunt me in some future political race.Porn collection : Like the subject matter within , it swells impressively , explodes , then enters a refractory period until it 's ready to build up again.C + + program that graphs the Mandelbrot set : Important .
I like feeling like an explorer navigating the cardioid 's canyons.Photos of my children : Not important .
If I need more baby photos , I can just have more babies .</tokentext>
<sentencetext>I usually let the bit-gods decide what data I have that is important enough to save.
Over the years the bit-gods have taught me that:Music files: not important, Styx crossed the Styx to /dev/null in 2002Essay written for sophomore year high school english: Important, I assume to haunt me in some future political race.Porn collection: Like the subject matter within, it swells impressively, explodes, then enters a refractory period until it's ready to build up again.C++ program that graphs the Mandelbrot set: Important.
I like feeling like an explorer navigating the cardioid's canyons.Photos of my children: Not important.
If I need more baby photos, I can just have more babies.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775742</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>tytso</author>
	<datestamp>1263495780000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>So Canonical has never reported this bug to LKML or to the linux-ext4 list as far as I am aware.  No other distribution has complained about this &gt; 512MB bug, either.  The first I heard about it was when I scanned the Slashdot comments.</p><p>Now that I know about it, I'll try to reproduce it with an upstream kernel.  I'll note that in 9.04, Ubuntu had a bug which, as far as I know, must have been caused by their screwing up some patch backports.  Only Ubuntu's kernel had a bug where rm'ing a large directory hierarchy would have a tendency to cause a hang.  No one was able to reproduce it on an upstream kernel.</p><p>I will say that I don't ever push patches to Linus without running them through the XFS QA test suite (which is now generalized enough that it can be used on a number of file systems other than just XFS).   If it doesn't have a "write a 640 MB file and make sure it isn't corrupted" test, we can add one, and then all of the file systems which use the XFSQA test suite can benefit from it.</p><p>(I was recently proselytizing the use of the XFS QA suite to some Reiserfs and BTRFS developers.  The "competition" between file systems is really more of a fanboy/fangirl thing than at the developer level.   In fact, Chris Mason, the head btrfs developer, has helped me with some tricky ext3/ext4 bugs, and in the past couple of years I've been encouraging various companies to donate engineering time to help work on btrfs.  With the exception of Hans Reiser, who has in the past accused me of trying to actively sabotage his project --- not true as far as I'm concerned --- we all are a pretty friendly bunch and work together and help each other out as we can.)</p></htmltext>
<tokenext>So Canonical has never reported this bug to LKML or to the linux-ext4 list as far as I am aware .
No other distribution has complained about this &gt; 512MB bug , either .
The first I heard about it is when I scanned the Slashdot comments.Now that I 'll know about it , I 'll try to reproduce it with an upstream kernel .
I 'll note that in 9.04 , Ubuntu had a bug which as far as I know , must have been caused by their screwing up some patch backports .
Only Ubuntu 's kernel had a bug where rm'ing a large directory hierarchy would have a tendency to cause a hang .
No one was able to reproduce it on an upstream kernel,I will say that I do n't ever push patches to Linus without running them through the XFS QA test suite .
( Which is now generalized enough so it can be used on a number of file systems other than just XFS ) .
If it does n't have a " write a 640 MB file " and make sure it is n't corrupted , we can add it and then all of the file systems which use the XFSQA test suite can benefit from it .
( I was recently proselytizing the use of the XFS QA suite to some Reiserfs and BTRFS developers .
The " competition " between file systems is really more of a fanboy/fangirl thing than at the developer level .
In fact , Chris Mason , the head btrfs developer , has helped me with some tricky ext3/ext4 bugs , and in the past couple of years I 've been encouraging various companies to donate engineering time to help work on btrfs .
With the exception of Hans Reiser , who has in the past accused me of trying to actively sabotage his project --- not true as far as I 'm concerned --- we all are a pretty friendly bunch and work together and help each other out as we can .
)</tokentext>
<sentencetext>So Canonical has never reported this bug to LKML or to the linux-ext4 list as far as I am aware.
No other distribution has complained about this &gt; 512MB bug, either.
The first I heard about it is when I scanned the Slashdot comments.Now that I'll know about it, I'll try to reproduce it with an upstream kernel.
I'll note that in 9.04, Ubuntu had a bug which as far as I know, must have been caused by their screwing up some patch backports.
Only Ubuntu's kernel had a bug where rm'ing a large directory hierarchy would have a tendency to cause a hang.
No one was able to reproduce it on an upstream kernel,I will say that I don't ever push patches to Linus without running them through the XFS QA test suite.
(Which is now generalized enough so it can be used on a number of file systems other than just XFS).
If it doesn't have a "write a 640 MB file" and make sure it isn't corrupted, we can add it and then all of the file systems which use the XFSQA test suite can benefit from it.
(I was recently proselytizing the use of the XFS QA suite to some Reiserfs and BTRFS developers.
The "competition" between file systems is really more of a fanboy/fangirl thing than at the developer level.
In fact, Chris Mason, the head btrfs developer, has helped me with some tricky ext3/ext4 bugs, and in the past couple of years I've been encouraging various companies to donate engineering time to help work on btrfs.
With the exception of Hans Reiser, who has in the past accused me of trying to actively sabotage his project --- not true as far as I'm concerned --- we all are a pretty friendly bunch and work together and help each other out as we can.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770696</id>
	<title>512 MB size limit (bug) gone?</title>
	<author>Gothmolly</author>
	<datestamp>1263463380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Did they fix that nasty "if you have files &gt; 512MB kiss them goodbye" bug?</p></htmltext>
<tokenext>Did they fix that nasty " if you have files &gt; 512MB kiss them goodbye " bug ?</tokentext>
<sentencetext>Did they fix that nasty "if you have files &gt; 512MB kiss them goodbye" bug ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773712</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>rdnetto</author>
	<datestamp>1263477420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I learnt the hard way - when I upgraded to 9.04 (and specifically selected ext4) I found that the system would crash when I emptied trash. Ever since then I've stuck to XFS.</p></htmltext>
<tokenext>I learnt the hard way - when I upgraded to 9.04 ( and specifically selected ext4 ) I found that the system would crash when I emptied trash .
Ever since then I 've stuck to XFS .</tokentext>
<sentencetext>I learnt the hard way - when I upgraded to 9.04 (and specifically selected ext4) I found that the system would crash when I emptied trash.
Ever since then I've stuck to XFS.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777340</id>
	<title>Re:Use of commas.</title>
	<author>Anonymous</author>
	<datestamp>1263559740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I prefer the Oxford comma, you insensitive clod!</p></htmltext>
<tokenext>I prefer the Oxford comma , you insensitive clod !</tokentext>
<sentencetext>I prefer the Oxford comma, you insensitive clod!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772584</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>Jorl17</author>
	<datestamp>1263471480000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>Of course Google knows what it's doing! How on earth would we insist on having ALL of our private data on THEIR cloud?</htmltext>
<tokenext>Of course Google knows what it 's doing !
How on earth would we insist on having ALL of our private data on THEIR cloud ?</tokentext>
<sentencetext>Of course Google knows what it's doing!
How on earth would we insist on having ALL of our private data on THEIR cloud?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773538</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>TheRaven64</author>
	<datestamp>1263476400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm not sure how you think fsck or journaling work...</p><p>
A tool like fsck starts at the root inode of a filesystem and then walks the tree, looking for various things that can be caused by writes happening in the wrong order.  For example, it does garbage collection so that inodes that have a reference count greater than 0 but which are not actually referenced in a directory entry are removed (or moved to a folder where you can check if they are parts of a file that you didn't mean to delete).  It will also check that the amount of free space and the size of the disk minus the size of the files it can find are the same.  </p><p>
With journaling, this is simpler because you have a much smaller number of things that can go wrong.  With a journaling FS, you first write to disk that you are going to make a change, then you make the change, then you erase that bit of the journal.  If the power fails before you write to the journal, you lose the transaction.  If it fails after you write the journal, then you may be able to replay the transaction from the journal (if it's something simple).  If it fails after, then you look in the journal, see it's already been done by checking the on-disk state, and delete the journal.</p><p>
As a simple example, consider moving a file from one directory to another.  You need to add an entry to the target directory, then you need to remove it from the old one.  In the middle, the file will be referenced in two places but its reference count will still be one, so unlinking it in the old directory will delete it in both places and leave a dangling reference in the other.  If the power fails at this point, fsck will walk the directory tree, find two references to the same inode, and either delete one of them or increment the reference count of the file and report an error (this behaviour is implementation dependent, your fsck may do something completely different, this is just an example).  </p><p>
With journaling, you first write something in the log saying which file you are moving.  Then you update the target directory, then you update the source directory, then you update the journal again to say that you've done it.  Now this time when power goes out in the middle, fsck can look at the journal and immediately see the two directories that are in an inconsistent state.  First it will check the target directory, and if the file isn't referenced there then it will add a reference.  Then it will check the source directory and remove the entry there, if it exists.  Then it will delete the journal entry.  At every point in the initial operation, there was enough information on disk to complete the operation entirely.  Without the journal, fsck could only find a bit of the filesystem that was inconsistent; it still needed to employ heuristics to guess what the correct state should have been.</p><p>
The fsck tool isn't magic.  It knows a bit about what the filesystem is meant to look like, and tries to ensure that it really does look like that, but it doesn't always have enough information to get things right.</p></htmltext>
<tokenext>I 'm not sure how you think fsck or journaling work.. . With a tool like fsck , it starts at the root inode of a filesystem and then walks the tree , looking for various things that can be caused by writes happening in the wrong order .
For example , it does garbage collection so that inodes that have a reference count greater than 0 but which are not actually referenced in a directory entry are removed ( or moved to a folder where you can check if they are parts of a file that you did n't mean to delete ) .
It will also check that the amount of free space and the size of the disk minus the size of the files it can find are the same .
With journaling , this is simpler because you have a much smaller number of things that can go wrong .
With a journaling FS , you first write to disk that you are going to make a change , then you make the change , then you erase that bit of the journal .
If the power fails before you write to the journal , you lose the transaction .
If it fails after you write the journal , then you may be able to replay the transaction from the journal ( if it 's something simple ) .
If it fails after , then you look in the journal , see it 's already been done by checking the on-disk state , and delete the journal .
As a simple example , consider moving a file from one directory to another .
You need to add an entry to the target directory , then you need to remove it from the old one .
In the middle , the file will be referenced in two places but its reference count will still be one , so unlinking it in the old directory will delete it in both places and leave a dangling reference in the other .
If the power fails at this point , fsck will walk the directory tree , find two references to the same inode , and either delete one of them or increment the reference count of the file and report an error ( this behaviour is implementation dependent , your fsck may do something completely different , this is just an example ) .
With journaling , you first write something in the log saying which file you are moving .
Then you update the target directory , then you update the source directory , then you update the journal again to say that you 've done it .
Now this time when power goes out in the middle , fsck can look at the journal and immediately see the two directories that are in an inconsistent state .
First it will check the target directory , and if the file is n't referenced there then it will add a reference .
Then it will check the source directory and remove the entry there , if it exists .
Then it will delete the journal entry .
At every point in the initial operation , there was enough information on disk to complete the operation entirely .
Without the journal , fsck could only find a bit of the filesystem that was inconsistent ; it still needed to employ heuristics to guess what the correct state should have been .
The fsck tool is n't magic .
It knows a bit about what the filesystem is meant to look like , and tries to ensure that it really does look like that , but it does n't always have enough information to get things right .</tokentext>
<sentencetext>I'm not sure how you think fsck or journaling work...
With a tool like fsck, it starts at the root inode of a filesystem and then walks the tree, looking for various things that can be caused by writes happening in the wrong order.
For example, it does garbage collection so that inodes that have a reference count greater than 0 but which are not actually referenced in a directory entry are removed (or moved to a folder where you can check if they are parts of a file that you didn't mean to delete).
It will also check that the amount of free space and the size of the disk minus the size of the files it can find are the same.
With journaling, this is simpler because you have a much smaller number of things that can go wrong.
With a journaling FS, you first write to disk that you are going to make a change, then you make the change, then you erase that bit of the journal.
If the power fails before you write to the journal, you lose the transaction.
If it fails after you write the journal, then you may be able to replay the transaction from the journal (if it's something simple).
If it fails after, then you look in the journal, see it's already been done by checking the on-disk state, and delete the journal.
As a simple example, consider moving a file from one directory to another.
You need to add an entry to the target directory, then you need to remove it from the old one.
In the middle, the file will be referenced in two places but its reference count will still be one, so unlinking it in the old directory will delete it in both places and leave a dangling reference in the other.
If the power fails at this point, fsck will walk the directory tree, find two references to the same inode, and either delete one of them or increment the reference count of the file and report an error (this behaviour is implementation dependent, your fsck may do something completely different, this is just an example).
With journaling, you first write something in the log saying which file you are moving.
Then you update the target directory, then you update the source directory, then you update the journal again to say that you've done it.
Now this time when power goes out in the middle, fsck can look at the journal and immediately see the two directories that are in an inconsistent state.
First it will check the target directory, and if the file isn't referenced there then it will add a reference.
Then it will check the source directory and remove the entry there, if it exists.
Then it will delete the journal entry.
At every point in the initial operation, there was enough information on disk to complete the operation entirely.
Without the journal, fsck could only find a bit of the filesystem that was inconsistent; it still needed to employ heuristics to guess what the correct state should have been.
The fsck tool isn't magic.
It knows a bit about what the filesystem is meant to look like, and tries to ensure that it really does look like that, but it doesn't always have enough information to get things right.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770960</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484</id>
	<title>Well</title>
	<author>Anonymous</author>
	<datestamp>1263462660000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I'm sorry, you must have mistaken me for someone who gives a fuck.</p></htmltext>
<tokenext>I 'm sorry , you must have mistaken me for someone who gives a fuck .</tokentext>
<sentencetext>I'm sorry, you must have mistaken me for someone who gives a fuck.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30795510</id>
	<title>Re:Has Ted Cooked the Benchmarks Again?</title>
	<author>segedunum</author>
	<datestamp>1263656640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>So I'm not sure what you're talking about. If you're talking about delayed allocation, XFS has it too, and the same <b>buggy applications</b>...</p></div></blockquote><p>
Stop blaming the applications for a filesystem problem, Ted. The excuse doesn't wash no matter how many times you use it, and no, XFS does not have it.</p>
	</htmltext>
<tokenext>So I 'm not sure what you 're talking about .
If you 're talking about delayed allocation , XFS has it too , and the same buggy applications.. . Stop blaming the applications for a filesystem problem Ted .
The excuse does n't wash no matter how many times you use it , and no , XFS does not have it .</tokentext>
<sentencetext>So I'm not sure what you're talking about.
If you're talking about delayed allocation, XFS has it too, and the same buggy applications...
Stop blaming the applications for a filesystem problem Ted.
The excuse doesn't wash no matter how many times you use it, and no, XFS does not have it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773226</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773944</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>Anonymous</author>
	<datestamp>1263478920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p> Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.</p><p>Then again, perhaps Google knows what they're doing.</p></div><p>You're new to Ubuntu, right? This is the \_EXACT\_ thing Ubuntu is notorious for doing. The exact reason why anyone with a clue ISN'T using Ubuntu. They release often, they release early, and testing? Well, who cares?</p>
	</htmltext>
<tokenext>Scares me enough to not even try using ext4 just yet , and I 'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.Then again , perhaps Google knows what they 're doing.You 're new to Ubuntu , right ?
This is the \ _EXACT \ _ thing Ubuntu is notorious for doing .
The exact reason why anyone with a clue IS N'T using Ubuntu .
They release often , they release early and testing ?
Well , who cares ?</tokentext>
<sentencetext> Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.Then again, perhaps Google knows what they're doing.You're new to Ubuntu, right?
This is the \_EXACT\_ thing Ubuntu is notorious for doing.
The exact reason why anyone with a clue ISN'T using Ubuntu.
They release often, they release early and testing?
Well, who cares?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773226</id>
	<title>Re:Has Ted Cooked the Benchmarks Again?</title>
	<author>tytso</author>
	<datestamp>1263474660000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>So I'm not sure what you're talking about.  If you're talking about delayed allocation, XFS has it too, and the same buggy applications that don't use fsync() will also lose information after a buggy proprietary Nvidia video driver crashes your machine, regardless of whether you are using XFS or ext4.</p><p>If you are talking about the change to \_ext3\_ to use data=writeback, that was a change that Linus made, not me, and ext4 has always defaulted to data=ordered.  Linus thought that since the vast majority of Linux machines are single-user desktop machines, the performance hit of data=ordered, which is designed to prevent exposure of uninitialized data blocks after a crash, wasn't worth it.   I and other file system engineers disagreed, but Linus's kernel, Linus's rules.   I pushed a patch to ext3 which makes the default a config option, and as far as I know the enterprise distros plan to use this config option to keep the defaults the same as before for ext3.</p><p>Since it was my choice, I actually changed the defaults for ext4 to use barriers=1, which Andrew Morton vetoed for ext3 because again, he didn't think it was worth the performance hit.   But with ext4, the benefits of delayed allocation and extents are so vast that they completely dominate the performance hit of turning on write barriers.   That is what most of the performance benefits for ext4 come from, and it is very much a huge step forward compared to ext3.</p><p>So with respect, you don't know what you are talking about.</p><p>-- Ted</p></htmltext>
<tokenext>So I 'm not sure what you 're talking about .
If you 're talking about delayed allocation , XFS has it too , and the same buggy applications that do n't use fsync ( ) will also lose information after a buggy proprietary Nvidia video driver crashes your machine , regardless of whether you are using XFS or ext4.If you are talking about the change to \ _ext3 \ _ to use data = writeback , that was a change that Linus made , not me , and ext4 has always defaulted to data = ordered .
Linus thought that since the vast majority of Linux machines are single-user desktop machines , the performance hit of data = ordered , which is designed to prevent exposure of uninitialized data blocks after a crash was n't worth it .
I and other file system engineers disagreed , but Linus 's kernel , Linus 's rules .
I pushed a patch to ext3 which makes the default a config option , and as far as I know the enterprise distros plan to use this config option to keep the defaults the same as before for ext3.Since it was my choice , I actually changed the defaults for ext4 to use barriers = 1 .
which Andrew Morton vetoed for ext3 because again , he did n't think it was worth the performance hit .
But with ext4 , the benefits of delayed allocation and extents are so vast that it completely dominated the performance hit of turning on write barriers .
That is what most of the performance benefits for ext4 come from , and it is very much a huge step forward compared to ext3.So with respect , you do n't know what you are talking about.-- Ted</tokentext>
<sentencetext>So I'm not sure what you're talking about.
If you're talking about delayed allocation, XFS has it too, and the same buggy applications that don't use fsync() will also lose information after a buggy proprietary Nvidia video driver crashes your machine, regardless of whether you are using XFS or ext4.If you are talking about the change to \_ext3\_ to use data=writeback, that was a change that Linus made, not me, and ext4 has always defaulted to data=ordered.
Linus thought that since the vast majority of Linux machines are single-user desktop machines, the performance hit of data=ordered, which is designed to prevent exposure of uninitialized data blocks after a crash wasn't worth it.
I and other file system engineers disagreed, but Linus's kernel, Linus's rules.
I pushed a patch to ext3 which makes the default a config option, and as far as I know the enterprise distros plan to use this config option to keep the defaults the same as before for ext3.Since it was my choice, I actually changed the defaults for ext4 to use barriers=1.
which Andrew Morton vetoed for ext3 because again, he didn't think it was worth the performance hit.
But with ext4, the benefits of delayed allocation and extents are so vast that it completely dominated the performance hit of turning on write barriers.
That is what most of the performance benefits for ext4 come from, and it is very much a huge step forward compared to ext3.So with respect, you don't know what you are talking about.-- Ted</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771648</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772292</id>
	<title>Re:Time for a backup?</title>
	<author>ajs</author>
	<datestamp>1263470220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I guess now is as good as any to go through my Gmail and Google Docs and make local backups. I'm sure my info is safe, but I have been through these types of 'upgrades' at work before and every once in a while....well, let's just say backups are never a bad idea.</p></div><p>What makes you think that gmail or gdocs is going to be affected? Your data is almost certainly stored in a database. It's possible that that database is stored on a filesystem (as opposed to a raw device, which I won't be at all surprised to see), but even then you're talking about something that's far less discrete than a bunch of text files lying around on a filesystem.</p><p>What's actually kind of amusing is you've never known when or if they've updated that database and yet your life has continued along smoothly.</p>
	</htmltext>
<tokenext>I guess now is as good as any to go through my Gmail and Google Docs and make local backups .
I 'm sure my info is safe , but I have been through these types of 'upgrades ' at work before and every once in a while....well , let 's just say backups are never a bad idea.What makes you think that gmail or gdocs is going to be affected ?
Your data is almost certainly stored in a database .
It 's possible that that database is stored on a filesystem ( as opposed to a raw device , which I wo n't be at all surprised to see ) , but even then you 're talking about something that 's far less discreet than a bunch of text files lying around on a filesystem.What 's actually kind of amusing is you 've never known when or if they 've updated that database and yet your life has continued along smoothly .</tokentext>
<sentencetext>I guess now is as good as any to go through my Gmail and Google Docs and make local backups.
I'm sure my info is safe, but I have been through these types of 'upgrades' at work before and every once in a while....well, let's just say backups are never a bad idea.What makes you think that gmail or gdocs is going to be affected?
Your data is almost certainly stored in a database.
It's possible that that database is stored on a filesystem (as opposed to a raw device, which I won't be at all surprised to see), but even then you're talking about something that's far less discreet than a bunch of text files lying around on a filesystem.What's actually kind of amusing is you've never known when or if they've updated that database and yet your life has continued along smoothly.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772988</id>
	<title>Re:No ReiserFS?</title>
	<author>mqduck</author>
	<datestamp>1263473460000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>So it's not because the creator of the filesystem committed a crime, it's because the product has an unsavoury name</p></div><p>Actually, it's more likely because the creator and main developer of the filesystem is suddenly gone. As I understand it, he wasn't a very friendly guy (surprise!) and drove others away from the project.</p>
	</htmltext>
<tokenext>So it 's not because the creator of the filesystem committed a crime , it 's because the product has an unsavoury nameActually , it 's more likely because the creator and main developer of the filesystem is suddenly gone .
As I understand it , he was n't a very friendly guy ( surprise !
) and drove others away from the project .</tokentext>
<sentencetext>So it's not because the creator of the filesystem committed a crime, it's because the product has an unsavoury nameActually, it's more likely because the creator and main developer of the filesystem is suddenly gone.
As I understand it, he wasn't a very friendly guy (surprise!
) and drove others away from the project.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771122</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773016</id>
	<title>Re:Use of commas.</title>
	<author>Anonymous</author>
	<datestamp>1263473520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/Serial_comma" title="wikipedia.org">http://en.wikipedia.org/wiki/Serial_comma</a> [wikipedia.org]</p><p>While it's technically incorrect usage and shunned by many academics I've met, as a computer programmer it sits better with me to have each term in a list or array of objects accurately comma delimited. It seems stupid to me to rely on re-arranging a list because of the ambiguity an "and" term can create.</p></htmltext>
<tokenext>http : //en.wikipedia.org/wiki/Serial \ _comma [ wikipedia.org ] while technically incorrect usage and shunned by many academics I 've met , as a computer programmer it sits better with me to have each term in a list or array of objects accurately comma delimited .
It seems stupid to me to rely on re-arranging a list because of the ambiguity an and term can create .</tokentext>
<sentencetext>http://en.wikipedia.org/wiki/Serial\_comma [wikipedia.org]while technically incorrect usage and shunned by many academics I've met, as a computer programmer it sits better with me to have each term in a list or array of objects accurately comma delimited.
It seems stupid to me to rely on re-arranging a list because of the ambiguity an and term can create.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777782</id>
	<title>Re:No ReiserFS?</title>
	<author>Joey Vegetables</author>
	<datestamp>1263564540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Perhaps if they decide to make heavy use of <a href="http://en.wikipedia.org/wiki/Chroot_jail" title="wikipedia.org">these</a> [wikipedia.org], they might reconsider.</p><p>Seriously . . . you want something as important and heavily used as a filesystem to be as future-proof as possible, and there remains a serious question about who will maintain reiser4 going forward.  Ext4 is a stepping-stone to btrfs, which seems to have a bright future and incorporates many of the same ideas as reiserfs.</p></htmltext>
<tokenext>Perhaps if they decide to make heavy use of these [ wikipedia.org ] , they might reconsider.Seriously .
. .
you want something as important and heavily used as a filesystem to be as future-proof as possible , and there remains serious question about who will maintain reiser4 going forward .
Ext4 is a stepping-stone to btrfs , which seems to have a bright future , and incorporates many of the same ideas as reiserfs .</tokentext>
<sentencetext>Perhaps if they decide to make heavy use of these [wikipedia.org], they might reconsider.Seriously .
. .
you want something as important and heavily used as a filesystem to be as future-proof as possible, and there remains serious question about who will maintain reiser4 going forward.
Ext4 is a stepping-stone to btrfs, which seems to have a bright future, and incorporates many of the same ideas as reiserfs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771290</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263465480000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Dude, it's Google!</p><p>They have like 50 backups of their own logos, all of them.</p><p>Your bits are safe, trust me.</p></htmltext>
<tokenext>Dude , it 's Google ! They have like 50 backups of their own logos , all of them.Your bits are safe , trust me .</tokentext>
<sentencetext>Dude, it's Google!They have like 50 backups of their own logos, all of them.Your bits are safe, trust me.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772462</id>
	<title>Re:Time for a backup?</title>
	<author>mR.bRiGhTsId3</author>
	<datestamp>1263470880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't think google actually cares about data integrity at the machine level. They have built-in fault tolerance at higher levels of their stack like GFS.</htmltext>
<tokenext>I do n't think google actually cares about data integrity at the machine level .
They have built-in fault tolerance at higher levels of their stack like GFS .</tokentext>
<sentencetext>I don't think google actually cares about data integrity at the machine level.
They have built-in fault tolerance at higher levels of their stack like GFS.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771836</id>
	<title>Re:Time for a backup?</title>
	<author>lymond01</author>
	<datestamp>1263468000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><i>If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable.</i></p><p>It ain't the destination, it's the journey that worries me.</p></htmltext>
<tokenext>If Google is actually still using ext2 rather than ext3 , ext4 will be significantly * more * reliable.It ai n't the destination , it 's the journey that worries me .</tokentext>
<sentencetext>If Google is actually still using ext2 rather than ext3, ext4 will be significantly *more* reliable.It ain't the destination, it's the journey that worries me.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771632</id>
	<title>Downtime</title>
	<author>Joucifer</author>
	<datestamp>1263466860000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Is this why Google was down for about 30 minutes today?

Did anyone else even experience this or was it a local issue?</htmltext>
<tokenext>Is this why Google was down for about 30 minutes today ?
Did anyone else even experience this or was it a local issue ?</tokentext>
<sentencetext>Is this why Google was down for about 30 minutes today?
Did anyone else even experience this or was it a local issue?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771704</id>
	<title>Re:Use of commas.</title>
	<author>SomeJoel</author>
	<datestamp>1263467220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>That's a 15 karma penalty.  1st down.</p></div><p>A defensive penalty?! You've, got to be, joking.</p>
	</htmltext>
<tokenext>That 's a 15 karma penalty .
1st down.A defensive penalty ? !
You 've , got to be , joking .</tokentext>
<sentencetext>That's a 15 karma penalty.
1st down.A defensive penalty?!
You've, got to be, joking.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770604</id>
	<title>Ted Ts'o</title>
	<author>RPoet</author>
	<datestamp>1263463140000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>They have <a href="http://www.h-online.com/open/news/item/Ted-T-so-moves-to-Google-904219.html" title="h-online.com">Ted Ts'o</a> [h-online.com] of Linux filesystem fame working for them now.</p></htmltext>
<tokenext>They have Ted Ts'o [ h-online.com ] of Linux filesystem fame working for them now .</tokenext>
<sentencetext>They have Ted Ts'o [h-online.com] of Linux filesystem fame working for them now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770764</id>
	<title>Re:No ReiserFS?</title>
	<author>Anonymous</author>
	<datestamp>1263463560000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>...maybe they felt it wasn't cutting edge enough.</p></htmltext>
<tokenext>...maybe they felt it was n't cutting edge enough .</tokentext>
<sentencetext>...maybe they felt it wasn't cutting edge enough.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774952</id>
	<title>Re:No ReiserFS?</title>
	<author>Anonymous</author>
	<datestamp>1263486840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>just because the creator of the filesystem committed a crime</p></div></blockquote><p>The crime is murder.  I don't know why you try to trivialize it with "just because" and using the word "crime" instead of "murder."  Say what you want about the merits of the file system, but don't try to trivialize Hans Reiser's acts of extreme violence.</p><p>I'm sure it has the potential to be a great file system.  However, Mr. Reiser has something else to work on now.  He needs to work on his rehabilitation.  If, in his absence, someone else wants to pick up the slack and continue developing it, great; more power to them.</p><p>You should just leave him out of any argument and focus your praise for the file system on the merits of the file system alone, or you'll look like a mindless fan boy that doesn't have a problem with "crime."</p>
	</htmltext>
<tokenext>just because the creator of the filesystem committed a crimeThe crime is murder .
I do n't know why you try to trivialize it with " just because " and using the word " crime " instead of " murder .
" Say what you want about the merits of the file system but do n't try to trivialize Hans Reiser 's acts of extreme violence.I 'm sure it has the potential to be a great file system .
However , Mr. Reiser has something else to work on now .
He needs to work on his rehabilitation .
If , in his absence , someone else wants to pick up the slack and continue developing it great , more power to them.You should just leave him out of any argument and focus your praise for the file system on the merits of the file system alone or you 'll look like a mindless fan boy that does n't have a problem with " crime .
"</tokentext>
<sentencetext>just because the creator of the filesystem committed a crimeThe crime is murder.
I don't know why you try to trivialize it with "just because" and using the word "crime" instead of "murder.
"  Say what you want about the merits of the file system but don't try to trivialize Hans Reiser's acts of extreme violence.I'm sure it has the potential to be a great file system.
However, Mr. Reiser has something else to work on now.
He needs to work on his rehabilitation.
If, in his absence, someone else wants to pick up the slack and continue developing it great, more power to them.You should just leave him out of any argument and focus your praise for the file system on the merits of the file system alone or you'll look like a mindless fan boy that doesn't have a problem with "crime.
"
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770582</id>
	<title>Re:Slashdotted already ?</title>
	<author>spazdor</author>
	<datestamp>1263463080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Must be all that journalizing the webserver's gotta do.</p></htmltext>
<tokenext>Must be all that journalizing the webserver 's got ta do .</tokentext>
<sentencetext>Must be all that journalizing the webserver's gotta do.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770528</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770618</id>
	<title>Re:Slashdotted already ?</title>
	<author>Anonymous</author>
	<datestamp>1263463200000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>Phoronix has the story</p><p>http://www.phoronix.com/scan.php?page=news_item&amp;px=Nzg4MA</p></htmltext>
<tokenext>Phoronix has the storyhttp : //www.phoronix.com/scan.php ? page = news \ _item&amp;px = Nzg4MA</tokentext>
<sentencetext>Phoronix has the storyhttp://www.phoronix.com/scan.php?page=news\_item&amp;px=Nzg4MA</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770528</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774888</id>
	<title>Re:GFS</title>
	<author>PPH</author>
	<datestamp>1263486300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>GooFS?</p></htmltext>
<tokenext>GooFS ?</tokentext>
<sentencetext>GooFS?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770926</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771536</id>
	<title>Re:As impressively as each other?! WTF?!</title>
	<author>Anonymous</author>
	<datestamp>1263466500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.</p></div><p>Much better would be: Ext4 and XFS were similarly impressive in benchmark performance.</p>
	</htmltext>
<tokenext>In their benchmarking of EXT4 and XFS , EACH performed as impressively as THE OTHER.Much better would be : Ext4 and XFS were similarly impressive in benchmark performance .</tokentext>
<sentencetext>In their benchmarking of EXT4 and XFS, EACH performed as impressively as THE OTHER.Much better would be: Ext4 and XFS were similarly impressive in benchmark performance.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771112</id>
	<title>Re:Time for a backup?</title>
	<author>Anonymous</author>
	<datestamp>1263464880000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Jeez, calm down junior! No need to open a can of fanboi on me....</htmltext>
<tokenext>Jeez , calm down junior !
No need to open a can of fanboi on me... .</tokentext>
<sentencetext>Jeez, calm down junior!
No need to open a can of fanboi on me....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776636</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>Anonymous</author>
	<datestamp>1263551100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A quick skim shows that you've asked the following questions.</p><p>- ... hint as to why Google would not need to fsck their data ...<br>- Is it cheaper just to overwrite the data from some backup elsewhere at this scale?<br>- How do they know the backup is clean without fscking that?<br>- (anyone with "experience with big database stuff")</p><p>Did you get the answer you want?</p></htmltext>
<tokenext>A quick skim shows that you 've asked the following questions.- ... hint as to why Google would not need to fsck their data ...- Is it cheaper just to overwrite the data from some backup elsewhere at this scale ? - How do they know the backup is clean without fscking that ? - ( anyone with " experience with big database stuff " ) Did you get the answer you want ?</tokentext>
<sentencetext>A quick skim shows that you've asked the following questions.- ... hint as to why Google would not need to fsck their data ...- Is it cheaper just to overwrite the data from some backup elsewhere at this scale?- How do they know the backup is clean without fscking that?- (anyone with "experience with big database stuff")Did you get the answer you want?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773146</id>
	<title>Re:As impressively as each other?! WTF?!</title>
	<author>Anonymous</author>
	<datestamp>1263474120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yea, yea, the writing is juvenile; throw that fact in the corner next to "obsession with big breasts" and we can move on.</p></htmltext>
<tokenext>Yea , yea the writing is juvenile , throw that fact in the corner next to " obsession with big breasts " and we can move on .</tokentext>
<sentencetext>Yea, yea the writing is juvenile, throw that fact in the corner next to "obsession with big breasts" and we can move on.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772610</id>
	<title>Re:No ReiserFS?</title>
	<author>pHus10n</author>
	<datestamp>1263471600000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>I thought ReiserFS would be the "killer app" for Google...</htmltext>
<tokenext>I thought ReiserFS would be the " killer app " for Google.. .</tokentext>
<sentencetext>I thought ReiserFS would be the "killer app" for Google...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188</id>
	<title>Re:Well</title>
	<author>Jaysyn</author>
	<datestamp>1263465180000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I seriously wish that /. would limit AC posts to something like 2 a month per account / IP.  That would seriously improve the S/N ratio here.</p></htmltext>
<tokenext>I seriously wish that / .
would limit AC posts to something like 2 a month per account / IP .
That would seriously reduce the S/N ratio here .</tokentext>
<sentencetext>I seriously wish that /.
would limit AC posts to something like 2 a month per account / IP.
That would seriously reduce the S/N ratio here.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771584</id>
	<title>Re:Time for a backup?</title>
	<author>gmuslera</author>
	<datestamp>1263466680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Data integrity (and replication) is managed in a layer over the fs, so the journaling could be an unneeded hit to the performance. Probably that's why they didn't upgrade to ext3 a long while ago.</htmltext>
<tokenext>Data integrity ( and replication ) is managed in a layer over the fs , so the journaling could be an unneeded hit to the performance .
Probably that 's why they did n't upgrade to ext3 a long while ago .</tokenext>
<sentencetext>Data integrity (and replication) is managed in a layer over the fs, so the journaling could be an unneeded hit to the performance.
Probably that's why they didn't upgrade to ext3 a long while ago.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771384</id>
	<title>Re:No ReiserFS?</title>
	<author>KlomDark</author>
	<datestamp>1263465840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>// Came here for the Reiser reference<br>//// Not leaving disappointed!<br>////// Oops, this aint Fark...</p></htmltext>
<tokenext>// Came here for the Reiser reference //// Not leaving disappointed !
////// Oops , this aint Fark.. .</tokentext>
<sentencetext>// Came here for the Reiser reference //// Not leaving disappointed!
////// Oops, this aint Fark...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770824</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>42forty-two42</author>
	<datestamp>1263463740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>First, Google's servers each have <a href="http://news.cnet.com/8301-1001_3-10209580-92.html" title="cnet.com">their own battery</a> [cnet.com], so it's unlikely that all the servers in a DC will go down at once. If only a few go down, their redundancy means that it's not a big deal - they can wait for the fsck. Moreover, even if an entire DC goes down (e.g., due to cooling loss), they have the redundancy needed to deal with entire datacenter failures - with that kind of redundancy, fscking is only a minor inconvenience (plus with a cooling failure they might have time to sync and umount before poweroff...)</htmltext>
<tokenext>First , google 's servers each have their own battery [ cnet.com ] , so it 's unlikely that all the servers in a DC will go down at once .
If only a few go down , their redundancy means that it 's not a big deal - they can wait for the fsck .
And moreover , even if an entire DC goes down ( eg , due to cooling loss ) they have the redundancy needed to deal with entire datacenter failures - with that kind of redundancy , fscking is only a minor inconvenience ( plus with a cooling failure they might have time to sync and umount before poweroff... )</tokentext>
<sentencetext>First, google's servers each have their own battery [cnet.com], so it's unlikely that all the servers in a DC will go down at once.
If only a few go down, their redundancy means that it's not a big deal - they can wait for the fsck.
And moreover, even if an entire DC goes down (eg, due to cooling loss) they have the redundancy needed to deal with entire datacenter failures - with that kind of redundancy, fscking is only a minor inconvenience (plus with a cooling failure they might have time to sync and umount before poweroff...)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771140</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>philipmather</author>
	<datestamp>1263464940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>From all the articles I've seen regarding the various Google products' modus operandi, the "consumer grade" disks are used in only two cases, either...</p><p>1) To store the data closer to processing elements that need it (BigTable? or MapReduce), in which case failure of a critical disk could be treated as a failed PE and the job re-queued on a different PE or set of PEs (i.e. non-latency sensitive work) ...or...</p><p>2) In massive redundancy (GFS) for systems where "redoing" part of the job isn't practical, relevant or applicable. ...you can make a reasonable bet that any "output" data sets such as BI/MIS reports or "end product" data sets are then shipped off and made available to whatever audience needs them via something a little more conventional like a SAN. Single, whole "input" data-sets are probably treated the same: either kept on something boring and normal like a SAN, or re-scanned/built/trawled if a subsection is lost from a set that is formed from a composite.</p><p>Check out these...</p><p><a href="http://www.25hoursaday.com/weblog/CommentView.aspx?guid=7D244266-E3AB-4636-985D-BEE5C0BFC485" title="25hoursaday.com" rel="nofollow">http://www.25hoursaday.com/weblog/CommentView.aspx?guid=7D244266-E3AB-4636-985D-BEE5C0BFC485</a> [25hoursaday.com]<br><a href="http://labs.google.com/papers/bigtable.html" title="google.com" rel="nofollow">http://labs.google.com/papers/bigtable.html</a> [google.com]</p></htmltext>
<tokenext>From all the articles I 've seen regarding the various Google product 's modus operandi the " consumer grade " disks are used in only two cases , either...1 ) To store the data closer to processing elements that needs it ( BigTable ?
or MapReduce ) , in which case failure of a critical disk could be treaeted as a failed PE and the job re-queued on a different PE or set of PEs ( i.e .
non-latency sensitive work ) ...or...2 ) In massive redundancy ( GFS ) for systems where " redoing " part of the job either is n't practical , relevant or applicable .
...you can make a reasonable bet that any " output " data sets such as BI/MIS reports or " end product " data sets are then shipped off and made available to whatever audience needs it via something a little more conventional like a SAN .
Single , whole " input " data-sets are probably treated the same either kept on something boring and normal like a SAN or re-scanned/built/trawled if a subsection is lost from a set that is formed from a composite.Check out these...http : //www.25hoursaday.com/weblog/CommentView.aspx ? guid = 7D244266-E3AB-4636-985D-BEE5C0BFC485 [ 25hoursaday.com ] http : //labs.google.com/papers/bigtable.html [ google.com ]</tokentext>
<sentencetext>From all the articles I've seen regarding the various Google product's modus operandi the "consumer grade" disks are used in only two cases, either...1) To store the data closer to processing elements that needs it (BigTable?
or MapReduce), in which case failure of a critical disk could be treaeted as a failed PE and the job re-queued on a different PE or set of PEs (i.e.
non-latency sensitive work) ...or...2) In massive redundancy (GFS) for systems where "redoing" part of the job either isn't practical, relevant or applicable.
...you can make a reasonable bet that any "output" data sets such as BI/MIS reports or "end product" data sets are then shipped off and made available to whatever audience needs it via something a little more conventional like a SAN.
Single, whole "input" data-sets are probably treated the same either kept on something boring and normal like a SAN or re-scanned/built/trawled if a subsection is lost from a set that is formed from a composite.Check out these...http://www.25hoursaday.com/weblog/CommentView.aspx?guid=7D244266-E3AB-4636-985D-BEE5C0BFC485 [25hoursaday.com]http://labs.google.com/papers/bigtable.html [google.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838</id>
	<title>Re:Btrfs?</title>
	<author>Anonymous</author>
	<datestamp>1263463860000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>From kernel.org's <a href="http://btrfs.wiki.kernel.org/index.php/Main_Page" title="kernel.org">BTRFS page</a> [kernel.org]:</p><div class="quote"><p><b>Btrfs is under heavy development, and is not suitable for any uses other than benchmarking and review.</b>  The Btrfs disk format is not yet finalized, but it will only be changed if a critical bug is found and no workarounds are possible.</p></div><p>It's ready for benchmarking, it's just not ready for widespread use yet.  If Google was looking for a filesystem to make a switch to in the near future, BTRFS simply isn't an option quite yet.</p><p>It's really easy at this point to move from EXT2 to EXT4 (I believe you can simply remount the partition as the new filesystem, maybe change a flag or two, and away you go).  It's basically free performance.  If Google is convinced it's stable, there isn't much reason not to do this.  It could act as an interim filesystem until something significantly better - such as BTRFS - gets to the point where it's dependable.  The fact BTRFS was not mentioned here doesn't mean it's completely ruled out.</p>
	</htmltext>
<tokenext>From kernel.org 's BTRFS page [ kernel.org ] : Btrfs is under heavy development , and is not suitable for any uses other than benchmarking and review .
The Btrfs disk format is not yet finalized , but it will only be changed if a critical bug is found and no workarounds are possible.It 's ready for benchmarking , it 's just not ready for widespread use yet .
If Google was looking for a filesystem to make a switch to in the near future , BTRFS simply is n't an option quite yet .
It 's really easy at this point to move from EXT2 to EXT4 ( I believe you can simply remount the partition as the new filesystem , maybe change a flag or two , and away you go ) .
It 's basically free performance .
If Google is convinced it 's stable , there is n't much reason not to do this .
It could act as an interim filesystem until something significantly better - such as BTRFS - gets to the point where it 's dependable .
The fact BTRFS was not mentioned here does n't mean it 's completely ruled out .</tokentext>
<sentencetext>From kernel.org's BTRFS page [kernel.org]: Btrfs is under heavy development, and is not suitable for any uses other than benchmarking and review.
The Btrfs disk format is not yet finalized, but it will only be changed if a critical bug is found and no workarounds are possible.
It's ready for benchmarking, it's just not ready for widespread use yet.
If Google was looking for a filesystem to make a switch to in the near future, BTRFS simply isn't an option quite yet.
It's really easy at this point to move from EXT2 to EXT4  (I believe you can simply remount the partition as the new filesystem, maybe change a flag or two, and away you go).
It's basically free performance.
If Google is convinced it's stable, there isn't much reason not to do this.
It could act as an interim filesystem until something significantly better - such as BTRFS - gets to the point where it's dependable.
The fact BTRFS was not mentioned here doesn't mean it's completely ruled out.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770960</id>
	<title>Re:Google doesn't need journaling?</title>
	<author>ls671</author>
	<datestamp>1263464220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I always felt that fscking the data while taking data that is already on the disk (the journal) into account was weaker than fscking the data independently (no journal). Or at least that it would bring more possibilities of errors (e.g. errors in the journal itself). It may very well be an unjustified impression that I have, but at least it seems logical at first glance: a simpler file system means less risk of bugs, etc.</p><p><a href="http://slashdot.org/comments.pl?sid=1511104&amp;cid=30770742" title="slashdot.org">http://slashdot.org/comments.pl?sid=1511104&amp;cid=30770742</a> [slashdot.org]</p></htmltext>
<tokenext>I always felt that fscking the data taking data that is already on the disk ( the journal ) into account was weaker than fscking the data independently ( no journal ) .
Or at least that it would bring more possibilities of errors ( e.g .
errors in the journal itself ) .
It may very well be an unjustified impression that I have but at least it seems logical at first glance ; A simpler file system means less risk of bugs , etc.http : //slashdot.org/comments.pl ? sid = 1511104&amp;cid = 30770742 [ slashdot.org ]</tokentext>
<sentencetext>I always felt that fscking the data taking data that is already on the disk (the journal) into account was weaker than fscking the data independently (no journal).
Or at least that it would bring more possibilities of errors (e.g. errors in the journal itself).
It may very well be an unjustified impression that I have but at least it seems logical at first glance; a simpler file system means less risk of bugs, etc. http://slashdot.org/comments.pl?sid=1511104&amp;cid=30770742 [slashdot.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772716</id>
	<title>Re:Use of commas.</title>
	<author>Anonymous</author>
	<datestamp>1263472140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There should be a comma after every item in the list - otherwise you don't know if the last entry is one item or two.</p><p>The car comes in several colours, white, red, green, black, and white.<br>The car comes in several colours, white, red, green, black and white.</p><p>So, did you want to say the car was only available in two-tone black and white, or that it was available in both black and in white?</p></htmltext>
<tokenext>There should be a comma after every item in the list - otherwise you do n't know if the last entry is one item or two.The car comes in several colours , white , red , green , black , and white.The car comes in several colours , white , red , green , black and white.So , did you want to say the car was only available in two-tone black and white , or that it was available in both black and in white ?</tokentext>
<sentencetext>There should be a comma after every item in the list - otherwise you don't know if the last entry is one item or two.
The car comes in several colours, white, red, green, black, and white.
The car comes in several colours, white, red, green, black and white.
So, did you want to say the car was only available in two-tone black and white, or that it was available in both black and in white?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</id>
	<title>Ubuntu 9.10?</title>
	<author>GF678</author>
	<datestamp>1263465360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Gee, I hope they're not using Ubuntu 9.10 by any chance: <a href="http://www.ubuntu.com/getubuntu/releasenotes/910" title="ubuntu.com">http://www.ubuntu.com/getubuntu/releasenotes/910</a> [ubuntu.com] </p><blockquote><div><p>There have been some reports of data corruption with fresh (not upgraded) ext4 file systems using the Ubuntu 9.10 kernel when writing to large files (over 512MB). The issue is under investigation, and if confirmed will be resolved in a post-release update. Users who routinely manipulate large files may want to consider using ext3 file systems until this issue is resolved. (453579)</p></div></blockquote><p>The damn bug is STILL not fixed apparently. Some people get the corruption, and some don't. Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.</p><p>Then again, perhaps Google knows what they're doing.</p>
	</htmltext>
<tokenext>Gee , I hope they 're not using Ubuntu 9.10 by any chance : http : //www.ubuntu.com/getubuntu/releasenotes/910 [ ubuntu.com ] There have been some reports of data corruption with fresh ( not upgraded ) ext4 file systems using the Ubuntu 9.10 kernel when writing to large files ( over 512MB ) .
The issue is under investigation , and if confirmed will be resolved in a post-release update .
Users who routinely manipulate large files may want to consider using ext3 file systems until this issue is resolved .
( 453579 ) The damn bug is STILL not fixed apparently .
Some people get the corruption , and some do n't .
Scares me enough to not even try using ext4 just yet , and I 'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.Then again , perhaps Google knows what they 're doing .</tokentext>
<sentencetext>Gee, I hope they're not using Ubuntu 9.10 by any chance: http://www.ubuntu.com/getubuntu/releasenotes/910 [ubuntu.com] There have been some reports of data corruption with fresh (not upgraded) ext4 file systems using the Ubuntu 9.10 kernel when writing to large files (over 512MB).
The issue is under investigation, and if confirmed will be resolved in a post-release update.
Users who routinely manipulate large files may want to consider using ext3 file systems until this issue is resolved.
(453579) The damn bug is STILL not fixed apparently.
Some people get the corruption, and some don't.
Scares me enough to not even try using ext4 just yet, and I'm still surprised Canonical was stupid enough to have ext4 as the default filesystem in Karmic.
Then again, perhaps Google knows what they're doing.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814</id>
	<title>Re:Use of commas.</title>
	<author>Em Emalb</author>
	<datestamp>1263463740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why do I put a comma before the and in a list?</p><p>I would say "I have a cat, a dog, and two goats."</p><p>But you would say "I have a cat, a dog and two goats."  (Then you'd bugger the goats, but that's how you roll.)</p><p>The English language is so damned weird...but AC is right, illegal use of commas.  That's a 15 karma penalty.  1st down.</p></htmltext>
<tokenext>Why do I put a comma before the and in a list ? I would say " I have a cat , a dog , and two goats .
" But you would say " I have a cat , a dog and two goats .
" ( Then you 'd bugger the goats , but that 's how you roll .
) The English language is so damned weird...but AC is right , illegal use of commas .
That 's a 15 karma penalty .
1st down .</tokentext>
<sentencetext>Why do I put a comma before the and in a list?
I would say "I have a cat, a dog, and two goats."
But you would say "I have a cat, a dog and two goats." (Then you'd bugger the goats, but that's how you roll.)
The English language is so damned weird...but AC is right, illegal use of commas.
That's a 15 karma penalty.
1st down.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</id>
	<title>No ReiserFS?</title>
	<author>Anonymous</author>
	<datestamp>1263463200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>It's interesting that ReiserFS wasn't even an option here. I myself even ended up using Ext4 when I set up a new box not too long ago. It's a real shame that just because the creator of the filesystem committed a crime, people are drawn to treat the technology itself as somehow dishonored.</htmltext>
<tokenext>It 's interesting that ReiserFS was n't even an option here .
I myself even ended up using Ext4 when I set up a new box not too long ago .
It 's a real shame that just because the creator of the filesystem committed a crime , people are drawn to treat the technology itself as somehow dishonored .</tokentext>
<sentencetext>It's interesting that ReiserFS wasn't even an option here.
I myself even ended up using Ext4 when I set up a new box not too long ago.
It's a real shame that just because the creator of the filesystem committed a crime, people are drawn to treat the technology itself as somehow dishonored.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775654</id>
	<title>Re:No ReiserFS?</title>
	<author>anomaly65</author>
	<datestamp>1263494580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>reiserfs, while good, only runs in a chroot'ed jailed file system ;-)</p></htmltext>
<tokenext>reiserfs while good only runs in a chroot'ed jailed file system ; - )</tokentext>
<sentencetext>reiserfs while good only runs in a chroot'ed jailed file system ;-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770926</id>
	<title>GFS</title>
	<author>jonpublic</author>
	<datestamp>1263464160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I thought Google had their own file system, named the Google File System.</p><p><a href="http://labs.google.com/papers/gfs.html" title="google.com">http://labs.google.com/papers/gfs.html</a> [google.com]</p></htmltext>
<tokenext>I thought google had their own file system named the google files system.http : //labs.google.com/papers/gfs.html [ google.com ]</tokentext>
<sentencetext>I thought google had their own file system named the google files system. http://labs.google.com/papers/gfs.html [google.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616</id>
	<title>Btrfs?</title>
	<author>Wonko the Sane</author>
	<datestamp>1263463200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I guess they didn't consider btrfs ready enough for benchmarking yet.</p></htmltext>
<tokenext>I guess they did n't consider btrfs ready enough for benchmarking yet .</tokentext>
<sentencetext>I guess they didn't consider btrfs ready enough for benchmarking yet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776334</id>
	<title>Re:Ubuntu 9.10?</title>
	<author>inKubus</author>
	<datestamp>1263547440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's why people don't use Ubuntu or even Debian for important servers.  I've got a Fedora Core 4 box that hasn't been rebooted since 2006 with quite a heavy load of web sites.  In production I'm using CentOS 5.4, which is just fine with kernel 2.6.18.  EXT4, pft.  Google has plenty of money; they should use ramfs and add more RAM and more boxes.  Why even mess with disks for a search index?  It's like the definition of volatile data.</p></htmltext>
<tokenext>That 's why people do n't use Ubuntu or even Debian for important servers .
I 've got a Fedora Core 4 box that has n't been rebooted since 2006 with quite a heavy load of web sites .
In production I 'm using CentOS 5.4 which is just fine with kernel 2.6.18 .
EXT4 , pft .
Google has plenty of money , they should use ramfs and add more ram and more boxes .
Why even mess with disks for a search index ?
It 's like the definition of volatile data .</tokentext>
<sentencetext>That's why people don't use Ubuntu or even Debian for important servers.
I've got a Fedora Core 4 box that hasn't been rebooted since 2006 with quite a heavy load of web sites.
In production I'm using CentOS 5.4 which is just fine with kernel 2.6.18.
EXT4, pft.
Google has plenty of money, they should use ramfs and add more ram and more boxes.
Why even mess with disks for a search index?
It's like the definition of volatile data.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780702
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771290
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_76</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772462
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772610
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776636
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771836
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775922
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771584
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770868
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772716
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772006
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776334
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777340
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_75</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774620
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773538
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780942
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772584
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_74</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771878
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770954
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771118
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772322
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771460
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771748
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771296
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30791054
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773226
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30795510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774594
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771704
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776030
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771384
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_73</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770528
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770582
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771014
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777292
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771708
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775654
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770660
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771292
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772292
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777678
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771884
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773226
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777126
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_78</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770528
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771400
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770828
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770926
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771732
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774952
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770756
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771746
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772346
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772392
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771742
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30779580
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_77</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780772
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772262
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770886
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_72</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770824
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771198
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773712
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773146
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771674
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773944
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771536
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777782
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770926
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774888
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771140
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770764
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_71</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770648
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774532
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776670
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770942
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774438
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773016
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_70</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774766
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771112
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774276
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770942
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774204
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30783214
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771122
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772988
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_14_2027255_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770844
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770604
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770696
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770616
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780772
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770954
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772392
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770838
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771748
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777678
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773618
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770714
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775922
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773060
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773146
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771536
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771674
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770572
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770814
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774276
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771198
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773016
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771704
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772716
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771746
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777340
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770528
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770582
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770618
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771252
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772584
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30779580
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773712
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771878
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773944
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771884
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775742
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772192
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776334
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770926
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771918
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774888
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771038
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771118
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772322
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30778442
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770634
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776670
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776636
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772346
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774766
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771732
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770824
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771460
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770960
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773538
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774594
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771140
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770484
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771292
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771188
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772262
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771708
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772006
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771648
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30773226
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30795510
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777126
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770502
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770640
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771836
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771584
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780702
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771112
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30776030
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775002
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772462
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771014
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772292
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771290
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770648
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774532
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770828
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771400
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770942
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774204
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30783214
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774438
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770886
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770660
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770592
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770626
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772610
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30780942
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774952
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30775654
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770756
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770764
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777782
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770844
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771742
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770868
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771384
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30774620
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771122
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30772988
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771632
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_14_2027255.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30770948
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30771296
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30791054
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_14_2027255.30777292
</commentlist>
</conversation>
