<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_13_1734220</id>
	<title>Building a 10 TB Array For Around $1,000</title>
	<author>ScuttleMonkey</author>
	<datestamp>1247511780000</datestamp>
	<htmltext>As storage hardware costs continue to plummet, the folks over at Tom's Hardware have decided to throw together their version of the "<a href="http://www.tomshardware.com/reviews/10tb-hdd-raid,2344.html">&#220;ber RAID Array</a>."  While the array still doesn't stack up against SSDs for access time, a large array is capable of higher throughput via striping.  Unfortunately, the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment.   <i>"Most people probably don't want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don't consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you'd get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble 12 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca's ARC-1680iX-20."</i></htmltext>
<tokentext>As storage hardware costs continue to plummet , the folks over at Tom 's Hardware have decided to throw together their version of the " Über RAID Array .
" While the array still does n't stack up against SSDs for access time , a large array is capable of higher throughput via striping .
Unfortunately , the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment .
" Most people probably do n't want to install more than a few hard drives into their PC , as it requires a massive case with sufficient ventilation as well as a solid power supply .
We do n't consider this project to be something enthusiasts should necessarily reproduce .
Instead , we set out to analyze what level of storage performance you 'd get if you were to spend the same money as on an enthusiast processor , such as a $ 1,000 Core i7-975 Extreme .
For the same cost , you could assemble 12 1 TB Samsung Spinpoint F1 hard drives .
Of course , you still need a suitable multi-port controller , which is why we selected Areca 's ARC-1680iX-20 .
"</tokentext>
<sentencetext>As storage hardware costs continue to plummet, the folks over at Tom's Hardware have decided to throw together their version of the "Über RAID Array.
"  While the array still doesn't stack up against SSDs for access time, a large array is capable of higher throughput via striping.
Unfortunately, the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment.
"Most people probably don't want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply.
We don't consider this project to be something enthusiasts should necessarily reproduce.
Instead, we set out to analyze what level of storage performance you'd get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme.
For the same cost, you could assemble 12 1 TB Samsung Spinpoint F1 hard drives.
Of course, you still need a suitable multi-port controller, which is why we selected Areca's ARC-1680iX-20.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684697</id>
	<title>Re:Redundant Array of INEXPENSIVE Disks</title>
	<author>Anonymous</author>
	<datestamp>1247490840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm after spindle count as opposed to space. My total data foot-print is about 40G of useful data, and another 40 in what I call 'installers' (hard to find software and custom OS iso's).</p><p>HP P400 8-port SAS card $150 or less from ebay<br>Dell Perc5 or 6, i or e $200 from ebay<br>dumb LSI 8-port SAS card $30 from ebay<br>2Gb FC cards $15 from ebay<br>16 port FC switch $100 from ebay</p><p>6 drive ATX cases $35 from Microcenter<br>Coolermaster 4-in-3 $25 from NewEgg<br>250GB raid family drives $.1/GB from ebay</p><p>I put 6+4+4=14 250gb drives into an E-ATX case with the MB, 32GB of ram, and 2x Quad Xeon's and FC/Raid adapters.</p><p>Then the $40 case houses 6 native + 3 cooler-master =18 250gb drives and is connected to the Perc6E via SAS ML cables. So not including the cost of the Xeon's, ram, and special MB (re-used from previous project) for the cost of the pathetic semi-pro-sumer 'NAS's out there, I have massive I/O capability.</p></htmltext>
<tokentext>I 'm after spindle count as opposed to space .
My total data foot-print is about 40G of useful data , and another 40 in what I call 'installers ' ( hard to find software and custom OS iso 's ) .HP P400 8-port SAS card $ 150 or less from ebayDell Perc5 or 6 , i or e $ 200 from ebaydumb LSI 8-port SAS card $ 30 from ebay2Gb FC cards $ 15 from ebay16 port FC switch $ 100 from ebay6 drive ATX cases $ 35 from MicrocenterCoolermaster 4-in-3 $ 25 from NewEgg250GB raid family drives $ .1/GB from ebayI put 6 + 4 + 4 = 14 250gb drives into an E-ATX case with the MB , 32GB of ram , and 2x Quad Xeon 's and FC/Raid adapters.Then the $ 40 case houses 6 native + 3 cooler-master = 18 250gb drives and is connected to the Perc6E via SAS ML cables .
So not including the cost of the Xeon 's , ram , and special MB ( re-used from previous project ) for the cost of the pathetic semi-pro-sumer 'NAS 's out there , I have massive I/O capability .</tokentext>
<sentencetext>I'm after spindle count as opposed to space.
My total data foot-print is about 40G of useful data, and another 40 in what I call 'installers' (hard to find software and custom OS iso's).HP P400 8-port SAS card $150 or less from ebayDell Perc5 or 6, i or e $200 from ebaydumb LSI 8-port SAS card $30 from ebay2Gb FC cards $15 from ebay16 port FC switch $100 from ebay6 drive ATX cases $35 from MicrocenterCoolermaster 4-in-3 $25 from NewEgg250GB raid family drives $.1/GB from ebayI put 6+4+4=14 250gb drives into an E-ATX case with the MB, 32GB of ram, and 2x Quad Xeon's and FC/Raid adapters.Then the $40 case houses 6 native + 3 cooler-master =18 250gb drives and is connected to the Perc6E via SAS ML cables.
So not including the cost of the Xeon's, ram, and special MB (re-used from previous project) for the cost of the pathetic semi-pro-sumer 'NAS's out there, I have massive I/O capability.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28689439</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Phoghat</author>
	<datestamp>1247579280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>"And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.",</i> <p>
Please god shoot the MF who dreamed them up.</p></htmltext>
<tokentext>" And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links , god I love those with all my heart .
" , Please god shoot the MF who dreamed them up .</tokentext>
<sentencetext>"And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.
", 
Please god shoot the MF who dreamed them up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688965</id>
	<title>Re:Redundant Array of INEXPENSIVE Disks</title>
	<author>noc007</author>
	<datestamp>1247576640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I would recommend using ZFS for your software RAID. It's a completely different approach to file systems, but it's great. I even created a RAID5 setup within ZFS, wiped the OS that was on another disk, reinstalled the OS (don't ask), and it only took two simple commands to have my ZFS RAID online and running.</p></htmltext>
<tokentext>I would recommend using ZFS for your software RAID .
It 's a completely different approach to file systems , but it 's great .
I even created a RAID5 setup within ZFS , wiped the OS that was on another disk , reinstalled the OS ( do n't ask ) , and it only took two simple commands to have my ZFS RAID online and running .</tokentext>
<sentencetext>I would recommend using ZFS for your software RAID.
It's a completely different approach to file systems, but it's great.
I even created a RAID5 setup within ZFS, wiped the OS that was on another disk, reinstalled the OS (don't ask), and it only took two simple commands to have my ZFS RAID online and running.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
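For readers unfamiliar with ZFS, the workflow this commenter describes maps onto a handful of `zpool` commands. The pool name and FreeBSD-style device names below are hypothetical, and raidz is ZFS's single-parity, RAID-5-like layout; this is a sketch of the kind of commands involved, not the commenter's exact invocation:

```shell
# Hypothetical pool name (tank) and device names (da1..da4).
# Create a raidz pool -- single parity, roughly analogous to RAID 5:
zpool create tank raidz da1 da2 da3 da4

# After wiping and reinstalling the OS on a separate disk, the pool on the
# surviving data disks can usually be recovered in two steps: list what is
# importable, then import it by name.
zpool import
zpool import tank
```

These commands require root and real (or file-backed) vdevs, which is consistent with the comment's claim that recovery after an OS reinstall took only a couple of commands.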
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684663</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>spire3661</author>
	<datestamp>1247490480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And how do you back it up?</htmltext>
<tokentext>And how do you back it up ?</tokentext>
<sentencetext>And how do you back it up?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28757875</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1248112380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p><div class="quote"><p>One: The title is a borderline lie.  Yes, you can buy 12x 1TB drives for about a grand.  But if I'm going to build an array and benchmark it and constantly compare it to buying a Core i7-975 Extreme, the drives alone don't do me any good!  (And I love how you continually reiterate with statements like "The Idea: Massive Hard Drive Storage Within a $1,000 Budget")</p><p>Two: Said controller does not exist.  They listed the controller as ARC-1680ix-<b>20</b>.  Areca <a href="http://www.areca.com.tw/products/pcietosas1680series.htm" title="areca.com.tw" rel="nofollow">makes no such controller</a> [areca.com.tw].  They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.</p><p>Three: Said controller is going to easily run you <a href="http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&amp;DEPA=0&amp;Order=BESTMATCH&amp;Description=ARC-1680ix&amp;x=0&amp;y=0" title="newegg.com" rel="nofollow">another grand</a> [newegg.com].  And I'm certain most controllers that accomplish what you're asking are pretty damned expensive and they will have a bigger impact than the drives on your results.</p><p>Four: You don't compare this hardware setup with any other setup.  Build the "Uber RAID Array" you claim.  Uber compared to what, precisely?  How does a cheap <a href="http://www.amazon.com/gp/product/B000NX0Y8C" title="amazon.com" rel="nofollow">Adaptec compare</a> [amazon.com]?  Are you sure there's not a better controller for less money?</p><p>All you showed was that we increase our throughput and reduce our access times with RAID 0 &amp; 5 compared to a single drive.  So?  Isn't that what's supposed to happen?  Oh, and you split it across seven pages like Tom's Hardware loves to do.  And I can't click print to read the article uninterrupted anymore without logging in.  
And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.</p><p>So feel free to correct me but we are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper.  Did I miss anything?</p></div><p>Add to that, if my memory doesn't fail me, they published that RAID 5 wasn't safe to run with 1TB disks... crap.</p><p>For the ads... I see no ads. The old "Proxomitron" takes care of everything</p></p>
	</htmltext>
<tokentext>One : The title is a borderline lie .
Yes , you can buy 12x 1TB drives for about a grand .
But if I 'm going to build an array and benchmark it and constantly compare it to buying a Core i7-975 Extreme , the drives alone do n't do me any good !
( And I love how you continually reiterate with statements like " The Idea : Massive Hard Drive Storage Within a $ 1,000 Budget " ) Two : Said controller does not exist .
They listed the controller as ARC-1680ix-20 .
Areca makes no such controller [ areca.com.tw ] .
They make an 8 , 12 , 16 , 24 but no 20 unless they 've got some advanced product unlisted anywhere.Three : Said controller is going to easily run you another grand [ newegg.com ] .
And I 'm certain most controllers that accomplish what you 're asking are pretty damned expensive and they will have a bigger impact than the drives on your results.Four : You do n't compare this hardware setup with any other setup .
Build the " Uber RAID Array " you claim .
Uber compared to what , precisely ?
How does a cheap Adaptec compare [ amazon.com ] ?
Are you sure there 's not a better controller for less money ? All you showed was that we increase our throughput and reduce our access times with RAID 0 &amp; 5 compared to a single drive .
So ? Is n't that what 's supposed to happen ?
Oh , and you split it across seven pages like Tom 's Hardware loves to do .
And I ca n't click print to read the article uninterrupted anymore without logging in .
And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links , god I love those with all my heart.So feel free to correct me but we are left with a marketing advertisement for an Areca product that does n't even exist and a notice that storage just keeps getting cheaper .
Did I miss anything ? Add to that , if my memory does n't fail me , they published that RAID 5 was n't safe to run with 1TB disks... crap.For the ads... I see no ads .
The old " Proxomitron " takes care of everything</tokentext>
<sentencetext>One: The title is a borderline lie.
Yes, you can buy 12x 1TB drives for about a grand.
But if I'm going to build an array and benchmark it and constantly compare it to buying a Core i7-975 Extreme, the drives alone don't do me any good!
(And I love how you continually reiterate with statements like "The Idea: Massive Hard Drive Storage Within a $1,000 Budget")Two: Said controller does not exist.
They listed the controller as ARC-1680ix-20.
Areca makes no such controller [areca.com.tw].
They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.Three: Said controller is going to easily run you another grand [newegg.com].
And I'm certain most controllers that accomplish what you're asking are pretty damned expensive and they will have a bigger impact than the drives on your results.Four: You don't compare this hardware setup with any other setup.
Build the "Uber RAID Array" you claim.
Uber compared to what, precisely?
How does a cheap Adaptec compare [amazon.com]?
Are you sure there's not a better controller for less money?All you showed was that we increase our throughput and reduce our access times with RAID 0 &amp; 5 compared to a single drive.
So?  Isn't that what's supposed to happen?
Oh, and you split it across seven pages like Tom's Hardware loves to do.
And I can't click print to read the article uninterrupted anymore without logging in.
And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.So feel free to correct me but we are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper.
Did I miss anything?Add to that, if my memory doesn't fail me, they published that RAID 5 wasn't safe to run with 1TB disks... crap.For the ads... I see no ads.
The old "Proxomitron" takes care of everything
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681227</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>T Murphy</author>
	<datestamp>1247517120000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
<htmltext><p><div class="quote"><p>Two: Said controller does not exist. They listed the controller as ARC-1680ix-20. Areca makes no such controller. They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.</p></div><p>He glued the 8 and the 12 together. Duh.</p></p>
	</htmltext>
<tokentext>Two : Said controller does not exist .
They listed the controller as ARC-1680ix-20 .
Areca makes no such controller .
They make an 8 , 12 , 16 , 24 but no 20 unless they 've got some advanced product unlisted anywhere.He glued the 8 and the 12 together .
Duh .</tokentext>
<sentencetext>Two: Said controller does not exist.
They listed the controller as ARC-1680ix-20.
Areca makes no such controller.
They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.He glued the 8 and the 12 together.
Duh.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680967</id>
	<title>Try FreeBSD and ZFS</title>
	<author>Anonymous</author>
	<datestamp>1247515980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Really? You run 10 TB array on Windows vista. nuff said - <a href="http://forums.freebsd.org/showthread.php?t=3689" title="freebsd.org" rel="nofollow">http://forums.freebsd.org/showthread.php?t=3689</a> [freebsd.org]</p><p>And get FF plugin to avoid 8 pages clicks <a href="http://www.teesoft.info/content/view/68/1/lang,en/" title="teesoft.info" rel="nofollow">http://www.teesoft.info/content/view/68/1/lang,en/</a> [teesoft.info]</p><p>Disclaimer: I'm one of regular contributor and part of mod team @ the official freebsd forum.</p></htmltext>
<tokentext>Really ?
You run 10 TB array on Windows vista .
nuff said - http : //forums.freebsd.org/showthread.php ? t = 3689 [ freebsd.org ] And get FF plugin to avoid 8 pages clicks http : //www.teesoft.info/content/view/68/1/lang,en/ [ teesoft.info ] Disclaimer : I 'm one of regular contributor and part of mod team @ the official freebsd forum .</tokentext>
<sentencetext>Really?
You run 10 TB array on Windows vista.
nuff said - http://forums.freebsd.org/showthread.php?t=3689 [freebsd.org]And get FF plugin to avoid 8 pages clicks http://www.teesoft.info/content/view/68/1/lang,en/ [teesoft.info]Disclaimer: I'm one of regular contributor and part of mod team @ the official freebsd forum.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692879</id>
	<title>Re:What for?</title>
	<author>columbus</author>
	<datestamp>1247594160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>I mean, who is the target audience for the article??</p></div><p>I guess it's for people like me.</p><p>Let me explain.  I had an old friend visiting over the weekend.  He's a professional filmmaker and photographer.  We got to talking about making movies and data handling.  He said he would like to be able to upload footage directly from a film shoot to a server.  He just wants to know that it's always on &amp; that space is always available.  So how much space does he need?  He said original files for a movie before editing could be around 4TB.  We started talking about system requirements for a machine that would suit his needs.  At least 6TB - with data preservation &amp; redundancy more important than throughput &amp; fast IO time.</p><p>Of course, he spends most of his time shooting movies &amp; doesn't have the inclination to learn how to build a machine like this.</p><p>. . .  But I do.</p><p>ps.  I'm guessing that the follow on questions are '4TB for one movie?  What about the rest of them?  And what about backups?'.  I believe he's got a shelf full of 1TB external drives, detached &amp; powered down.  I think a lot of A / V people end up with a setup like this.  Not exactly disaster proof, but it provides a whole lot of storage for a relatively low price.  I wouldn't expect that a storage server would replace this setup either;  I think it would supplement it.  It would serve as an interim, medium term storage before the data went into cold storage offline.</p></p>
	</htmltext>
<tokentext>I mean , who is the target audience for the article ?
? I guess it 's for people like me.Let me explain .
I had an old friend visiting over the weekend .
He 's a professional filmmaker and photographer .
We got to talking about making movies and data handling .
He said he would like to be able to upload footage directly from a film shoot to a server .
He just wants to know that it 's always on &amp; that space is always available .
So how much space does he need ?
He said original files for a movie before editing could be around 4TB .
We started talking about system requirements for a machine that would suit his needs .
At least 6TB - with data preservation &amp; redundancy more important than throughput &amp; fast IO time.Of course , he spends most of his time shooting movies &amp; does n't have the inclination to learn how to build a machine like this.. . .
But I do.ps .
I 'm guessing that the follow on questions are '4TB for one movie ?
What about the rest of them ?
And what about backups ? ' .
I believe he 's got a shelf full of 1TB external drives , detached &amp; powered down .
I think a lot of A / V people end up with a setup like this .
Not exactly disaster proof , but it provides a whole lot of storage for a relatively low price .
I would n't expect that a storage server would replace this setup either ; I think it would supplement it .
It would serve as an interim , medium term storage before the data went into cold storage offline .</tokentext>
<sentencetext>I mean, who is the target audience for the article?
?I guess it's for people like me.Let me explain.
I had an old friend visiting over the weekend.
He's a professional filmmaker and photographer.
We got to talking about making movies and data handling.
He said he would like to be able to upload footage directly from a film shoot to a server.
He just wants to know that it's always on &amp; that space is always available.
So how much space does he need?
He said original files for a movie before editing could be around 4TB.
We started talking about system requirements for a machine that would suit his needs.
At least 6TB - with data preservation &amp; redundancy more important than throughput &amp; fast IO time.Of course, he spends most of his time shooting movies &amp; doesn't have the inclination to learn how to build a machine like this.. . .
But I do.ps.
I'm guessing that the follow on questions are '4TB for one movie?
What about the rest of them?
And what about backups?'.
I believe he's got a shelf full of 1TB external drives, detached &amp; powered down.
I think a lot of A / V people end up with a setup like this.
Not exactly disaster proof, but it provides a whole lot of storage for a relatively low price.
I wouldn't expect that a storage server would replace this setup either;  I think it would supplement it.
It would serve as an interim, medium term storage before the data went into cold storage offline.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681463</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683199</id>
	<title>Re:Misleading headline</title>
	<author>gnomeza</author>
	<datestamp>1247482020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And indeed, if all you need is large amounts of cheap storage, then an adapter with port multiplier support, like a SiI3132, and two SiI3726s will let you attach up to ten drives per PCIe port.</p><p>A decent solution for all that storage you're going to need to backup all that other storage.<br>(Since, after all, a backup is way more important than availability for that porn collection, right?)</p></htmltext>
<tokentext>And indeed , if all you need is large amounts of cheap storage , then an adapter with port multiplier support , like a SiI3132 , and two SiI3726s will let you attach up to ten drives per PCIe port.A decent solution for all that storage you 're going to need to backup all that other storage .
( Since , after all , a backup is way more important than availability for that porn collection , right ?
)</tokentext>
<sentencetext>And indeed, if all you need is large amounts of cheap storage, then an adapter with port multiplier support, like a SiI3132, and two SiI3726s will let you attach up to ten drives per PCIe port.A decent solution for all that storage you're going to need to backup all that other storage.
(Since, after all, a backup is way more important than availability for that porn collection, right?
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28686689</id>
	<title>Or...  5 or 6 2TB drives.</title>
	<author>Anonymous Freak</author>
	<datestamp>1247507280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Or, if you have a somewhat modern Intel-chipset board (with 6 SATA ports, and Intel's RAID-5 capable "Matrix RAID",) you could buy 5 or 6 Seagate or Western Digital 2 TB drives for $230 each.  No fancy total-price-doubling adapter needed.</p><p>If you're willing to risk your data, 5 drives is enough to do a 10 TB RAID-0; if you're a little less willing to risk your data, and have a motherboard with either a PATA controller, or an additional SATA controller for your optical drive, or are willing to live with an external optical drive, you can go whole-hog and get 6 in a 10 TB RAID-5.</p></htmltext>
<tokentext>Or , if you have a somewhat modern Intel-chipset board ( with 6 SATA ports , and Intel 's RAID-5 capable " Matrix RAID " , ) you could buy 5 or 6 Seagate or Western Digital 2 TB drives for $ 230 each .
No fancy total-price-doubling adapter needed.If you 're willing to risk your data , 5 drives is enough to do a 10 TB RAID-0 ; if you 're a little less willing to risk your data , and have a motherboard with either a PATA controller , or an additional SATA controller for your optical drive , or are willing to live with an external optical drive , you can go whole-hog and get 6 in a 10 TB RAID-5 .</tokentext>
<sentencetext>Or, if you have a somewhat modern Intel-chipset board (with 6 SATA ports, and Intel's RAID-5 capable "Matrix RAID",) you could buy 5 or 6 Seagate or Western Digital 2 TB drives for $230 each.
No fancy total-price-doubling adapter needed.If you're willing to risk your data, 5 drives is enough to do a 10 TB RAID-0; if you're a little less willing to risk your data, and have a motherboard with either a PATA controller, or an additional SATA controller for your optical drive, or are willing to live with an external optical drive, you can go whole-hog and get 6 in a 10 TB RAID-5.</sentencetext>
</comment>
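The capacity arithmetic behind these two configurations is simple: RAID 0 stripes across all drives with no redundancy, while RAID 5 spends one drive's worth of capacity on distributed parity. A quick sketch (Python used purely for illustration):

```python
def raid0_usable_tb(drives: int, tb_per_drive: float) -> float:
    # Striping only: every block holds data, so all raw capacity is usable.
    return drives * tb_per_drive

def raid5_usable_tb(drives: int, tb_per_drive: float) -> float:
    # One drive's worth of capacity is consumed by distributed parity.
    return (drives - 1) * tb_per_drive

print(raid0_usable_tb(5, 2.0))  # 10.0 -> five 2 TB drives in RAID 0
print(raid5_usable_tb(6, 2.0))  # 10.0 -> six 2 TB drives in RAID 5
```

This also shows why the comment needs a sixth drive (and hence a spare controller port) to reach 10 TB once parity is added.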
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685147</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247494680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have that case.  That is an awesome case.</p></htmltext>
<tokentext>I have that case .
That is an awesome case .</tokentext>
<sentencetext>I have that case.
That is an awesome case.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28693363</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>sootman</author>
	<datestamp>1247596020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>All praise St. Moore and the disk wizards at IBM. From 2001: <a href="http://hardware.slashdot.org/article.pl?sid=01/07/19/1554216" title="slashdot.org">Build a 1 TB server for $5,000!</a> [slashdot.org]</p></htmltext>
<tokentext>All praise St. Moore and the disk wizards at IBM .
From 2001 : Build a 1 TB server for $ 5,000 !
[ slashdot.org ]</tokentext>
<sentencetext>All praise St. Moore and the disk wizards at IBM.
From 2001: Build a 1 TB server for $5,000!
[slashdot.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681921</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Amazing Quantum Man</author>
	<datestamp>1247476620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No, no, no.</p><p>He put the 8 and the 12 together in a RAIN (Redundant Array of Insignificant Numbers)</p></htmltext>
<tokentext>No , no , no.He put the 8 and the 12 together in a RAIN ( Redundant Array of Insignificant Numbers )</tokentext>
<sentencetext>No, no, no.He put the 8 and the 12 together in a RAIN (Redundant Array of Insignificant Numbers)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681227</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688067</id>
	<title>Even cheaper is possible ?</title>
	<author>petermp</author>
	<datestamp>1247566440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>6 x 1.5 TB Seagate = 720 USD
Motherboard Intel (6 SATA ports) = 70 USD
Processor Core2duo  - 100 USD
Ram 1G:  10 USD
Case + extra power supply: 50 USD
Flash drive(to hold os): 10 USD
FreeNAS(provides software RAID): $0 USD

9 TB = 960 USD, all included :-)</htmltext>
<tokentext>6 x 1.5 TB Seagate = 720 USD Motherboard Intel ( 6 SATA ports ) = 70 USD Processor Core2duo - 100 USD Ram 1G : 10 USD Case + extra power supply : 50 USD Flash drive ( to hold os ) : 10 USD FreeNAS ( provides software RAID ) : $ 0 USD 9 TB = 960 USD , all included : - )</tokentext>
<sentencetext>6 x 1.5 TB Seagate = 720 USD
Motherboard Intel (6 SATA ports) = 70 USD
Processor Core2duo  - 100 USD
Ram 1G:  10 USD
Case + extra power supply: 50 USD
Flash drive(to hold os): 10 USD
FreeNAS(provides software RAID): $0 USD

9 TB = 960 USD, all included :-)</sentencetext>
</comment>
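The itemized prices above do total as claimed; a quick check (part labels paraphrased from the comment):

```python
# Prices in USD as listed in the comment.
parts_usd = {
    "6 x 1.5 TB Seagate drives": 720,
    "Intel motherboard (6 SATA ports)": 70,
    "Core 2 Duo processor": 100,
    "1 GB RAM": 10,
    "case + extra power supply": 50,
    "flash drive to hold the OS": 10,
    "FreeNAS (free software RAID)": 0,
}
print(sum(parts_usd.values()))  # 960
```

Note the 9 TB figure is raw capacity; any parity RAID level configured in FreeNAS would reduce the usable total.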
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151</id>
	<title>Re:Misleading headline</title>
	<author>gweihir</author>
	<datestamp>1247516760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p><em> but you won't have anything to connect them to, as the controller itself is another $1100. </em></p><p>You don't need that. Get a board with enough SATA ports on PCI-E and add more ports via cheap PCI-E controllers. Then use Linux software RAID. I did this for several research data servers and it is quite enough to saturate GbE unless you have a lot of small accesses.</p></htmltext>
<tokenext>but you wo n't have anything to connect them to , as the controller itself is another $ 1100 .
You do n't need that .
Get a board with enough SATA ports on PCI-E and add more ports via cheap PCI-E controllers .
Then use Linux software RAID .
I did this for several research data servers and this is quite enough to saturate GbE unless you have a lot of small accesses .</tokentext>
<sentencetext> but you won't have anything to connect them to, as the controller itself is another $1100.
You don't need that.
Get a board with enough SATA ports on PCI-E and add more ports via cheap PCI-E controllers.
Then use Linux software RAID.
I did this for several research data servers and this is quite enough to saturate GbE unless you have a lot of small accesses.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919</parent>
</comment>
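The software-RAID route gweihir describes maps onto a short mdadm sequence on Linux. A sketch only, as a config fragment: the device names are illustrative, not from the comment, and would destroy data if run against the wrong disks.

```shell
# Build a RAID5 array from six drives spread across the onboard SATA ports
# and a cheap PCI-E controller, then put a filesystem on it.
# Device names are illustrative -- check yours with `lsblk` first.
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist across reboots
```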
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</id>
	<title>Redundant Array of INEXPENSIVE Disks</title>
	<author>Anonymous</author>
	<datestamp>1247476380000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>I've done this every 2-3 years, three times now for personal use and a couple of times for work. My first was 7x120 GB and used two 4-port ATA controllers and software RAID5. My second was 7x400 GB and used a Highpoint RocketRAID card. My third one is 8x750 GB and also uses a Highpoint card.</p><p>Lessons learned:<br>1. Non-RAID-type drives cause unpredictable and annoying performance issues as the RAID ages and fills with data.<br>
&nbsp; 1a. The drives can potentially drop out of the RAID group (necessitating an automated rebuild) if they don't respond for too long.<br>
&nbsp; 1b. A single drive with some bad sectors can drag performance down to a crawl.<br>2. Software RAID is probably faster than hardware RAID for the money. A fast CPU is much cheaper than a very high performance RAID card; low-end cards like the Highpoint are likely slower for the money.<br>3. Software RAID setup is usually more complicated.<br>4. Compatibility issues between Highpoint cards and motherboards are no fun.<br>5. For work purposes, use RAID-approved drives and 3Ware cards or software.<br>6. Old PCI will max out your performance: 33 MHz * 32 bit = 132 MB/sec, minus overhead, minus passing through it a couple of times == 30 MB/sec performance.<br>7. If you go with software RAID you'll need a fat power supply; if you choose a RAID card, most of them support staggered start-up and you won't really need much. Spin-up power is typically 1-2 amps, but once the drives are running they don't take a lot of power.<br>8. Really cheap cases that hold 8 drives are hard to find. Be careful to get enough mounting brackets, fans, and power Y-adapters online so you don't spend too much on them at your local Fry's.</p><p>For my 4th personal RAID I will probably choose RAID6 and go back to software RAID. Likely at least 9x1.5 TB if I were to do it today. 1.5 TB drives can be had for $100 on discount. So RAID5: $800 for ~10 TB formatted, or $900 for RAID6, plus case/CPU/etc.</p><p>I'd love to hear others' feedback on similar personal-use ULTRA CHEAP RAID setups.</p></htmltext>
<tokenext>I 've done this every 2-3 years three times now for personal use and a couple times for work .
My first was 7x120 and used 2 4 port ATA controllers and software RAID5 .
My second was 7x400 and used a Highpoint rocket RAID card .
My third one is 8x750gb and also uses a Highpoint card.Lessons learned : 1 .
Non RAID type drives cause unpredictable and annoying performance issues as the RAID ages and fills with data .
  1a .
The drives can potentially drop out of the raid group ( necessitating an automated rebuild ) if they do n't respond for too long .
  1b .
A single drive with some bad sectors can drag down performance to a crawl.2 .
Software RAID is probably faster than hardware RAID for the money .
A fast CPU is much cheaper than a very high performance RAID card low end cards like the Highpoint are likely slower for the money.3 .
Software RAID setup is usually more complicated.4 .
Compatibility issues with Highpoint cards and motherboards are no fun5 .
For work purposes use RAID approved drives and 3Ware cards or software.6 .
Old PCI will max out your performance .
33Mhz * 32bit = 132MB/sec minus over head , minus passing through it a couple times = = 30MB/sec performance7 .
If you go with software RAID you 'll need a fat power supply , if you choose a raid card most of them support staggered start up and you wo n't really need much .
Spin up power is 1-2amps typically but once they 're running they do n't take a lot of power.8 .
Really cheap cases that hold 8 drives are hard to find .
Careful to get enough mounting brackets , fans , power Y-adapters online so you do n't spend too much on them at your local Fry 's.For my 4th personal RAID I will probably choose RAID6 and go back to software RAID .
Likely at least 9x1.5TB if I were to do it today .
1.5TB drives can be had for $ 100 on discount .
So RAID5 $ 800 for ~ 10TB formatted or $ 900 for RAID6 .
+ case/cpu/etc...I 'd love to hear others feedback on similar personal use ULTRA CHEAP RAID setups .</tokentext>
<sentencetext>I've done this every 2-3 years three times now for personal use and a couple times for work.
My first was 7x120 and used 2 4 port ATA controllers and software RAID5.
My second was 7x400 and used a Highpoint rocket RAID card.
My third one is 8x750gb and also uses a Highpoint card.Lessons learned:1.
Non RAID type drives cause unpredictable and annoying performance issues as the RAID ages and fills with data.
  1a.
The drives can potentially drop out of the raid group (necessitating an automated rebuild) if they don't respond for too long.
  1b.
A single drive with some bad sectors can drag down performance to a crawl.2.
Software RAID is probably faster than hardware RAID for the money.
A fast CPU is much cheaper than a very high performance RAID card low end cards like the Highpoint are likely slower for the money.3.
Software RAID setup is usually more complicated.4.
Compatibility issues with Highpoint cards and motherboards are no fun5.
For work purposes use RAID approved drives and 3Ware cards or software.6.
Old PCI will max out your performance.
33Mhz * 32bit = 132MB/sec minus over head, minus passing through it a couple times == 30MB/sec performance7.
If you go with software RAID you'll need a fat power supply, if you choose a raid card most of them support staggered start up and you won't really need much.
Spin up power is 1-2amps typically but once they're running they don't take a lot of power.8.
Really cheap cases that hold 8 drives are hard to find.
Careful to get enough mounting brackets, fans, power Y-adapters online so you don't spend too much on them at your local Fry's.For my 4th personal RAID I will probably choose RAID6 and go back to software RAID.
Likely at least 9x1.5TB if I were to do it today.
1.5TB drives can be had for $100 on discount.
So RAID5 $800 for ~10TB formatted or $900 for RAID6.
+case/cpu/etc...I'd love to hear others feedback on similar personal use ULTRA CHEAP RAID setups.</sentencetext>
</comment>
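The arithmetic behind lesson 6 and the closing RAID5/RAID6 estimate in the comment above checks out. A quick sketch (the helper name `usable_tb` is mine, not anything from the comment):

```python
def usable_tb(drives: int, size_tb: float, parity: int) -> float:
    """Usable capacity after subtracting parity drives (RAID5: 1, RAID6: 2)."""
    return (drives - parity) * size_tb

# $800 buys 8 x 1.5 TB for RAID5, $900 buys 9 for RAID6; both land near 10 TB.
raid5 = usable_tb(8, 1.5, parity=1)   # 10.5 TB before filesystem overhead
raid6 = usable_tb(9, 1.5, parity=2)   # 10.5 TB

# Lesson 6: classic 32-bit / 33 MHz PCI peak throughput, before overhead.
pci_peak_mb = 33_000_000 * (32 // 8) / 1_000_000   # 132.0 MB/s
```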
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682055</id>
	<title>Please delete misleading article</title>
	<author>Anonymous</author>
	<datestamp>1247477160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Seriously, this is not news. It's an advertisement to sell gear that doesn't exist.</p></htmltext>
<tokenext>Seriously this is not news .
it 's an advertisement to sell gear that does n't exist .</tokentext>
<sentencetext>Seriously this is not news.
it's an advertisement to sell gear that doesn't exist.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683319</id>
	<title>Re:How does the home user back this up?</title>
	<author>asc99c</author>
	<datestamp>1247482560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I back up to more hard discs.  I've been running RAID systems at home for a few years, and fairly recently replaced a 1.6 TB array made up of 5 x 400 GB discs with a single 1.5 TB disc.  Also, I've suffered the failure of three 500 GB discs, which I RMAed for replacement, but due to the timescales, went out and bought new drives before the replacements arrived.</p><p>So currently I have 3.5 TB of unused hard discs, which have become my backup discs.  In primary storage, I've got the 1.5 TB disc and a 2.5 TB array, but I'm currently only using just over 3 TB of space, so I've got room to back up everything to hard disc.  When I need more space, I'm a bit paranoid about going beyond 6 discs in RAID5.  Also, there are currently 7 discs running, which makes enough noise and uses enough power already.  So I'm currently expecting to replace the 6 x 500 GB array with a 3 x 2 TB array, leaving me with an extra 3 TB of backup discs.</p><p>If your storage requirements are just growing incrementally, then old hard discs are very possibly something you'll have, and they make a very good choice for backups.  But if you just throw together a 10 TB array one day for fun, it's probably quite an expensive option.</p></htmltext>
<tokenext>I backup to more hard discs .
I 've been running RAID systems at home for a few years , and fairly recently replaced a 1.6TB array made up of 5 x 400GB discs , with a single 1.5TB disc .
Also , I 've suffered the failure of 3 500GB discs which I RMAed for replacement , but due to the timescales , went out and bought new drives before the replacements arrived.So currently I have 3.5 TB of unused hard discs , which have become my backup discs .
In primary storage , I 've got the 1.5 TB disc and a 2.5 TB array , but I 'm currently only using just over 3 TB of space , so I 've got room to backup everything to hard disc .
When I need more space , I 'm a bit paranoid going beyond 6 discs in RAID5 .
Also there are currently 7 discs running which makes enough noise and uses enough power already .
So I 'm currently expecting to replace the 6 x 500 GB array with a 3 x 2 TB array , leaving me with an extra 3 TB of backup discs.If your storage requirements are just growing incrementally , then old hard discs are very possibly something you 'll have , and make a very good choice for the backups .
But if you just throw together a 10TB array one day for fun , it 's probably quite an expensive option .</tokentext>
<sentencetext>I backup to more hard discs.
I've been running RAID systems at home for a few years, and fairly recently replaced a 1.6TB array made up of 5 x 400GB discs, with a single 1.5TB disc.
Also, I've suffered the failure of 3 500GB discs which I RMAed for replacement, but due to the timescales, went out and bought new drives before the replacements arrived.So currently I have 3.5 TB of unused hard discs, which have become my backup discs.
In primary storage, I've got the 1.5 TB disc and a 2.5 TB array, but I'm currently only using just over 3 TB of space, so I've got room to backup everything to hard disc.
When I need more space, I'm a bit paranoid going beyond 6 discs in RAID5.
Also there are currently 7 discs running which makes enough noise and uses enough power already.
So I'm currently expecting to replace the 6 x 500 GB array with a 3 x 2 TB array, leaving me with an extra 3 TB of backup discs.If your storage requirements are just growing incrementally, then old hard discs are very possibly something you'll have, and make a very good choice for the backups.
But if you just throw together a 10TB array one day for fun, it's probably quite an expensive option.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681463</id>
	<title>What for?</title>
	<author>Seth Kriticos</author>
	<datestamp>1247518020000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>I mean, who is the target audience for the article?<br><br>People who just want massive amounts of data storage for private use just buy a few NAS units, plug them into a gigabit Ethernet or USB hub, and keep the more needed data on the internal HDDs.<br><br>On the other side, people who want fast, reliable, plentiful data storage buy something like an HP ProLiant, IBM, or similar rack server with redundant PSUs, a RAID controller with battery packs, and SAS HDDs at 10-15k rpm (and possibly a tape drive).<br><br>The latter setup costs more in the short run, but you spare yourself a lot of headaches (repair service, configuration, downtime, data loss) in the long run, as this hardware is designed for these kinds of tasks.<br><br>So who is the article targeted at: wannabe computer leet folks? And why on earth is this article on the Slashdot front page?</htmltext>
<tokenext>I mean , who is the target audience for the article ?
? People who just want massive amount of data storage for private use just buy a few NAS units , plug them in a gigabit Ethernet or USB hub and keep the more needed data on the internal HDD 's.On the other side , people who want fast , reliable and a lot of data storage buy something like a HP Proliant , IBM or similar Rack server with redundant PSU 's , RAID controller with battery packs and SAS HDD 's at 10-15k rpm ( and possibly a tape drive ) .The later setup costs more in the short run , but you spare your self a lot of head aches ( repair service , configuration , downtime , data loss ) in the long run , as this hardware is designed for this kind of tasks.So who is the article targeted at : wannabe computer leet folks ?
And why on earth is this article on the Slashdot frontpage ?
?</tokentext>
<sentencetext>I mean, who is the target audience for the article?
?People who just want massive amount of data storage for private use just buy a few NAS units, plug them in a gigabit Ethernet or USB hub and keep the more needed data on the internal HDD's.On the other side, people who want fast, reliable and a lot of data storage buy something like a HP Proliant, IBM or similar Rack server with redundant PSU's, RAID controller with battery packs and SAS HDD's at 10-15k rpm (and possibly a tape drive).The later setup costs more in the short run, but you spare your self a lot of head aches (repair service, configuration, downtime, data loss) in the long run, as this hardware is designed for this kind of tasks.So who is the article targeted at: wannabe computer leet folks?
And why on earth is this article on the Slashdot frontpage?
?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688841</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>vrmlguy</author>
	<datestamp>1247575440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>And how do you back it up?</p></div><p>You build two, rsync them together, and give the second to a friend or relative in a different time zone.  You can give them their own partition (sized according to how much of the cost they're willing to kick in) and each of you provides the off-site backup for the other.</p><p>Of course, that doesn't protect you from accidentally clobbering your own data, as the results would be automatically replicated to the off-site array.  In that case, you want to look at something like <a href="http://rsnapshot.org/" title="rsnapshot.org">rsnapshot</a> [rsnapshot.org] or ZFS.</p></htmltext>
<tokenext>And how do you back it up ? You build two , rsync them together , and give the second to a friend or relative in a different time zone .
You can give them their own partition ( sized according to how much of the cost they 're willing to kick in ) and each of you provides the off-site backup for the other.Of course , that does n't protect you from accidentally clobbering your own data , as the results would be automatically replicated to the off-site array .
In that case , you want to look at something like rsnapshot [ rsnapshot.org ] or ZFS .</tokentext>
<sentencetext>And how do you back it up?You build two, rsync them together, and give the second to a friend or relative in a different time zone.
You can give them their own partition (sized according to how much of the cost they're willing to kick in) and each of you provides the off-site backup for the other.Of course, that doesn't protect you from accidentally clobbering your own data, as the results would be automatically replicated to the off-site array.
In that case, you want to look at something like rsnapshot [rsnapshot.org] or ZFS.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684663</parent>
</comment>
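The rsnapshot suggestion above works because rotated snapshots share unchanged files via hard links, so old versions survive a clobber without duplicating data. A minimal sketch of that trick, assuming a local filesystem; `snapshot` is a hypothetical helper, not rsnapshot's actual interface:

```python
import os

def snapshot(src: str, dest: str) -> None:
    """Mirror a tree into dest, hard-linking files instead of copying them.

    This is the core of rsnapshot's rotation scheme (cp -al + rsync): each
    snapshot shares inodes with the previous one, so unchanged files cost no
    extra space, yet clobbering a file later leaves the older snapshot's
    link intact.  A real tool would then rsync changes over the newest copy.
    """
    os.makedirs(dest, exist_ok=True)
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = dest if rel == "." else os.path.join(dest, rel)
        for d in dirs:
            os.makedirs(os.path.join(target, d), exist_ok=True)
        for name in files:
            os.link(os.path.join(root, name), os.path.join(target, name))
```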
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</id>
	<title>How does the home user back this up?</title>
	<author>Anonymous</author>
	<datestamp>1247516760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Ok, so let's say you built one of these monsters, or you rolled your own with Linux and a bunch of drives... How would a home user back this up?  They've got every picture/movie/mp3/resume/recipe etc. that they've ever owned on it.</p><ul><li>Blu-ray disc?  Those have a capacity of 50 GB.</li><li>An old LTO-3 drive from eBay?  They have a native (no compression) capacity of about 400 GB, so you'd still need 4-5 tapes for all your data.  This will also cost you over a grand, plus you'll need to buy an LVD external SCSI adapter.</li><li>Online/internet backup?  Backup and restore times would be brutal.</li></ul><p>Anybody got any reasonable ideas?</p></htmltext>
<tokenext>Ok , so let 's say you built one of these monsters .
Or you rolled your own with linux and a bunch of drives.... How would a home user , back this up ?
They 've got every picture/movie/mp3/resume/recipe etc.. that they 've ever owned on it.Blu-Ray DVD ?
Those have a capacity of 50GBAn old LTO-3 drive from eBay .
They have a native ( no compression ) of about 400GB .
So you 'd still need 4-5tapes for all your data .
Though this will cost you over a grand .
Plus you 'll need to buy a LVD external SCSI adapter.Online/internet backup ?
Backup and restore times would be brutal.Anybody got any reasonable ideas ?</tokentext>
<sentencetext>Ok, so let's say you built one of these monsters.
Or you rolled your own with linux and a bunch of drives.... How would a home user, back this up?
They've got every picture/movie/mp3/resume/recipe etc.. that they've ever owned on it.Blu-Ray DVD?
Those have a capacity of 50GBAn old LTO-3 drive from eBay.
They have a native (no compression) of about 400GB.
So you'd still need 4-5tapes for all your data.
Though this will cost you over a grand.
Plus you'll need to buy a LVD external SCSI adapter.Online/internet backup?
Backup and restore times would be brutal.Anybody got any reasonable ideas?</sentencetext>
</comment>
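A quick check of the tape arithmetic above: at LTO-3's 400 GB native capacity, the 4-5 tape figure corresponds to roughly 2 TB of actual data; backing up the full 10 TB array would take considerably more cartridges. A sketch (`tapes_needed` is an illustrative helper, not from the comment):

```python
import math

def tapes_needed(data_gb: float, native_gb: float = 400) -> int:
    """LTO-3 tape count at native (uncompressed) capacity."""
    return math.ceil(data_gb / native_gb)

print(tapes_needed(1_800))    # 5  -- about 2 TB of data, matching "4-5 tapes"
print(tapes_needed(10_000))   # 25 -- the whole 10 TB array
```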
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>TheMMaster</author>
	<datestamp>1247517420000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext><p>I actually did something similar around a year ago: 12 x 750 GB of disk space, including disks, controllers, system and everything, for around 2000 dollars. It uses Linux softraid but I still get an easy 400 MegaBYTE/s from it. I have some pictures here:</p><p><a href="http://www.tmm.cx/~hp/new_server" title="www.tmm.cx">http://www.tmm.cx/~hp/new_server</a> [www.tmm.cx]</p><p>Tom's Hardware's idea is very late to the party ;)</p></htmltext>
<tokenext>I actually did something similar around a year ago .
12 x 750Gb of diskspace including disks , controllers , system and everything for around 2000 dollars .
It uses Linux softraid but I still get an easy 400MegaBYTE/s from it .
I have some pictures here : http : //www.tmm.cx/ ~ hp/new \ _server [ www.tmm.cx ] Tom 's hardware 's idea is very late to the party ; )</tokentext>
<sentencetext>I actually did something similar around a year ago.
12 x 750Gb of diskspace including disks, controllers, system and everything for around 2000 dollars.
It uses Linux softraid but I still get an easy 400MegaBYTE/s from it.
I have some pictures here:http://www.tmm.cx/~hp/new\_server [www.tmm.cx]Tom's hardware's idea is very late to the party ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692729</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>cthulhu11</author>
	<datestamp>1247593620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There's a certain irony to a post that deems another post "stupid"  yet misuses the word "reiterate".</p></htmltext>
<tokenext>There 's a certain irony to a post that deems another post " stupid " yet misuses the word " reiterate " .</tokentext>
<sentencetext>There's a certain irony to a post that deems another post "stupid"  yet misuses the word "reiterate".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681743</id>
	<title>Dell Perc 5/i</title>
	<author>fireduck64</author>
	<datestamp>1247475840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Good point about the controller costs.  I have been facing a similar problem with my own massive storage setup.

One thing I have found that works well is getting Dell PERC 5/i cards from eBay.  New they are around $500 or $600; you can get them on eBay for maybe $125.  This allows you to connect 8 SATA drives via one PCI Express slot.

I've only tried it with FreeBSD and a JBOD-style configuration though.</htmltext>
<tokenext>Good point about the controller costs .
I have been facing a similar problem with my own massive storage setup .
One thing I have found that works well is getting Dell Perc 5/i cards from ebay .
New they are around $ 500 or $ 600 .
You can get them on ebay for maybe $ 125 .
This allows you to connect 8 SATA drives via one PCI express slot .
I 've only tried it with FreeBSD and JBOD style configuration though .</tokentext>
<sentencetext>Good point about the controller costs.
I have been facing a similar problem with my own massive storage setup.
One thing I have found that works well is getting Dell Perc 5/i cards from ebay.
New they are around $500 or $600.
You can get them on ebay for maybe $125.
This allows you to connect 8 SATA drives via one PCI express slot.
I've only tried it with FreeBSD and JBOD style configuration though.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683225</id>
	<title>Re:Redundant Array of INEXPENSIVE Disks</title>
	<author>vlm</author>
	<datestamp>1247482140000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>Lessons learned:</p></div><p>9. Software RAID is much easier to admin remotely online using SSH and the Linux command line.  Hardware RAID often requires downtime and reboots.</p><p>10. Your hardware RAID card manufacturer may go out of business, replacements may be unavailable, etc.  Linux software RAID is available until approximately the end of time: much lower risk.</p><p>11. The more drives you have, the more you'll appreciate installing them all in drive caddy/shelf things.  With internal drives you'll have to disconnect all the cables, haul the box out, unscrew it, open it, then unscrew all the drives: downtime measured in hours.  With some spare drive caddies, you can hit the power, pull the old caddy, slide in the new caddy with the new drive, and hit the power: downtime measured in seconds to minutes.  Also, I prefer installing new drives into caddies at my comfy workbench rather than crawling around the server case on the floor.</p></htmltext>
<tokenext>Lessons learned : 9 .
Software raid is much easier to remotely admin online while using SSH and linux command line .
Hardware raid often requires downtime and reboots.10 .
Your hardware RAID card manufacturer may go out of business , replacements may be unavailable , etc .
Linux software raid is available until approximately the end of time , much lower risk.11 .
The more drives you have , the more you 'll appreciate installing them all in drive caddy/shelf things .
With internal drives you 'll have to disconnect all the cables , haul the box out , unscrew it , open it , then unscrew all the drives , downtime measured in hours .
With some spare drive caddies , you can hit the power , pull the old caddy , slide in the new caddy with the new drive , hit the power , downtime measured in seconds to minutes .
Also I prefer installing new drives into caddies at my comfy workbench rather than crawling around the server case on the floor .</tokentext>
<sentencetext>Lessons learned:9.
Software raid is much easier to remotely admin online while using SSH and linux command line.
Hardware raid often requires downtime and reboots.10.
Your hardware RAID card manufacturer may go out of business, replacements may be unavailable, etc.
Linux software raid is available until approximately the end of time, much lower risk.11.
The more drives you have, the more you'll appreciate installing them all in drive caddy/shelf things.
With internal drives you'll have to disconnect all the cables, haul the box out, unscrew it, open it, then unscrew all the drives, downtime measured in hours.
With some spare drive caddies, you can hit the power, pull the old caddy, slide in the new caddy with the new drive, hit the power, downtime measured in seconds to minutes.
Also I prefer installing new drives into caddies at my comfy workbench rather than crawling around the server case on the floor.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681303</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247517420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You missed that it's posted on a gear site, where the point is to build stuff that's totally uber.</p></htmltext>
<tokenext>You missed that it 's posted on a gear site , where the point is to build stuff that 's totally uber .</tokentext>
<sentencetext>You missed that it's posted on a gear site, where the point is to build stuff that's totally uber.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685613</id>
	<title>Personal experience...</title>
	<author>pjr.cc</author>
	<datestamp>1247499300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Tom's Hardware's blatant lying aside (12 disks + controller &gt;&gt; $1000), it's a good example of what people are getting sick to death of in the server market, i.e. add "enterprise" to a name and everything gets needlessly expensive.</p><p>If you look around at a couple of fibre-connect (or even SCSI-connect) JBODs, the cost is ludicrous even without the drives. Some of that is due to the weighty licensing costs of Fibre Channel and some of it is just greed (and an inability to efficiently produce components).</p><p>On the flip side, my personal experience with hardware RAID controllers has never been great: the controller dies and you no longer have any way of accessing your data, because only that controller knows how to read the metadata on your disks. It's also rather pointless in most situations these days. What exactly are you going to do with that much disk bandwidth? Did you get a 10 Gb network suddenly you can share it over? About to do some film-quality HD video editing (in real time) for Harry Potter?</p><p>With the speed of most hard drives these days, there's just very little point using a hardware controller to build a "cheap system" when you could use a software one that's so much easier to replace if it fails. You're still going to get the speedy access.</p></htmltext>
<tokenext>Tom 's hardware 's blatant lying aside ( 12 disks + controller &gt; &gt; $ 1000 ) .
Its a good example of what people are getting sick to death of in the server market .
i.e. add " enterprise " to a name and everything gets needlessly expensive.if you look around at a couple of fibre-connect ( or even scsi-connect ) jbod 's the cost is ludicrous even without the drives .
Some of that is due to the weighty licensing costs of fibre channel and some of it is just greed ( and an inability to efficiently produce components ) .On the flip side , my personal experience with hardware raid controllers has never been great , the controller dies and you no longer have anyway of accessing your data cause only that controller knows how to read the meta data on your disks .
Its also rather pointless in most situations these days .
What exactly are you going to do with the much disk bandwidth ?
did you get a 10gb network suddenly you can share it over ?
About to do some film-quality HD video editing ( in real time ) for harry potter ? With the speed of most harddrives these days , theres just very little point using a hardware controller to build a " cheap system " when you could use a software one thats so much easier to replace if it fails .
Your still going to get the speedy access .</tokentext>
<sentencetext>Tom's Hardware's blatant lying aside (12 disks + controller &gt;&gt; $1000), this is a good example of what people are getting sick to death of in the server market:
add "enterprise" to a name and everything gets needlessly expensive.
If you look around at a couple of fibre-connect (or even SCSI-connect) JBODs, the cost is ludicrous even without the drives.
Some of that is due to the weighty licensing costs of Fibre Channel, and some of it is just greed (and an inability to produce components efficiently).
On the flip side, my personal experience with hardware RAID controllers has never been great: the controller dies, and you no longer have any way of accessing your data, because only that controller knows how to read the metadata on your disks.
It's also rather pointless in most situations these days.
What exactly are you going to do with that much disk bandwidth?
Did you get a 10 Gb network you can suddenly share it over?
About to do some film-quality HD video editing (in real time) for Harry Potter?
With the speed of most hard drives these days, there's very little point in using a hardware controller to build a "cheap system" when you could use a software one that's so much easier to replace if it fails.
You're still going to get the speedy access.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</id>
	<title>What about the electricity?</title>
	<author>btempleton</author>
	<datestamp>1247517780000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>Such a RAID is for an always-on server.  Expect about 8 watts per drive after power supply inefficiencies.  So 12 drives draw around 100 watts, or about 870 kWh in a year.</p><p>On California Tier 3 pricing at 31 cents/kWh, 12 drives cost $270 in electricity per year, or around $800 over the 3-year lifetime of the drives.</p><p>In other words, about the same price as the drives themselves.  Do the 2TB drives draw more power than the 1TB?  I have not looked.  If they are similar, then 6x2TB plus 3 years of 50 watts actually costs the same as 12x1TB plus 3 years of 100 watts, but I don't think their power draw is exactly the same.</p><p>My real point is that when costing a RAID like this, you do need to consider the electricity.  Add 30% to the cost of the electricity for cooling if this is to have AC, at least in many areas.  And add the cost of the electricity for the RAID controller, etc.  These factors would also be considered in a comparison to an SSD, though of course 10TB of SSD is still too expensive.</p></htmltext>
<tokenext>Such a RAID is for an always-on server .
Expect about 8 watts per drive after power supply inefficiencies .
So 12 drives , around 100 watts .
So 870 kwh in a year.On California Tier 3 pricing at 31 cents/kwh , 12 drives costs $ 270 of electricity per year , or around $ 800 in the 3 year lifetime of the drives.In other words , about the same price as the drives themselves .
Do the 2TB drives draw more power than the 1TB ?
I have not looked .
If they are similar , then 6x2TB plus 3 years of 50 watts is actually the same price as 12x1TB plus 3 years of 100 watts , but I do n't think they are exactly the same power.My real point is , that when doing the cost of a RAID like this , you do need to consider the electricity .
Add 30 \ % to the cost of the electricity for cooling if this is to have AC , at least in many areas .
And the cost of the electricity for the RAID controller etc .
These factors would also be considered in comparison to a SSD , though of course 10TB of SSD is still too expensive .</tokentext>
<sentencetext>Such a RAID is for an always-on server.
Expect about 8 watts per drive after power supply inefficiencies.
So 12 drives, around 100 watts.
So 870 kWh in a year.  On California Tier 3 pricing at 31 cents/kWh, 12 drives cost $270 in electricity per year, or around $800 over the 3-year lifetime of the drives.  In other words, about the same price as the drives themselves.
Do the 2TB drives draw more power than the 1TB?
I have not looked.
If they are similar, then 6x2TB plus 3 years of 50 watts actually costs the same as 12x1TB plus 3 years of 100 watts, but I don't think their power draw is exactly the same.  My real point is that when costing a RAID like this, you do need to consider the electricity.
Add 30% to the cost of the electricity for cooling if this is to have AC, at least in many areas.
And the cost of the electricity for the RAID controller etc.
These factors would also be considered in comparison to a SSD, though of course 10TB of SSD is still too expensive.</sentencetext>
</comment>
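The power-cost arithmetic in the comment above can be reproduced in a few lines. This is only a sketch using the commenter's assumed figures (8 W per drive, rounded to 100 W total, and $0.31/kWh); actual drive power draw varies by model.

```python
# Back-of-envelope electricity cost for an always-on 12-drive array,
# using the comment's assumed figures: 8 W per drive, rounded up to
# 100 W total, and California Tier 3 pricing at $0.31/kWh.
WATTS_TOTAL = 100                     # 12 drives x 8 W, rounded as in the comment
PRICE_PER_KWH = 0.31
HOURS_PER_YEAR = 24 * 365

kwh_per_year = WATTS_TOTAL / 1000 * HOURS_PER_YEAR   # 876 kWh ("870" in the comment)
cost_per_year = kwh_per_year * PRICE_PER_KWH         # ~$272/year ("$270")
cost_3_years = 3 * cost_per_year                     # ~$815 ("around $800")

print(round(kwh_per_year), round(cost_per_year), round(cost_3_years))
```

The small differences from the comment's figures are just rounding; the conclusion stands: over three years the electricity costs roughly as much as the drives.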
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683197</id>
	<title>Re:We do this now</title>
	<author>JDevers</author>
	<datestamp>1247482020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Do you know what JBOD is?  Just a Bunch Of Disks: in other words, exactly what you have set up.</p></htmltext>
<tokenext>Do you know what JBOD is ?
Just a Bunch Of Disks , in other words exactly what you have setup .</tokentext>
<sentencetext>Do you know what JBOD is?
Just a Bunch Of Disks: in other words, exactly what you have set up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683469</id>
	<title>Re:Misleading headline</title>
	<author>Nightspirit</author>
	<datestamp>1247483220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>My $60 Asus motherboard I bought 3 years ago came with 10 SATA ports.</p></htmltext>
<tokenext>My $ 60 asus motherboardboard I bought 3 years ago came with 10 sata ports .</tokentext>
<sentencetext>My $60 Asus motherboard I bought 3 years ago came with 10 SATA ports.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683759</id>
	<title>My 8TB NAS</title>
	<author>Johnny_Longtorso</author>
	<datestamp>1247485200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I just built this last week:<br>http://www.flickr.com/photos/vancod/3679263377/</p><p>8 x 1TB drives: $712<br>PSU: $92<br>Case: $29<br>RAM: $34<br>CPU/MOBO: $131<br>Intel RAID card: $275<br>Cache/Battery: $112</p><p>Total: $1385 (all items free shipping)</p><p>Don't know how you're going to get  $1000 for 10TB and have it be worth a shit.</p></htmltext>
<tokenext>I just built this last week : http : //www.flickr.com/photos/vancod/3679263377/8 x 1TB drives : $ 712PSU : $ 92Case : $ 29RAM : $ 34CPU/MOBO : $ 131Intel RAID card : $ 275Cache/Battery : $ 112Total : $ 1385 ( all items free shipping ) Do n't know how you 're going to get $ 1000 for 10TB and have it be worth a shit .</tokentext>
<sentencetext>I just built this last week: http://www.flickr.com/photos/vancod/3679263377/
8 x 1TB drives: $712; PSU: $92; Case: $29; RAM: $34; CPU/MOBO: $131; Intel RAID card: $275; Cache/Battery: $112.
Total: $1385 (all items free shipping).
Don't know how you're going to get $1000 for 10TB and have it be worth a shit.</sentencetext>
</comment>
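For reference, the itemized prices in the build above do sum to the quoted total, which is easy to verify:

```python
# Parts tally for the commenter's 8 TB NAS build (prices from the comment).
parts = {
    "8 x 1TB drives": 712,
    "PSU": 92,
    "Case": 29,
    "RAM": 34,
    "CPU/MOBO": 131,
    "Intel RAID card": 275,
    "Cache/Battery": 112,
}
total = sum(parts.values())
print(total)   # 1385, matching the quoted total
```

At $1385 for 8 TB, that works out to roughly $173 per usable TB before any redundancy is configured.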
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683173</id>
	<title>RAID types</title>
	<author>Anonymous</author>
	<datestamp>1247481840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Drive space is so cheap per GB these days that for a lot of companies there's almost no point in RAID 5 anymore (or of course RAID 6 or 0, but whoever used those anyway).  If you have an even number of drive slots greater than or equal to 4, you might as well just buy a bunch of large drives and RAID 10 the thing.  Certainly there are exceptions and specialized configurations, but in general, as drive space per $ goes up, the storage capacity issue goes down.</p></htmltext>
<tokenext>Drive space is so cheap per GB these days , for a lot of companies there 's almost not much point in RAID 5 anymore ( or of course RAID 6 or 0 but whoever used those anyways ) .
If you have an even number of drive slots greater than or equal to 4 , mind as well just buy a bunch of large drives and RAID 10 the thing .
Certainly there 's exceptions and specialized configurations , but in general , as drive space per $ goes up , the storage capacity issue goes down .</tokentext>
<sentencetext>Drive space is so cheap per GB these days that for a lot of companies there's almost no point in RAID 5 anymore (or of course RAID 6 or 0, but whoever used those anyway).
If you have an even number of drive slots greater than or equal to 4, you might as well just buy a bunch of large drives and RAID 10 the thing.
Certainly there are exceptions and specialized configurations, but in general, as drive space per $ goes up, the storage capacity issue goes down.</sentencetext>
</comment>
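The capacity trade-off the comment above alludes to can be sketched as follows. This is a simplification: "RAID 10" here means striped two-way mirrors, and real arrays lose a little extra space to metadata.

```python
# Usable capacity (in TB) for n equal drives at common RAID levels.
# Simplified model: RAID 10 assumes two-way mirrored pairs.
def usable(level, n, size_tb=1.0):
    if level == 0:
        return n * size_tb          # pure striping, no redundancy
    if level == 5:
        return (n - 1) * size_tb    # one drive's worth of parity
    if level == 6:
        return (n - 2) * size_tb    # two drives' worth of parity
    if level == 10:
        return n // 2 * size_tb     # half the drives hold mirror copies
    raise ValueError("unknown RAID level")

# The article's 12 x 1 TB Spinpoints:
for lvl in (0, 5, 6, 10):
    print(lvl, usable(lvl, 12))     # 12.0, 11.0, 10.0, 6.0 TB respectively
```

So RAID 10 on 12 drives gives half the space of RAID 0, which is the "drive space is cheap, buy more of it" trade the comment is making.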
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681015</id>
	<title>Another selling point for double parity</title>
	<author>zaibazu</author>
	<datestamp>1247516160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Another thing with RAID arrays that have quite a few drives is, you have no method of correcting a flipped bit. You need at least RAID 6 to correct these errors. With such vast amounts of data, a flipped bit isn't that unlikely.</htmltext>
<tokenext>Another thing with RAID arrays that have quiete a few drives is , you have no method of correcting a flipped bit .
You need at least RAID6 to correct these errors .
With such vast amounts of data , a flipped bit is n't that unlikely .</tokentext>
<sentencetext>Another thing with RAID arrays that have quite a few drives is, you have no method of correcting a flipped bit.
You need at least RAID6 to correct these errors.
With such vast amounts of data, a flipped bit isn't that unlikely.</sentencetext>
</comment>
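To make the parent's point concrete: with single parity (RAID 5's P), a silently corrupted byte shows up as a nonzero syndrome but cannot be located; RAID 6's second syndrome (Q, computed over GF(2^8)) pins down which disk holds the bad data. Below is a toy single-byte sketch of that idea, not real RAID 6 code (real implementations work per stripe with table-driven GF math).

```python
# Toy P+Q (RAID 6-style) parity over GF(2^8) with polynomial 0x11d.
# One byte per "disk": P detects a corrupted byte, Q locates it.

def gf_mul(a, b, poly=0x11d):
    """Multiply two bytes in GF(2^8)."""
    r = 0
    for _ in range(8):
        if b % 2:
            r ^= a
        a *= 2
        if a >= 0x100:
            a ^= poly   # reduce modulo the field polynomial
        b //= 2
    return r

# log/antilog tables for the generator g = 2
EXP, LOG = [0] * 255, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x = gf_mul(x, 2)

def syndromes(data):
    P = Q = 0
    for i, d in enumerate(data):
        P ^= d                     # plain XOR parity (RAID 5's P)
        Q ^= gf_mul(EXP[i], d)     # each disk weighted by g**i
    return P, Q

disks = [0x11, 0x22, 0x33, 0x44]   # one byte per data disk
P, Q = syndromes(disks)

bad = disks[:]
bad[2] ^= 0x40                     # a flipped bit on disk 2
P2, Q2 = syndromes(bad)
sp, sq = P ^ P2, Q ^ Q2            # sp nonzero: P alone detects the error...
where = (LOG[sq] - LOG[sp]) % 255  # ...and Q locates it: disk index 2
bad[where] ^= sp                   # repair using the P syndrome
```

Two independent syndromes are exactly what buys you error location on top of error detection, which is why single-parity RAID 5 cannot correct a silent flip.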
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955</id>
	<title>We do this now</title>
	<author>mcrbids</author>
	<datestamp>1247515920000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>We needed a solution for backups. Performance is therefore not important, just reliability, storage space, and price.</p><p>I reviewed a number of solutions with acronyms like JBOD, with prices that weren't cheap... I ended up going to the local PC shop and getting a fairly generic MOBO with 6 SATA plugs, and a SATA daughter card (for another 4 ports) running CentOS 5. The price dropped from thousands of dollars to hundreds, and took me a full workday to get set up.</p><p>It's currently got 8 drives in it, cost a little over the thousand quoted in TFA, and is very conveniently obtained. It has a script that backs up everything nightly, and we have some external USB HDDs that we use for archival monthly backups.</p><p>The drives are all redundant, backups are done automatically, and it works quite well for our needs. It's near zero administration after initial setup.</p></htmltext>
<tokenext>We needed a solution for backups .
Performance is therefore not important , just reliability , storage space , and price.I reviewed a number of solutions with acronyms like JBOD , with prices that were n't cheap... I ended up going to the local PC shop and getting a fairly generic MOBO with 6 SATA plugs , and a SATA daughter card ( for another 4 ports ) running CentOS 5 .
The price dropped from thousands of dollars to hundreds , and took me a full workday to get set up.It 's currently got 8 drives in it , cost a little over the thousand quoted in TFA , and is very conveniently obtained .
It has a script that backs up everything nightly , and we have some external USB HDDs that we use for archival monthly backups.The drives are all redundant , backups are done automatically , and it works quite well for our needs .
It 's near zero administration after initial setup .</tokentext>
<sentencetext>We needed a solution for backups.
Performance is therefore not important, just reliability, storage space, and price.
I reviewed a number of solutions with acronyms like JBOD, with prices that weren't cheap... I ended up going to the local PC shop and getting a fairly generic MOBO with 6 SATA plugs, and a SATA daughter card (for another 4 ports), running CentOS 5.
The price dropped from thousands of dollars to hundreds, and it took me a full workday to get set up.
It's currently got 8 drives in it, cost a little over the thousand quoted in TFA, and is very conveniently obtained.
It has a script that backs up everything nightly, and we have some external USB HDDs that we use for archival monthly backups.
The drives are all redundant, backups are done automatically, and it works quite well for our needs.
It's near zero administration after initial setup.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681113</id>
	<title>And even cheaper</title>
	<author>gweihir</author>
	<datestamp>1247516640000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>I did something similar some years ago with 200GB (and later 500GB) drives:</p><p>10 drives in a Chieftec big tower. 6 drives go into the two internal drive cages, 4 go into a 4-in-3 mounting with a 120mm fan. Controller: 2 SATA ports on board and 2 x Promise 4-port SATA controller 300 TX4 (a lot cheaper than Areca, and with native kernel support). Put Linux software RAID 6 on the drives, spare 1 GB or so per drive for a RAID 1 (n-way) system. Done.</p></htmltext>
<tokenext>I did someting some years ago with 200GB ( and later 500GB ) drives : 10 drives in a chieftec Big tower .
6 drives go into the two internal drive cases , 4 go into a 4-for-3 mounting with a 120mm fan .
Controller : 2 SATA on board and 2 x Promise 4 port SATA conroller 300 TX4 ( a lot cheaper than Arcea and kernel native support ) .
Put Linux software RAID 6 on the drives , spare 1 GB or so per drive for RAID1 ( n-way ) system .
Done .</tokentext>
<sentencetext>I did something similar some years ago with 200GB (and later 500GB) drives: 10 drives in a Chieftec big tower.
6 drives go into the two internal drive cages, 4 go into a 4-in-3 mounting with a 120mm fan.
Controller: 2 SATA ports on board and 2 x Promise 4-port SATA controller 300 TX4 (a lot cheaper than Areca, and with native kernel support).
Put Linux software RAID 6 on the drives, spare 1 GB or so per drive for a RAID 1 (n-way) system.
Done.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681485</id>
	<title>Re:How does the home user back this up?</title>
	<author>fishbowl</author>
	<datestamp>1247518080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt;Anybody got any reasonable ideas?</p><p>I'm loving my HP 1/8 LTO-4 SAS Autoloader.  It's faster (both reading and writing) than anything I can feed it.</p></htmltext>
<tokenext>&gt; Anybody got any reasonable ideas ? I 'm loving my HP 1/8 LTO-4 SAS Autoloader .
It 's faster ( both reading and writing ) than anything I can feed it .</tokentext>
<sentencetext>&gt;Anybody got any reasonable ideas?
I'm loving my HP 1/8 LTO-4 SAS Autoloader.
It's faster (both reading and writing) than anything I can feed it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692027</id>
	<title>Re:Misleading headline</title>
	<author>Anonymous</author>
	<datestamp>1247590620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>By horderves, did you mean hors d'oeuvre?</htmltext>
<tokenext>By horderves , did you mean hors d'oeuvre ?</tokentext>
<sentencetext>By horderves, did you mean hors d'oeuvre?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682203</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</id>
	<title>Why This Article Is Stupid</title>
	<author>eldavojohn</author>
	<datestamp>1247515500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>

One: The title is a borderline lie.  Yes, you can buy 12x 1TB drives for about a grand.  But if I'm going to build an array and benchmark it and constantly compare it to buying a Core i7-975 Extreme, the drives alone don't do me any good!  (And I love how you continually reiterate with statements like "The Idea: Massive Hard Drive Storage Within a $1,000 Budget")<br> <br>

Two: Said controller does not exist.  They listed the controller as ARC-1680ix-<b>20</b>.  Areca <a href="http://www.areca.com.tw/products/pcietosas1680series.htm" title="areca.com.tw" rel="nofollow">makes no such controller</a> [areca.com.tw].  They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.  <br> <br>

Three: Said controller is going to easily run you <a href="http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&amp;DEPA=0&amp;Order=BESTMATCH&amp;Description=ARC-1680ix&amp;x=0&amp;y=0" title="newegg.com" rel="nofollow">another grand</a> [newegg.com].  And I'm certain most controllers that accomplish what you're asking are pretty damned expensive and they will have a bigger impact than the drives on your results.  <br> <br>

Four: You don't compare this hardware setup with any other setup.  Build the "Uber RAID Array" you claim.  Uber compared to what, precisely?  How does a cheap <a href="http://www.amazon.com/gp/product/B000NX0Y8C" title="amazon.com" rel="nofollow">Adaptec compare</a> [amazon.com]?  Are you sure there's not a better controller for less money?  <br> <br>

All you showed was that we increase our throughput and reduce our access times with RAID 0 &amp; 5 compared to a single drive.  So?  Isn't that what's supposed to happen?  Oh, and you split it across seven pages like Tom's Hardware loves to do.  And I can't click print to read the article uninterrupted anymore without logging in.  And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.  <br> <br>So feel free to correct me but we are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper.  Did I miss anything?</htmltext>
<tokenext>One : The title is a borderline lie .
Yes , you can buy 12x 1TB drives for about a grand .
But if I 'm going to build an array and bench mark it and constantly compare it to buying a Core i7-975 Extreme , the drives alone do n't do me any good !
( And I love how you continually reiterate with statements like " The Idea : Massive Hard Drive Storage Within a $ 1,000 Budget " ) Two : Said controller does not exist .
They listed the controller as ARC-1680ix-20 .
Areca makes no such controller [ areca.com.tw ] .
They make an 8 , 12 , 16 , 24 but no 20 unless they 've got some advanced product unlisted anywhere .
Three : Said controller is going to easily run you another grand [ newegg.com ] .
And I 'm certain most controllers that accomplish what you 're asking are pretty damned expensive and they will have a bigger impact than the drives on your results .
Four : You do n't compare this hardware setup with any other setup .
Build the " Uber RAID Array " you claim .
Uber compared to what , precisely ?
How does a cheap Adaptac compare [ amazon.com ] ?
Are you sure there 's not a better controller for less money ?
All you showed was that we increase our throughput and reduce our access times with RAID 0 &amp; 5 compared to a single drive .
So ? Is n't that what 's supposed to happen ?
Oh , and you split it across seven pages like Tom 's Hardware loves to do .
And I ca n't click print to read the article uninterrupted anymore without logging in .
And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links , god I love those with all my heart .
So feel free to correct me but we are left with a marketing advertisement for an Areca product that does n't even exist and a notice that storage just keeps getting cheaper .
Did I miss anything ?</tokentext>
<sentencetext>

One: The title is a borderline lie.
Yes, you can buy 12x 1TB drives for about a grand.
But if I'm going to build an array and benchmark it and constantly compare it to buying a Core i7-975 Extreme, the drives alone don't do me any good!
(And I love how you continually reiterate with statements like "The Idea: Massive Hard Drive Storage Within a $1,000 Budget") 

Two: Said controller does not exist.
They listed the controller as ARC-1680ix-20.
Areca makes no such controller [areca.com.tw].
They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.
Three: Said controller is going to easily run you another grand [newegg.com].
And I'm certain most controllers that accomplish what you're asking are pretty damned expensive and they will have a bigger impact than the drives on your results.
Four: You don't compare this hardware setup with any other setup.
Build the "Uber RAID Array" you claim.
Uber compared to what, precisely?
How does a cheap Adaptec compare [amazon.com]?
Are you sure there's not a better controller for less money?
All you showed was that we increase our throughput and reduce our access times with RAID 0 &amp; 5 compared to a single drive.
So?  Isn't that what's supposed to happen?
Oh, and you split it across seven pages like Tom's Hardware loves to do.
And I can't click print to read the article uninterrupted anymore without logging in.
And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.
So feel free to correct me but we are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper.
Did I miss anything?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685341</id>
	<title>This is a joke - and here's why</title>
	<author>Rooked_One</author>
	<datestamp>1247496660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I work for a company that you've heard of, and that makes storage arrays.  The major difference is between SAS drives and SATA drives, and of course the RAID controller.<br> <br>Big companies see it all...  Some JBOD like this has seen maybe one or two different environments.  Would it work for you?  Maybe, but you aren't going to be putting data on it that you're making money from.  If you are, I suggest you get your head checked.  Large companies have relationships with the hard drive manufacturers, and we get to cherry-pick our hard drives... basically, we pay A LOT more for drives that have much higher tolerances.  They might be the same model number as something you can buy from Newegg, but when you have 15 drives or more in a RAID 5, I would hope you have the brain to spend some cash on good hard drives... if not, a spinal cord would suffice for you.</htmltext>
<tokenext>I work for a company that you 've heard of , and makes storage arrays .
The major difference is in between SAS drives and SATA drives , and of course the raid controller .
Big companies see it all... Some jbod like this has seen maybe one or two different environments .
Would it work for you ?
Maybe , but you are n't going to be putting data on it that you 're making money from .
If you are , I suggest you get your head checked .
Large companies , have relationships with the hard drive manufacts and we get to cherry pick our hard drives... basically , we pay A LOT more for drives that have much higher tolerances - they might be the same model number as something you can buy from newegg , but when you have 15 drives or more in a raid-5 , I would hope you have the brain to spend some cash on good hard drives... if not , a spinal cord would suffice for you .</tokentext>
<sentencetext>I work for a company that you've heard of, and makes storage arrays.
The major difference is between SAS drives and SATA drives, and of course the RAID controller.
Big companies see it all...  Some JBOD like this has seen maybe one or two different environments.
Would it work for you?
Maybe, but you aren't going to be putting data on it that you're making money from.
If you are, I suggest you get your head checked.
Large companies have relationships with the hard drive manufacturers, and we get to cherry-pick our hard drives... basically, we pay A LOT more for drives that have much higher tolerances.  They might be the same model number as something you can buy from Newegg, but when you have 15 drives or more in a RAID 5, I would hope you have the brain to spend some cash on good hard drives... if not, a spinal cord would suffice for you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683967</id>
	<title>misleading title, indeed</title>
	<author>boss_hog</author>
	<datestamp>1247486280000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>From building two or three of these at home myself, my practical experience for someone wanting a monster file server for home, on the cheap, consists of these high/low points:</p><p>1. The other poster(s) above are 100% correct about the RAID card.  To get it all in one card you'll pay as much as 4-5 more HDDs, and that's on the low end for the card.  Decent dedicated PCI-E RAID cards are still in the $300+ range for anything with 8 ports or more.</p><p>2. Be careful about buying older RAID cards.  I have 2 16-port and 2 8-port Adaptec PCI-X SATA RAID cards that are useless.  Why?  They only support RAID arrays up to 2TB in size.  "Update the firmware", you say.  Sure, let me just grab the latest, from 2005; I'm sure that fixes it.  Oh, wait, my RAID cards already have that, and it doesn't remove that limitation.  8 drives, 16 drives, even, and they hard-code a limit of 2TB?  Lame.</p><p>3. I've seen nothing in a home-budget price range that performs as well as Linux software RAID.  My 1.5-year-old $500 Tyan workstation mobo (S5397, in another computer) has dedicated SAS RAID that can't seem to do better than 10 MB/sec throughput, reading data from drives that individually bench out at 50-60 MB/sec.</p><p>4. Which leads me to: use Linux software RAID.  It's much more configurable than any hardware RAID card, both in supported RAID levels and monitoring capabilities.  RAID disks/arrays can be easily moved from one machine to another, one controller to another, etc.  I've moved most of my disks between machines and controllers at least once.</p><p>5. I've come to believe over time that what you're really looking for is X SATA ports, not a "controller capable of doing RAID over X disks".  Use SATA "mass storage" cards, or RAID cards that will let you use them in pass-through mode to access the individual disks directly in the OS.  Here you have to be careful you don't get bit by #1, 2, or 3 again, since some RAID cards don't behave well when not actually doing RAID (I'm still looking at you, Adaptec).  This makes it easier and much cheaper; you can mix and match lower-capacity cards to get 8-20+ SATA ports for RAID.</p><p>5.1 "HW vs SW RAID tangent": what happens on a dedicated RAID card when you run out of ports?  You usually can't span RAID cards unless you get multiple identical fancy (aka expensive) RAID controllers from the same manufacturer.  All Linux needs is hard drives recognizable by the BIOS.</p><p>6. When using software RAID, buy a decent CPU.  You don't need some quad-core beast, but you don't want to be waiting on the CPU to finish your RAID calculations.  Any 2-2.5GHz C2D is probably more than adequate... I've drawn the line at anything under 2GHz.</p><p>7. Kiss backups good-bye.  The price of any decent backup system capable of covering this much storage is WAY over the price of this whole setup.  Anything I really don't want to lose gets saved multiple places outside of the RAID array; otherwise I treat the potential for data loss as a risk of operating this way.  Personally I don't really see how you could do otherwise in a setup like this.</p><p>8. Be prepared for bottlenecks.  You're doing this on a home budget; you probably won't get 300 MB/sec reads off of your array, no matter how many drives configured at what RAID level.  I can only get 10-20 MB/sec across my GigE network going to/from my RAID 5 array.  This is probably due to the cheap PCI SATA cards I'm using.  I willingly make this trade-off to obtain the capacity I have for the price I spent.</p><p>If any of these points is an overriding concern for your intended use, then you'd have to re-evaluate the importance of all the other considerations.</p><p>For me, stability, capacity and price are the top three, leading me to research Linux-stable cheap SATA expansion cards (which is just a nice way of saying I buy and try probably 2x the number of controllers I actually use, to find ones that won't corrupt data, time out on random drive accesses, or simply not display the real drives to the OS, etc.), and compromise by waiting a bit longer for network transfers. Usua</p></htmltext>
<tokenext>From building two or three of these at home myself , my practical experience for someone wanting a monster file server for home , on the cheap , consists of these high/low points : 1. the other poster ( s ) above are 100 \ % correct about the raid card .
to get it all in one card you 'll pay as much as 4-5 more hdd 's , and that 's on the low end for the card .
decent dedicated PCI-E raid cards are still in the $ 300 + range for anything with 8 ports or more.2 .
be careful about buying older raid cards .
I have 2 16-port and 2 8-port adaptec PCI-X sata raid cards that are useless .
why ? they only support raid arrays up to 2tb in size .
" update the firmware " , you say .
sure , let me just grab the latest , from 2005 , I 'm sure that fixes it .
oh , wait , my raid cards already have that , and it does n't remove that limitation .
8 drives , 16 drives , even , and they hard-code a limit of 2tb ?
lame.3. I 've seen nothing in a home-budget price range that performs as well as linux software raid .
My 1.5 yr old 500 $ tyan workstation mobo ( S5397 , in another computer ) has dedicated SAS raid that ca n't seem to do better than 10mbyte/sec throughput .
reading data from drives that individually bench out at 50-60mbyte/sec.4 .
which leads me to : use linux software raid .
It 's much more configurable than any hardware raid card , both in supported raid levels and monitoring capabilities .
raid disks/arrays can be easily moved from one machine to another , one controller to another , etc .
I 've moved most of my disks between machines and controllers at least once.5 .
I 've come to believe over time that what you 're really looking for is X SATA ports , not " controller capable of doing raid over X disks " .
Use SATA " mass storage " cards , or raid cards that will let you use them in pass-through mode to access the individual disks directly in the OS .
here you have to be careful you do n't get bit by # 1 , 2 , or 3 again , since some raid cards do n't behave well when not actually doing raid ( I 'm still looking at you , Adaptec ) .
this makes it easier and much cheaper , you can mix and match lower-capacity cards to get 8-20 + sata ports for raid.5.1 " hw vs sw raid tangent " : what happens on a dedicated raid card when you run out of ports ?
you usually ca n't span raid cards , unless you get multiple identical fancy ( aka expensive ) raid controllers from the same manufacturer .
all linux needs is hard drives recognizable by the BIOS.6 .
when using software raid , buy a decent CPU .
You do n't need some quad-core beast , but you do n't want to be waiting on the CPU to finish your raid calculations .
any 2-2.5ghz C2D is probably more than adequate...I 've drawn the line with anything under 2ghz.7 .
kiss backups good-bye .
the price of any decent backup system capable of covering this much storage is WAY over the price of this whole setup .
Anything I really do n't want to lose gets saved multiple places outside of the raid array , otherwise I factor the potential for data loss as a risk of operating this way .
Personally I do n't really see how you could do otherwise in a setup like this.8 .
be prepared for bottlenecks .
you 're doing this on a home budget , you probably wo n't get 300mbyte/sec reads off of your array , no matter how many drives configured at what raid level .
I can only get 10-20mbyte/sec across my gigE network going to/from my raid 5 array .
This is probably due to the cheap PCI sata cards I 'm using .
I willingly make this trade-off to obtain the capacity I have for the price I spent.If any of these points is an overriding concern for your intended use , then you 'd have to re-evaluate the importance of all the other considerations.For me , stability , capacity and price are top three , leading me to research linux-stable cheap sata expansion cards ( which is just a nice way of saying , I buy and try probably 2x the # of controllers I actually use , to find ones that wo n't corrupt data , time out on random drive accesses , or simply not display the real drives to the OS , etc ) , and compromise by waiting a bit longer for network transfers .
Usua</tokentext>
<sentencetext>From building two or three of these at home myself, my practical experience for someone wanting a monster file server for home, on the cheap, consists of these high/low points:1. the other poster(s) above are 100\% correct about the raid card.
to get it all in one card you'll pay as much as 4-5 more hdd's, and that's on the low end for the card.
decent dedicated PCI-E raid cards are still in the $300+ range for anything with 8 ports or more.2.
be careful about buying older raid cards.
I have 2 16-port and 2 8-port adaptec PCI-X sata raid cards that are useless.
why? they only support raid arrays up to 2tb in size.
"update the firmware", you say.
sure, let me just grab the latest, from 2005, I'm sure that fixes it.
oh, wait, my raid cards already have that, and it doesn't remove that limitation.
8 drives, 16 drives, even, and they hard-code a limit of 2tb?
lame.3. I've seen nothing in a home-budget price range that performs as well as linux software raid.
My 1.5 yr old $500 tyan workstation mobo (S5397, in another computer) has dedicated SAS raid that can't seem to do better than 10mbyte/sec throughput.
reading data from drives that individually bench out at 50-60mbyte/sec.4.
which leads me to: use linux software raid.
It's much more configurable than any hardware raid card, both in supported raid levels and monitoring capabilities.
raid disks/arrays can be easily moved from one machine to another, one controller to another, etc.
I've moved most of my disks between machines and controllers at least once.5.
I've come to believe over time that what you're really looking for is X SATA ports, not "controller capable of doing raid over X disks".
Use SATA "mass storage" cards, or raid cards that will let you use them in pass-through mode to access the individual disks directly in the OS.
here you have to be careful you don't get bit by #1, 2, or 3 again, since some raid cards don't behave well when not actually doing raid (I'm still looking at you, Adaptec).
this makes it easier and much cheaper, you can mix and match lower-capacity cards to get 8-20+ sata ports for raid.5.1 "hw vs sw raid tangent" : what happens on a dedicated raid card when you run out of ports?
you usually can't span raid cards, unless you get multiple identical fancy (aka expensive) raid controllers from the same manufacturer.
all linux needs is hard drives recognizable by the BIOS.6.
when using software raid, buy a decent CPU.
You don't need some quad-core beast, but you don't want to be waiting on the CPU to finish your raid calculations.
any 2-2.5ghz C2D is probably more than adequate...I've drawn the line with anything under 2ghz.7.
kiss backups good-bye.
the price of any decent backup system capable of covering this much storage is WAY over the price of this whole setup.
Anything I really don't want to lose gets saved multiple places outside of the raid array, otherwise I factor the potential for data loss as a risk of operating this way.
Personally I don't really see how you could do otherwise in a setup like this.8.
be prepared for bottlenecks.
you're doing this on a home budget, you probably won't get 300mbyte/sec reads off of your array, no matter how many drives configured at what raid level.
I can only get 10-20mbyte/sec across my gigE network going to/from my raid 5 array.
This is probably due to the cheap PCI sata cards I'm using.
I willingly make this trade-off to obtain the capacity I have for the price I spent.If any of these points is an overriding concern for your intended use, then you'd have to re-evaluate the importance of all the other considerations.For me, stability, capacity and price are top three, leading me to research linux-stable cheap sata expansion cards (which is just a nice way of saying, I buy and try probably 2x the # of controllers I actually use, to find ones that won't corrupt data, time out on random drive accesses, or simply not display the real drives to the OS, etc), and compromise by waiting a bit longer for network transfers.
Usua</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682165</id>
	<title>Re:What about the electricity?</title>
	<author>Anonymous</author>
	<datestamp>1247477640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Do the 2TB drives draw more power than the 1TB? I have not looked.</p></div><p>It all depends on the number of platters. Overall they don't draw any more power and actually draw far less power per byte.<br>Especially with all the new "low power" <a href="http://www.techreport.com/articles.x/16393/11" title="techreport.com" rel="nofollow">drives</a> [techreport.com].</p>
	</htmltext>
<tokentext>Do the 2TB drives draw more power than the 1TB ?
I have not looked.It all depends on the number of platters .
Overall they do n't draw any more power and actually draw far less power per byte.Especially with all the new " low power " drives [ techreport.com ] .</tokentext>
<sentencetext>Do the 2TB drives draw more power than the 1TB?
I have not looked.It all depends on the number of platters.
Overall they don't draw any more power and actually draw far less power per byte.Especially with all the new "low power" drives [techreport.com].
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680905</id>
	<title>$1000 my ass</title>
	<author>Anonymous</author>
	<datestamp>1247515740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>That'll buy the disks.  But nothing else.  "Hey, look at my 10TB array.  It's sitting there on the table in those cardboard boxes."</p></htmltext>
<tokentext>That 'll buy the disks .
But nothing else .
" Hey , look at my 10TB array .
It 's sitting there on the table in those cardboard boxes .
"</tokentext>
<sentencetext>That'll buy the disks.
But nothing else.
"Hey, look at my 10TB array.
It's sitting there on the table in those cardboard boxes.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681257</id>
	<title>Sigh...</title>
	<author>Anonymous</author>
	<datestamp>1247517240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>From the<nobr> <wbr></nobr>.COM bust, I have two leftover Netapp filers, with a dozen or so shelves, about 2T of storage.  Each unit was about $250,000 new.  A half million dollars worth of gear.  Sitting in my shed.  It's not worth the cost of shipping to even give the unit away any more.  I guess it'll probably just go to the recycling depot.  It seems a bit sad for such a cool piece of hardware.</p><p>On the cheerier side, it is nice to enjoy the benefits of the new densities; I have two 1T external drives, I bought for $100 each, mirrored for redundancy, that sit in the corner of my desk, silently, drawing next to no power.  (Of course the NetApp would have better throughput in a major server environment, but for most practical purposes, a small RAID of modern 1T drives is just fine.)</p></htmltext>
<tokentext>From the .COM bust , I have two leftover Netapp filers , with a dozen or so shelves , about 2T of storage .
Each unit was about $ 250,000 new .
A half million dollars worth of gear .
Sitting in my shed .
It 's not worth the cost of shipping to even give the unit away any more .
I guess it 'll probably just go to the recycling depot .
It seems a bit sad for such a cool piece of hardware.On the cheerier side , it is nice to enjoy the benefits of the new densities ; I have two 1T external drives , I bought for $ 100 each , mirrored for redundancy , that sit in the corner of my desk , silently , drawing next to no power .
( Of course the NetApp would have better throughput in a major server environment , but for most practical purposes , a small RAID of modern 1T drives is just fine .
)</tokentext>
<sentencetext>From the .COM bust, I have two leftover Netapp filers, with a dozen or so shelves, about 2T of storage.
Each unit was about $250,000 new.
A half million dollars worth of gear.
Sitting in my shed.
It's not worth the cost of shipping to even give the unit away any more.
I guess it'll probably just go to the recycling depot.
It seems a bit sad for such a cool piece of hardware.On the cheerier side, it is nice to enjoy the benefits of the new densities; I have two 1T external drives, I bought for $100 each, mirrored for redundancy, that sit in the corner of my desk, silently, drawing next to no power.
(Of course the NetApp would have better throughput in a major server environment, but for most practical purposes, a small RAID of modern 1T drives is just fine.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682155</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247477580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Agree with all your points, except in point one where you criticize the $1000 as only applying to the drives and that doesn't equate to an array.  However, the $1000 also only applies to the processor and that also doesn't equate to a computer.  I do agree that the statements ("The Idea: Massive Hard Drive Storage Within a $1,000 Budget") are misleading.</p></htmltext>
<tokentext>Agree with all your points , except in point one where you criticize the $ 1000 as only applying to the drives and that does n't equate to an array .
However , the $ 1000 also only applies to the processor and that also does n't equate to a computer .
I do agree that the statements ( " The Idea : Massive Hard Drive Storage Within a $ 1,000 Budget " ) are misleading .</tokentext>
<sentencetext>Agree with all your points, except in point one where you criticize the $1000 as only applying to the drives and that doesn't equate to an array.
However, the $1000 also only applies to the processor and that also doesn't equate to a computer.
I do agree that the statements ("The Idea: Massive Hard Drive Storage Within a $1,000 Budget") are misleading.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681577</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247518440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>8x Seagate 7200.11 1.5TB Drives @ $119/ea from Microcenter<br>1x Highpoint RocketRAID 2322 w/ cables @ $329.97<br>1x 8 Drive SATA enclosure @ $225.00<br>Plug into a Mac Pro = 600MB/sec RAID 5<br>Sweet.</p></htmltext>
<tokentext>8x Seagate 7200.11 1.5TB Drives @ $ 119/ea from Microcenter1x Highpoint RocketRAID 2322 w/ cables @ $ 329.971x 8 Drive SATA enclosure @ $ 225.00Plug into a Mac Pro = 600MB/sec RAID 5Sweet .</tokentext>
<sentencetext>8x Seagate 7200.11 1.5TB Drives @ $119/ea from Microcenter1x Highpoint RocketRAID 2322 w/ cables @ $329.971x 8 Drive SATA enclosure @ $225.00Plug into a Mac Pro = 600MB/sec RAID 5Sweet.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681905</id>
	<title>Re:What about the electricity?</title>
	<author>fireduck64</author>
	<datestamp>1247476500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I can't agree with this enough.  My personal server room used to cost me about $60/mo in electricity.  Now with fewer and smaller drives and moving a number of things to a single vmware server I've reduced it to about $30/mo.  For anyone interested in seeing what things really use, I recommend picking up a kill-a-watt meter for $25.  <a href="http://www.amazon.com/P3-International-P4400-Electricity-Monitor/dp/B00009MDBU/ref=sr\_1\_1?ie=UTF8&amp;s=electronics&amp;qid=1247515957&amp;sr=8-1" title="amazon.com" rel="nofollow">http://www.amazon.com/P3-International-P4400-Electricity-Monitor/dp/B00009MDBU/ref=sr\_1\_1?ie=UTF8&amp;s=electronics&amp;qid=1247515957&amp;sr=8-1</a> [amazon.com]

Of course they can't measure things that are under 15A on a 110v outlet.  Higher loads can be measured using one of those loops on a multimeter, but I think you have to isolate and loop around the hot line only which can be a pain unless you are comfortable opening the breaker box while live.</htmltext>
<tokentext>I ca n't agree with this enough .
My personal server room used to cost me about $ 60/mo in electricity .
Now with fewer and smaller drives and moving a number of things to a single vmware server I 've reduced it to about $ 30/mo .
For anyone interested in seeing what things really use , I recommend picking up a kill-a-watt meter for $ 25 .
http : //www.amazon.com/P3-International-P4400-Electricity-Monitor/dp/B00009MDBU/ref = sr \ _1 \ _1 ? ie = UTF8&amp;s = electronics&amp;qid = 1247515957&amp;sr = 8-1 [ amazon.com ] Of course they ca n't measure things that are under 15A on a 110v outlet .
Higher loads can be measured using one of those loops on a multimeter , but I think you have to isolate and loop around the hot line only which can be a pain unless you are comfortable opening the breaker box while live .</tokentext>
<sentencetext>I can't agree with this enough.
My personal server room used to cost me about $60/mo in electricity.
Now with fewer and smaller drives and moving a number of things to a single vmware server I've reduced it to about $30/mo.
For anyone interested in seeing what things really use, I recommend picking up a kill-a-watt meter for $25.
http://www.amazon.com/P3-International-P4400-Electricity-Monitor/dp/B00009MDBU/ref=sr\_1\_1?ie=UTF8&amp;s=electronics&amp;qid=1247515957&amp;sr=8-1 [amazon.com]

Of course they can't measure things that are under 15A on a 110v outlet.
Higher loads can be measured using one of those loops on a multimeter, but I think you have to isolate and loop around the hot line only which can be a pain unless you are comfortable opening the breaker box while live.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682483</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247479020000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><i>"Did I miss anything?"</i>
<br> <br>
You forgot reason Five, which is stated in the article:  "we decided to create the ultimate RAID array, <b>one that should be able store all of your data for years to come</b> while providing much faster performance than any individual drive could."
<br> <br>
If this is supposed to be storing data for years, why am I dropping $1,000 on it today?  Why am I (or anyone) buying "the next several years" of storage all at once?  Did I win a huge settlement <a href="http://yro.slashdot.org/story/09/07/13/1727218/Wells-Fargo-Bank-Sues-Itself?art\_pos=1" title="slashdot.org">from suing myself?</a> [slashdot.org].  Did I win the lottery?  Did the economy suddenly rebound?
<br> <br>
And in several years when you actually use all 10 tb you're gonna be the douche with twelve old 1 tb drives while your buddies are cruising along with single 5 and 7 tb drives that they spent $100-$200 on.
<br> <br>
Wouldn't it make more sense to buy more when I fill what I already have?  What's the point of having 10 TB with 95\% of it empty?  Spending a grand on storage that will sit largely empty for several years all the while burning up electricity to keep those drives running doesn't make sense.  Might as well leave them in the box and lower the electric bill a bit for a few years.
<br> <br>
I'm surprised they even bothered with testing RAID 0.  12 drives, no redundancy?  Good way to lose 10 TB of data if you ask me.
<br> <br>
Just for shits and grins I decided to look up what drive the $85 they spent on a 1 tb drive would have bought 5 years ago, to see how this article would have gone if it was July 2004.  <a href="http://web.archive.org/web/20040627052554/www.pricewatch.com/1/26/5138-1.htm" title="archive.org">Looks like they'd have twelve 120gb SATA drives</a> [archive.org] or <a href="http://web.archive.org/web/20040627043336/www.pricewatch.com/1/26/4429-1.htm" title="archive.org">twelve 160gb IDE</a> [archive.org].  The IDE drives would be sadly outdated by now and the SATA drives would have given you 1.2 TB of storage all for $1,000.  I imagine we'll be looking at this article 5 years from now and thinking "WTF were they thinking??"</htmltext>
<tokentext>" Did I miss anything ?
" You forgot reason Five , which is stated in the article : " we decided to create the ultimate RAID array , one that should be able store all of your data for years to come while providing much faster performance than any individual drive could .
" If this is supposed to be storing data for years , why am I dropping $ 1,000 on it today ?
Why am I ( or anyone ) buying " the next several years " of storage all at once ?
Did I win a huge settlement from suing myself ?
[ slashdot.org ] . Did I win the lottery ?
Did the economy suddenly rebound ?
And in several years when you actually use all 10 tb you 're gon na be the douche with twelve old 1 tb drives while your buddies are cruising along with single 5 and 7 tb drives that they spent $ 100- $ 200 on .
Would n't it make more sense to buy more when I fill what I already have ?
What 's the point of having 10 TB with 95 \ % of it empty ?
Spending a grand on storage that will sit largely empty for several years all the while burning up electricity to keep those drives running does n't make sense .
Might as well leave them in the box and lower the electric bill a bit for a few years .
I 'm surprised they even bothered with testing RAID 0 .
12 drives , no redundancy ?
Good way to lose 10 TB of data if you ask me .
Just for shits and grins I decided to look up what drive the $ 85 they spent on a 1 tb drive would have bought 5 years ago , to see how this article would have gone if it was July 2004 .
Looks like they 'd have twelve 120gb SATA drives [ archive.org ] or twelve 160gb IDE [ archive.org ] .
The IDE drives would be sadly outdated by now and the SATA drives would have given you 1.2 TB of storage all for $ 1,000 .
I imagine we 'll be looking at this article 5 years from now and thinking " WTF were they thinking ? ?
"</tokentext>
<sentencetext>"Did I miss anything?
"
 
You forgot reason Five, which is stated in the article:  "we decided to create the ultimate RAID array, one that should be able store all of your data for years to come while providing much faster performance than any individual drive could.
"
 
If this is supposed to be storing data for years, why am I dropping $1,000 on it today?
Why am I (or anyone) buying "the next several years" of storage all at once?
Did I win a huge settlement from suing myself?
[slashdot.org].  Did I win the lottery?
Did the economy suddenly rebound?
And in several years when you actually use all 10 tb you're gonna be the douche with twelve old 1 tb drives while your buddies are cruising along with single 5 and 7 tb drives that they spent $100-$200 on.
Wouldn't it make more sense to buy more when I fill what I already have?
What's the point of having 10 TB with 95\% of it empty?
Spending a grand on storage that will sit largely empty for several years all the while burning up electricity to keep those drives running doesn't make sense.
Might as well leave them in the box and lower the electric bill a bit for a few years.
I'm surprised they even bothered with testing RAID 0.
12 drives, no redundancy?
Good way to lose 10 TB of data if you ask me.
Just for shits and grins I decided to look up what drive the $85 they spent on a 1 tb drive would have bought 5 years ago, to see how this article would have gone if it was July 2004.
Looks like they'd have twelve 120gb SATA drives [archive.org] or twelve 160gb IDE [archive.org].
The IDE drives would be sadly outdated by now and the SATA drives would have given you 1.2 TB of storage all for $1,000.
I imagine we'll be looking at this article 5 years from now and thinking "WTF were they thinking??
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909</id>
	<title>*gag*</title>
	<author>Anonymous</author>
	<datestamp>1247515740000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>Sorry, I saw Areca and I threw up in my mouth a little. Their controllers are terrible, and gave our company nothing but trouble in the short amount of time we used them in the past. Those that are still out in the field (sold to customers and have service contracts) are a constant nuisance.</htmltext>
<tokentext>Sorry , I saw Areca and I threw up in my mouth a little .
Their controllers are terrible , and gave our company nothing but trouble in the short amount of time we used them in the past .
Those that are still out in the field ( sold to customers and have service contracts ) are a constant nuisance .</tokentext>
<sentencetext>Sorry, I saw Areca and I threw up in my mouth a little.
Their controllers are terrible, and gave our company nothing but trouble in the short amount of time we used them in the past.
Those that are still out in the field (sold to customers and have service contracts) are a constant nuisance.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685687</id>
	<title>Re:What for?</title>
	<author>dbIII</author>
	<datestamp>1247499720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>There's the state in between of a white box server with a decent RAID card.  Why buy HP crap with dismal support for more when you can have a Supermicro board and a 3ware RAID card?  The big names only make sense if they actually have support staff in the city you are based on, otherwise you can get new parts faster than they can fly in.  Also the machine may not be critical enough that a few days of downtime is going to cost you a lot.  Redundancy of some things can be cheaper than most would expect if you are willing to put up with reduced performance for a while.<br>Then there's the state below that which many University or poorly funded private labs occupy.  A few TB of fast scratch data space cobbled together as cheaply as possible can fill a few roles.</htmltext>
<tokentext>There 's the state in between of a white box server with a decent RAID card .
Why buy HP crap with dismal support for more when you can have a Supermicro board and a 3ware RAID card ?
The big names only make sense if they actually have support staff in the city you are based on , otherwise you can get new parts faster than they can fly in .
Also the machine may not be critical enough that a few days of downtime is going to cost you a lot .
Redundancy of some things can be cheaper than most would expect if you are willing to put up with reduced performance for a while.Then there 's the state below that which many University or poorly funded private labs occupy .
A few TB of fast scratch data space cobbled together as cheaply as possible can fill a few roles .</tokentext>
<sentencetext>There's the state in between of a white box server with a decent RAID card.
Why buy HP crap with dismal support for more when you can have a Supermicro board and a 3ware RAID card?
The big names only make sense if they actually have support staff in the city you are based on, otherwise you can get new parts faster than they can fly in.
Also the machine may not be critical enough that a few days of downtime is going to cost you a lot.
Redundancy of some things can be cheaper than most would expect if you are willing to put up with reduced performance for a while.Then there's the state below that which many University or poorly funded private labs occupy.
A few TB of fast scratch data space cobbled together as cheaply as possible can fill a few roles.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681463</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688623</id>
	<title>Not gag.</title>
	<author>Sxooter</author>
	<datestamp>1247572860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've been using the 1680 series for a couple of years now and they've been rock solid, for the most part.  I had one that was delivered bad, replaced it and the replacement is running smoothly a year later.  Have you got any kind of outside numbers that show them having a higher failure rate / data corruption rate?  The brand I've had problems with in the past has mostly been adaptec.</p></htmltext>
<tokentext>I 've been using the 1680 series for a couple of years now and they 've been rock solid , for the most part .
I had one that was delivered bad , replaced it and the replacement is running smoothly a year later .
Have you got any kind of outside numbers that show them having a higher failure rate / data corruption rate ?
The brand I 've had problems with in the past has mostly been adaptec .</tokentext>
<sentencetext>I've been using the 1680 series for a couple of years now and they've been rock solid, for the most part.
I had one that was delivered bad, replaced it and the replacement is running smoothly a year later.
Have you got any kind of outside numbers that show them having a higher failure rate / data corruption rate?
The brand I've had problems with in the past has mostly been adaptec.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28699937</id>
	<title>RAID0?  Aww cmon</title>
	<author>stanchion7</author>
	<datestamp>1247591160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>When I first saw this article on Tom's Hardware I just about laughed my ass off.  10 hard drives in a RAID0 they propose?  Okay, that is sick as fuck and everyone knows it.  Must be a really slow summer.

Anyhow, here is my current party piece.  Prices in Canadian loonies:

Solaris Express snv\_114 [free]
Intel C2D @ 2.66GHz, 6GB DDR-II, Asus P5Q-E, plus trimmings and 2x Seagate Barracuda 7200.11 1TB internal (ZFS boot+mirror) [Only 'new' part was the mobo, $200.  Total value is probably like $800]
2x LSI SAS3442E-R PCI Express HBA, these were relatively cheap.   [$230 each]
2x AMCC/3Ware CBL-SFF8088IB-20M 2m (SFF-8088) to (SFF-8470) Cable [$80 each]
EnhanceBOX E8 MS  External Mini SAS Enclosure with dual PS [$800]
8x Seagate Barracuda 7200.11 1500GB 32mb 7200rpm, in 2x RAIDZ [$140 each]

So after tax like $3500 Canadian.  It is worth it though.

A *little* bit of redundancy to protect your data at the very least is a good idea.  RAID1+n is supposed to be best, but my data is largely multimedia.
I can upgrade RAID size by swapping out hard drives for bigger ones (must be done in block of four.  What comes after 2TB drives I wonder?).
The external enclosure has 2 RAIDZ's consisting of four drives each.  If a drive dies, I hot swap it out.   The odds of a drive dying before I replace it with a bigger one is low.
If a controller dies, I can switch back and forth between RAID sets until I find a replacement controller.  New controller type doesn't matter, as long as it's mini-SAS and supported by Open Solaris!
Battery backed controller doesn't matter since ZFS is always consistent on disk.
Redundant power in the external enclosure.
If the PC dies I can just replace it (needs 2x standard PCI-e x16 slots though, which not all mobos have.)

All in all I think it's a pretty sweet setup.  I get 50+MBps copying files on or off of the system.  Only drawback is I had to upgrade the firmware on some of my drives because of Seagate's 'bug'.</htmltext>
<tokentext>When I first saw this article on Tom 's Hardware I just about laughed my ass off .
10 hard drives in a RAID0 they propose ?
Okay , that is sick as fuck and everyone knows it .
Must be a really slow summer .
Anyhow , here is my current party piece .
Prices in Canadian loonies : Solaris Express snv \ _114 [ free ] Intel C2D @ 2.66GHz , 6GB DDR-II , Asus P5Q-E , plus trimmings and 2x Seagate Barracuda 7200.11 1TB internal ( ZFS boot + mirror ) [ Only 'new ' part was the mobo , $ 200 .
Total value is probably like $ 800 ] 2x LSI SAS3442E-R PCI Express HBA , these were relatively cheap .
[ $ 230 each ] 2x AMCC/3Ware CBL-SFF8088IB-20M 2m ( SFF-8088 ) to ( SFF-8470 ) Cable [ $ 80 each ] EnhanceBOX E8 MS External Mini SAS Enclosure with dual PS [ $ 800 ] 8x Seagate Barracuda 7200.11 1500GB 32mb 7200rpm , in 2x RAIDZ [ $ 140 each ] So after tax like $ 3500 Canadian .
It is worth it though .
A * little * bit of redundancy to protect your data at the very least is a good idea .
RAID1 + n is supposed to be best , but my data is largely multimedia .
I might be miffed to lose files but I certainly wo n't be out of a job .
I can upgrade RAID size by swapping out hard drives for bigger ones ( must be done in block of four .
What comes after 2TB drives I wonder ? ) .
The external enclosure has 2 RAIDZ 's consisting of four drives each .
If a drive dies , I hot swap it out .
The odds of a drive dying before I replace it with a bigger one is low .
If a controller dies , I can switch back and forth between RAID sets until I find a replacement controller .
New controller type does n't matter , as long as it 's mini-SAS and supported by Open Solaris !
Battery backed controller does n't matter since ZFS is always consistent on disk .
Redundant power in the external enclosure .
If the PC dies I can just replace it ( needs 2x standard PCI-e x16 slots though , which not all mobos have .
) All in all I think it 's a pretty sweet setup .
I get 50 + MBps copying files on or off of the system .
Only drawback is I had to upgrade the firmware on some of my drives because of Seagate 's 'bug' .</tokentext>
<sentencetext>When I first saw this article on Tom's Hardware I just about laughed my ass off.
10 hard drives in a RAID0 they propose?
Okay, that is sick as fuck and everyone knows it.
Must be a really slow summer.
Anyhow, here is my current party piece.
Prices in Canadian loonies:

Solaris Express snv_114 [free]
Intel C2D @ 2.66GHz, 6GB DDR-II, Asus P5Q-E, plus trimmings and 2x Seagate Barracuda 7200.11 1TB internal (ZFS boot+mirror) [Only 'new' part was the mobo, $200.
Total value is probably like $800]
2x LSI SAS3442E-R PCI Express HBA, these were relatively cheap.
[$230 each]
2x AMCC/3Ware CBL-SFF8088IB-20M 2m (SFF-8088) to (SFF-8470) Cable [$80 each]
EnhanceBOX E8 MS  External Mini SAS Enclosure with dual PS [$800]
8x Seagate Barracuda 7200.11 1500GB 32MB 7200rpm, in 2x RAIDZ [$140 each]

So after tax like $3500 Canadian.
It is worth it though.
A *little* bit of redundancy to protect your data at the very least is a good idea.
RAID1+n is supposed to be best, but my data is largely multimedia.
I might be miffed to lose files but I certainly won't be out of a job.
I can upgrade RAID size by swapping out hard drives for bigger ones (must be done in blocks of four.
What comes after 2TB drives I wonder?).
The external enclosure has 2 RAIDZ's consisting of four drives each.
If a drive dies, I hot swap it out.
The odds of a drive dying before I replace it with a bigger one is low.
If a controller dies, I can switch back and forth between RAID sets until I find a replacement controller.
New controller type doesn't matter, as long as it's mini-SAS and supported by OpenSolaris!
Battery backed controller doesn't matter since ZFS is always consistent on disk.
Redundant power in the external enclosure.
If the PC dies I can just replace it (needs 2x standard PCI-e x16 slots though, which not all mobos have.)

All in all I think it's a pretty sweet setup.
I get 50+MBps copying files on or off of the system.
Only drawback is I had to upgrade the firmware on some of my drives because of Seagate's 'bug'.</sentencetext>
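The usable space of the pool described above is easy to sanity-check. A quick sketch (assuming two raidz1 vdevs of four 1.5 TB drives each, as described, with decimal TB sizes):

```python
# Usable capacity of the pool described above: two raidz1 vdevs,
# each four 1.5 TB drives, one drive's worth of parity per vdev.
DRIVE_TB = 1.5
DRIVES_PER_VDEV = 4
VDEVS = 2

# raidz1 keeps (n - 1) drives' worth of data per vdev
usable_tb = VDEVS * (DRIVES_PER_VDEV - 1) * DRIVE_TB
raw_tb = VDEVS * DRIVES_PER_VDEV * DRIVE_TB

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")  # raw: 12.0 TB, usable: 9.0 TB
```

So each vdev can lose one drive without data loss, at the cost of a quarter of the raw space.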
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682203</id>
	<title>Re:Misleading headline</title>
	<author>MartinSchou</author>
	<datestamp>1247477760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>GbE is 1,000 megabits/s in theory. That's no more than 125 megabytes/s. With four Intel X25-E drives you'll hit <a href="http://www.anandtech.com/storage/showdoc.aspx?i=3531&amp;p=25" title="anandtech.com">226 MB/s random read and 127 MB/s random write</a> [anandtech.com] throughput.</p><p>I'm fairly certain you can settle for the four on-board SATA ports for that. And those four drives combined will more or less eat a few thousand IO/s as hors d'oeuvres.</p></htmltext>
<tokenext>GbE is 1,000 megabits/s in theory .
That 's no more than 125 megabytes/s .
With four Intel X25-E drives you 'll hit 226 MB/s random read and 127 MB/s random write [ anandtech.com ] throughput.I 'm fairly certain you can settle for the four on-board SATA ports for that .
And those four drives combined will more or less eat a few thousand IO/s as horderves .</tokentext>
<sentencetext>GbE is 1,000 megabits/s in theory.
That's no more than 125 megabytes/s.
With four Intel X25-E drives you'll hit 226 MB/s random read and 127 MB/s random write [anandtech.com] throughput. I'm fairly certain you can settle for the four on-board SATA ports for that.
And those four drives combined will more or less eat a few thousand IO/s as hors d'oeuvres.
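The arithmetic in this comment checks out. A quick sketch ("GbE" taken as a 1,000 Mb/s line rate, ignoring protocol overhead; the drive figures are the ones quoted above):

```python
# GbE line rate vs. the quoted four-drive X25-E throughput numbers.
GBE_MBIT_S = 1000
gbe_mbyte_s = GBE_MBIT_S / 8           # 125 MB/s ceiling, before overhead
x25e_random_read = 226                 # MB/s, figure quoted above
x25e_random_write = 127                # MB/s, figure quoted above

# The network, not the drives, is the bottleneck on reads.
print(gbe_mbyte_s)                     # 125.0
print(x25e_random_read > gbe_mbyte_s)  # True
```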
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684529</id>
	<title>Re:So what's the MBTF on this array?</title>
	<author>TClevenger</author>
	<datestamp>1247489580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>12 consumer level SATA drives by Samsung. What'd be interesting is to see how long it takes before it fails with complete data loss due to drive failure. Raid 5 isn't going to save this turkey.</p></div><p>I think that applies to any one company.  If you spread your RAID out among similar-sized disks from different manufacturers, you stand less of a chance of a bad batch of drives (Deathstars) or firmware (yeah you, Seagate) dying in a short period of time.</p></htmltext>
<tokenext>12 consumer level SATA drives by Samsung .
What 'd be interesting is to see how long it takes before it fails with complete data loss due to drive failure .
Raid 5 is n't going to save this turkey.I think that applies to any one company .
If you spread your RAID out among similar-sized disks from different manufacturers , you stand less of a chance of a bad batch of drives ( Deathstars ) or firmware ( yeah you , Seagate ) dying in a short period of time .</tokentext>
<sentencetext>12 consumer level SATA drives by Samsung.
What'd be interesting is to see how long it takes before it fails with complete data loss due to drive failure.
Raid 5 isn't going to save this turkey. I think that applies to any one company.
If you spread your RAID out among similar-sized disks from different manufacturers, you stand less of a chance of a bad batch of drives (Deathstars) or firmware (yeah you, Seagate) dying in a short period of time.
	</sentencetext>
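The worry here can be put in rough numbers. A sketch with a made-up 3% annualized failure rate; real AFRs vary by model and, as this commenter points out, failures within one batch are correlated rather than independent:

```python
# Chance that a stripe-only (RAID0-style) array loses data in a year,
# assuming independent drives with the same annualized failure rate.
afr = 0.03   # hypothetical 3% per-drive annual failure rate
n = 12       # drives in the array

# Any single failure destroys a striped array.
p_array_loss = 1 - (1 - afr) ** n
print(round(p_array_loss, 3))  # 0.306
```

Roughly a 30% chance of total loss per year under those assumptions, which is why striping twelve consumer drives without parity draws so much fire in this thread.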
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681505</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681839</id>
	<title>Re:What about the electricity?</title>
	<author>Anonymous</author>
	<datestamp>1247476260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Haha, electricity in Quebec is 5.9c/kwh.</p></htmltext>
<tokenext>Haha , electricity in Quebec is 5.9c/kwh .</tokentext>
<sentencetext>Haha, electricity in Quebec is 5.9c/kwh.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684877</id>
	<title>OpenFiler</title>
	<author>meehawl</author>
	<datestamp>1247492340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>I'd love to hear others' feedback on similar personal use ULTRA CHEAP RAID setups.</i></p><p>For software, use <a href="http://en.wikipedia.org/wiki/Openfiler" title="wikipedia.org">OpenFiler</a> [wikipedia.org].</p></htmltext>
<tokenext>I 'd love to hear others feedback on similar personal use ULTRA CHEAP RAID setups.For software , use OpenFiler [ wikipedia.org ] .</tokentext>
<sentencetext>I'd love to hear others feedback on similar personal use ULTRA CHEAP RAID setups.For software, use OpenFiler [wikipedia.org].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683115</id>
	<title>Re:...How is this news?</title>
	<author>Anonymous</author>
	<datestamp>1247481540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>there is no such thing as "buy more storage than you could possibly need"<nobr> <wbr></nobr>...</p><p>You give us 1TB, and we fill it.</p><p>you give us 10TB, and we fill it too...</p><p>and I don't have hard data on this, but I bet that we can fill 10TB proportionally faster than 1TB</p></htmltext>
<tokenext>there is no such thing as " buy more storage than you could possibly need " ...You give us 1TB , and we fill it.you give us 10TB , and we fill it too...and I do n't have hard data on this , but I bet that we can fill 10TB proportionally faster than 1TB</tokentext>
<sentencetext>there is no such thing as "buy more storage than you could possibly need" ...You give us 1TB, and we fill it.you give us 10TB, and we fill it too...and I don't have hard data on this, but I bet that we can fill 10TB proportionally faster than 1TB</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681347</id>
	<title>Bad Journalism</title>
	<author>Anonymous</author>
	<datestamp>1247517600000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext>Ignoring the fact that $1000 isn't even close to what this system cost, it doesn't say anything about the raid setup, stripe size, block size, other configurations.  It doesn't mention rebuild or expansion time on the Raid 5 array.  It would be at least marginally worth reading to some of the people looking at expanding their storage capacities if they had mentioned a few of those things.  Also, if you got your hands on a high-performance raid6 card, why not benchmark raid6 versus raid5?!<br> <br>

This article doesn't do anything but publish a glorified spec sheet for Samsung and Areca.  Also notice that they decided to stick desktop drives in a Raid array, a big no-no if you want your array to last more than a few weeks.</htmltext>
<tokenext>Ignoring the fact that $ 1000 is n't even close to what this system cost , it does n't say anything about the raid setup , stripe size , bloc size , other configurations .
It does n't mention rebuild or expansion time on the Raid 5 array .
It would be at least marginally worth reading to some of the people looking at expanding their storage capacities if they had mentioned a few of those things .
Also , if you got your hands on a high performance raid6 card , why not benchmark raid6 versus raid5 ? !
This article does n't do anything but publish a glorified spec sheet for Samsung and Areca .
Also notice that they decided to stick desktop drives in a Raid array , a big no-no if you want your array to last more than a few weeks .</tokentext>
<sentencetext>Ignoring the fact that $1000 isn't even close to what this system cost, it doesn't say anything about the raid setup, stripe size, bloc size, other configurations.
It doesn't mention rebuild or expansion time on the Raid 5 array.
It would be at least marginally worth reading to some of the people looking at expanding their storage capacities if they had mentioned a few of those things.
Also, if you got your hands on a high performance raid6 card, why not benchmark raid6 versus raid5?!
This article doesn't do anything but publish a glorified spec sheet for Samsung and Areca.
Also notice that they decided to stick desktop drives in a Raid array, a big no-no if you want your array to last more than a few weeks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683527</id>
	<title>Re:...How is this news?</title>
	<author>jedidiah</author>
	<datestamp>1247483580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is someone publishing their recipe for "a whole lot of disk".</p><p>Reports on just how accessible this technology is are very newsworthy.</p><p>Although this machine is not on the simple side of things.</p></htmltext>
<tokenext>This is someone publishing their recipe for " a whole lot of disk " .Reports on just how accessable this technology is are very newsworthy.Although this machine is not on the simple side of things .</tokentext>
<sentencetext>This is someone publishing their recipe for "a whole lot of disk".Reports on just how accessable this technology is are very newsworthy.Although this machine is not on the simple side of things.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681571</id>
	<title>Re:Misleading headline</title>
	<author>Beardo the Bearded</author>
	<datestamp>1247518440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Another eye-catching headline to get click throughs, that's just wrong. Sad.</p></div><p>Then we shall give them what they ask for and bring forth the slashpocalypse.</p></htmltext>
<tokenext>Another eye-catching headline to get click throughs , that 's just wrong .
Sad.Then we shall give them what they ask for and bring forth the slashpocalypse .</tokentext>
<sentencetext>Another eye-catching headline to get click throughs, that's just wrong.
Sad. Then we shall give them what they ask for and bring forth the slashpocalypse.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685835</id>
	<title>D-link</title>
	<author>soundguy</author>
	<datestamp>1247500620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have a lot of spare drives (250/300/500) from my web hosting business because I rotate them out every couple of years. I found a 16-bay 4u case on ebay for chump change, grabbed some cheap Hipoint cards, and set to work building a backup storage array using software raid. As it turns out, the case was cheap because some of the caddy connections were sketchy, and of course the cheap cards would randomly "not see" a drive now and then. After spending way too much time troubleshooting and configuring things, I finally got it all set up and working. Had 4 TB of backup storage. Unfortunately, I never bothered to MEASURE the damned case, which turned out to be 26" deep. This conflicted somewhat with my 24" rack, especially when attempting to close the rear door.</p><p>Plan B: Garage sale with a 16-bay case and dozens of hard drives available. A D-link 321 NAS with built-in web server (for config), ftp, telnet, a 1gbps ethernet port, and a pair of WD 2 TB drives in RAID0. Roughly $500 for 4TB. It's barely larger than the 2 drives and I have it stashed in another building on the far end of my property. Honestly, DIY just isn't worth it sometimes.</p></htmltext>
<tokenext>I have a lot of spare drives ( 250/300/500 ) from my web hosting business because I rotate them out every couple of years .
I found a 16-bay 4u case on ebay for chump change , grabbed some cheap Hipoint cards , and set to work building a backup storage array using software raid .
As it turns out , the case was cheap because some of the caddy connections were sketchy , and of course the cheap cards would randomly " not see " a drive now and then .
After spending way too much time troubleshooting and configuring things , I finally got it all set up and working .
Had 4 TB of backup storage .
Unfortunately , I never bothered to MEASURE the damned case , which turned out to be 26 " deep .
This conflicted somewhat with my 24 " rack , especially when attempting to close the rear door .
Plan B : Garage sale with a 16-bay case and dozens of hard drives available .
A D-link 321 NAS with built-in web server ( for config ) , ftp , telnet , a 1gbps ethernet port , and a pair of WD 2 TB drives in RAID0 .
Roughly $ 500 for 4TB .
It 's barely larger than the 2 drives and I have it stashed in another building on the far end of my property .
Honestly , DIY just is n't worth is sometimes .</tokentext>
<sentencetext>I have a lot of spare drives (250/300/500) from my web hosting business because I rotate them out every couple of years.
I found a 16-bay 4u case on ebay for chump change, grabbed some cheap Hipoint cards, and set to work  building a backup storage array using software raid.
As it turns out, the case was cheap because some of the caddy connections were sketchy, and of course the cheap cards would randomly "not see" a drive now and then.
After spending way too much time troubleshooting and configuring things, I finally got it all set up and working.
Had 4 TB of backup storage.
Unfortunately, I never bothered to MEASURE the damned case, which turned out to be 26" deep.
This conflicted somewhat with my 24" rack, especially when attempting to close the rear door.
Plan B: Garage sale with a 16-bay case and dozens of hard drives available.
A D-link 321 NAS with built-in web server (for config), ftp, telnet, a 1gbps ethernet port, and a pair of  WD 2 TB drives in RAID0.
Roughly $500 for 4TB.
It's barely larger than the 2 drives and I have it stashed in another building on the far end of my property.
Honestly, DIY just isn't worth it sometimes.
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683881</id>
	<title>Re:...How is this news?</title>
	<author>HTH NE1</author>
	<datestamp>1247485860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Yes, we know that you can buy more storage than you could possibly need.</p></div><p>Reasonable Limits Aren't.</p></htmltext>
<tokenext>Yes , we know that you can buy more storage then you could possibly need.Reasonable Limits Are n't .</tokentext>
<sentencetext>Yes, we know that you can buy more storage then you could possibly need.Reasonable Limits Aren't.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681403</id>
	<title>Seems Incomplete</title>
	<author>darkmeridian</author>
	<datestamp>1247517780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've been using FreeNAS 0.7 RC1 for a while. It works pretty well for a NAS, and does the job for my small business. However, I don't think it would be useful for a larger business that requires great performance and reliability.</p></htmltext>
<tokenext>I 've been using FreeNAS 0.7 RC1 for a while .
It works pretty well for a NAS , and does the job for my small business .
However , I do n't think it would be useful for a larger business that requires great performance and reliability .</tokentext>
<sentencetext>I've been using FreeNAS 0.7 RC1 for a while.
It works pretty well for a NAS, and does the job for my small business.
However, I don't think it would be useful for a larger business that requires great performance and reliability.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685115</id>
	<title>4*$25 SATAx4 Cards Instead of 1 $1200 SAS/SATAx20</title>
	<author>Doc Ruby</author>
	<datestamp>1247494140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This article is stupid mainly because it spends over $1000 (something like $1200) on the RAID card, while spending another $1000 on 12 drives. A RAID card that supports 20 drives, not just 12, and mixed SAS and SATA drives instead of just the SATA it needs. Not to mention that the RAID itself can go in SW under the Linux kernel instead of spending on HW to do it. And that single card is a single failure point, making the 12x redundancy of the drives kinda irrelevant.</p><p>Instead, 4 $25 4-port SATA cards are enough. If you want parallel throughput, go for PCI-e, but just the parallelism from 4 cards in old PCI is enough for most apps. Spend 10*$80=$800 on 10 1TB drives, $100 on the SATA cards, $50 on a 400W power supply (plenty for 10 drives each pulling about 15W, + motherboard), $20 on a 10-bay case, and blow past $1000 a little by putting a P4/2.4GHz/1GB-ethernet motherboard in there for $50. Now you've got almost 10TB capacity, good throughput. Install Ubuntu server, config your SW RAID, ethernet/webserver and whatever NAS server SW you prefer (sshfs, NFS, whatever). Presto! For just over $1000 (for real), you've got almost 10TB (spend $1100 for the full 10TBs).</p><p>The only tricky part might be staggering the drives' spinup kickoffs so the 400W power supply doesn't catch all their load spikes at once. But I'm sure someone can post a bootloader config or patch that will handle the only really wizard part of this whole challenge.</p></htmltext>
<tokenext>This article is stupid mainly because it spends over $ 1000 ( something like $ 1200 ) on the RAID card , while spending another $ 1000 on 12 drives .
A RAID card that supports 20 drives , not just 12 , and mixed SAS and SATA drives instead of just the SATA it needs .
Not to mention that the RAID itself can go in SW under the Linux kernel instead of spending on HW to do it .
And that single card is a single failurepoint , making the 12x redundancy of the drives kinda irrelevant.Instead , 4 $ 25 4-port SATA cards are enough .
If you want parallel throughput , go for PCI-e , but just the parallelism from 4 cards in old PCI is enough for most apps .
Spend 10 * $ 80 = $ 800 on 10 1TB drives , the $ 100 on the SATA cards , $ 50 on a 400W power supply ( plenty for 10 drives each pulling about 15W , + motherboard ) , $ 20 on a 10-bay case , and blow past $ 1000 a little by putting a P4/2.4GHz/1GB-ethernet motherboard in there for $ 50 .
Now you 've got almost 1TB capacity , good thruput .
Install Ubuntu server , config your SW RAID , ethernet/webserver and whatever NAS server SW you prefer ( sshfs , NFS , whatever ) .
Presto ! for just over $ 1000 ( for real ) , you 've got almost 10TB ( spend $ 1100 for the full 10TBs ) .The only tricky part might be staggering the drives ' spinup kickoffs so the 400W power supply does n't catch all their load spikes at once .
But I 'm sure someone can post a bootloader config or patch that will handle the only really wizard part of this whole challenge .</tokentext>
<sentencetext>This article is stupid mainly because it spends over $1000 (something like $1200) on the RAID card, while spending another $1000 on 12 drives.
A RAID card that supports 20 drives, not just 12, and mixed SAS and SATA drives instead of just the SATA it needs.
Not to mention that the RAID itself can go in SW under the Linux kernel instead of spending on HW to do it.
And that single card is a single failure point, making the 12x redundancy of the drives kinda irrelevant. Instead, 4 $25 4-port SATA cards are enough.
If you want parallel throughput, go for PCI-e, but just the parallelism from 4 cards in old PCI is enough for most apps.
Spend 10*$80=$800 on 10 1TB drives, the $100 on the SATA cards, $50 on a 400W power supply (plenty for 10 drives each pulling about 15W, + motherboard), $20 on a 10-bay case, and blow past $1000 a little by putting a P4/2.4GHz/1GB-ethernet motherboard in there for $50.
Now you've got almost 10TB capacity, good throughput.
Install Ubuntu server, config your SW RAID, ethernet/webserver and whatever NAS server SW you prefer (sshfs, NFS, whatever).
Presto! For just over $1000 (for real), you've got almost 10TB (spend $1100 for the full 10TBs). The only tricky part might be staggering the drives' spinup kickoffs so the 400W power supply doesn't catch all their load spikes at once.
But I'm sure someone can post a bootloader config or patch that will handle the only really wizard part of this whole challenge.</sentencetext>
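The parts-list arithmetic in this comment does add up. A quick tally, using the prices exactly as the poster quotes them (hypothetical 2009 prices, not current ones):

```python
# Tallying the proposed budget build from the comment above.
parts = {
    "10 x 1TB drive @ $80": 10 * 80,
    "4 x 4-port SATA card @ $25": 4 * 25,
    "400W power supply": 50,
    "10-bay case": 20,
    "P4 motherboard w/ GbE": 50,
}
total = sum(parts.values())
print(total)  # 1020 -- "blow past $1000 a little", as promised
```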
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28687229</id>
	<title>This is hobby stuff</title>
	<author>rs79</author>
	<datestamp>1247513880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you want to see real drive arrays, snag yourself a pass to the RSNA show where medical imaging technology vendors strut their stuff. They consume massive amounts of data and it's typical for storage products there to have hundreds of drives in them and be the size of a London apartment.</p><p>But this really isn't new and I can't get too excited about this project. I have a Mylex SCSI raid card on my desk (actually I have a box of them) worth all of about $12 on ebay these days (never mind it's still $200 from dealers, or was $2200 6 years ago) that has 3 connectors and will manage up to 45 drives (15 per scsi bus).</p><p>Show me a big array of 2.5" notebook drives on a desktop or a raid array of SSD drives and you'll raise my eyebrow, but I've hooked up 10 drives before: you stick them in a cabinet, attach the cable, configure the array then go. This is not really a big deal.</p></htmltext>
<tokenext>If you want to see real drive arrays snag yourself a pass to the RSNA show where medical imaging technology vendors strut their stuff .
They consume massive amounts of data and it 's typical for storage products there to have hundreds of drives in them and be the size of a London apartment.But this really is n't new and I ca n't get too excited about this project .
I have a Mylex SCSI raid card on my desk ( actually I have a box of them ) worth all of about $ 12 on ebay tehse days ( never mind it 's still $ 200 from dealers , or was $ 2200 6 years ago ) that has 3 connectors and will manage up to 45 drives ( 15 per scsi bus ) .Show me a big array of 2.5 " notebook drives on a desktop or a raid array of SSD drives and you 'll raise my eyebrow , but I 've hooked up 10 drives before : you stick them in a cabinet , attach the cable , configure the array then go .
This is not really a big deal .</tokentext>
<sentencetext>If you want to see real drive arrays snag yourself a pass to the RSNA show where medical imaging technology vendors strut their stuff.
They consume massive amounts of data and it's typical for storage products there to have hundreds of drives in them and be the size of a London apartment. But this really isn't new and I can't get too excited about this project.
I have a Mylex SCSI raid card on my desk (actually I have a box of them) worth all of about $12 on ebay these days (never mind it's still $200 from dealers, or was $2200 6 years ago) that has 3 connectors and will manage up to 45 drives (15 per scsi bus). Show me a big array of 2.5" notebook drives on a desktop or a raid array of SSD drives and you'll raise my eyebrow, but I've hooked up 10 drives before: you stick them in a cabinet, attach the cable, configure the array then go.
This is not really a big deal.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683021</id>
	<title>Re:*gag*</title>
	<author>gfody</author>
	<datestamp>1247481240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>*sigh* at modding an anecdote "5, Informative"<br>
I work on a particularly IO demanding application and have found Areca controllers to be a godsend. We've had dozens in production servers for many years now and they have proven to be dependable. We rigorously tested many different controllers in their highest performing configurations and nothing came close to the battery backed ARC-1680 w/4GB. This included cards from LSI, 3ware and Adaptec with their respective maximum amounts of cache and battery backup units. When configured for maximum performance (write back cache, aggressive read ahead, etc.) some of the others even failed our recovery scenarios where we pull power during rebuild and under heavy load, but the Areca was rock solid.<br> <br>
I think if you want to be safe with a RAID controller, do your own tests and don't listen to shills on the internet.</htmltext>
<tokenext>* sigh * at modding an anecdote " 5 , Informative " I work on a particularly IO demanding application and have found Areca controllers to be a godsend .
We 've had dozens in production servers for many years now and they have proven to be dependable .
We rigorously tested many different controllers in their highest performing configurations and nothing came close to the battery backed ARC-1680 w/4GB .
This included cards from LSI , 3ware and Adaptec with their respective maximum amounts of cache and battery backup units .
When configured for maximum performance ( write back cache , aggressive read ahead , etc .
) some of the others even failed our recovery scenarios where we pull power during rebuild and under heavy load , but the Areca was rock solid .
I think if you want to be safe with a RAID controller , do your own tests and do n't listen to shills on the internet .</tokentext>
<sentencetext>*sigh* at modding an anecdote "5, Informative"
I work on a particularly IO demanding application and have found Areca controllers to be a godsend.
We've had dozens in production servers for many years now and they have proven to be dependable.
We rigorously tested many different controllers in their highest performing configurations and nothing came close to the battery backed ARC-1680 w/4GB.
This included cards from LSI, 3ware and Adaptec with their respective maximum amounts of cache and battery backup units.
When configured for maximum performance (write back cache, aggressive read ahead, etc.) some of the others even failed our recovery scenarios where we pull power during rebuild and under heavy load, but the Areca was rock solid.
I think if you want to be safe with a RAID controller, do your own tests and don't listen to shills on the internet.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681903</id>
	<title>A better solution</title>
	<author>Anonymous</author>
	<datestamp>1247476500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>All prices in Canadian $$$

Buy 2 of these to fit a total of 8 SATA drives: <a href="http://www.canadacomputers.com/index.php?do=ShowProduct&amp;cmd=pd&amp;pid=021484&amp;cid=516.690" title="canadacomputers.com" rel="nofollow">http://www.canadacomputers.com/index.php?do=ShowProduct&amp;cmd=pd&amp;pid=021484&amp;cid=516.690</a> [canadacomputers.com] (289.98 + taxes)

Buy 8 1.5TB Drives: <a href="http://www.canadacomputers.com/index.php?do=ShowProduct&amp;cmd=pd&amp;pid=019453&amp;cid=HD.443.877" title="canadacomputers.com" rel="nofollow">http://www.canadacomputers.com/index.php?do=ShowProduct&amp;cmd=pd&amp;pid=019453&amp;cid=HD.443.877</a> [canadacomputers.com] (1119.92 + taxes)

Total: 1409.90 + taxes for 2 external SATA enclosures &amp; 12TB of disk space.  Setup takes less than 1 hour.  You can always just start with 3TB + 1 enclosure for a total of approx. 450.00 to begin with and keep adding on to it as drive prices go down and as you need disk space... you will save even more that way</htmltext>
<tokenext>All prices in Canadian $ $ $ Buy 2 of these to fit a total of 8 SATA drives : http : //www.canadacomputers.com/index.php ? do = ShowProduct&amp;cmd = pd&amp;pid = 021484&amp;cid = 516.690 [ canadacomputers.com ] ( 289.98 + taxes ) Buy 8 1.5TB Drives : http : //www.canadacomputers.com/index.php ? do = ShowProduct&amp;cmd = pd&amp;pid = 019453&amp;cid = HD.443.877 [ canadacomputers.com ] ( 1119.92 + taxes ) Total : 1409.90 + taxes for 2 external SATA enclosures &amp; 12TB of disk space .
Setup takes less than 1 hour .
You can always just start with 3GB + 1 enclosure for a total of aprox 450.00 to begin with and keep adding on to it as drive prices go down and as you need disk space...you will save even more that way</tokentext>
<sentencetext>All prices in Canadian $$$

Buy 2 of these to fit a total of 8 SATA drives: http://www.canadacomputers.com/index.php?do=ShowProduct&amp;cmd=pd&amp;pid=021484&amp;cid=516.690 [canadacomputers.com] (289.98 + taxes)

Buy 8 1.5TB Drives: http://www.canadacomputers.com/index.php?do=ShowProduct&amp;cmd=pd&amp;pid=019453&amp;cid=HD.443.877 [canadacomputers.com] (1119.92 + taxes)

Total: 1409.90 + taxes for 2 external SATA enclosures &amp; 12TB of disk space.
Setup takes less than 1 hour.
You can always just start with 3GB + 1 enclosure for a total of aprox 450.00 to begin with and keep adding on to it as drive prices go down and as you need disk space...you will save even more that way</sentencetext>
</comment>
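<!--
The arithmetic in the parts list above can be sanity-checked with a short sketch (prices in CAD as quoted in the comment; taxes excluded):

```python
# Verify the quoted totals: two 4-bay enclosures plus eight 1.5 TB drives.
enclosure_total = 289.98   # price quoted for the two enclosures together
drives_total = 1119.92     # price quoted for all eight drives
num_drives = 8
drive_capacity_tb = 1.5

total_cost = enclosure_total + drives_total
total_capacity_tb = num_drives * drive_capacity_tb

print(f"{total_cost:.2f} CAD for {total_capacity_tb:.0f} TB")
```

This matches the comment's 1409.90 CAD / 12 TB figures.
-->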
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684823</id>
	<title>Re:Redundant Array of INEXPENSIVE Disks</title>
	<author>spire3661</author>
	<datestamp>1247491980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What about backups? Why bother putting an array in RAID 6 for a home environment when you have to back up the data anyway? If you DON'T necessarily care if the data goes pop (like, say, daily DVR files that you watch and erase), then why bother with (full) redundancy? I'm just curious as to the NEED for all of this, when it's REALLY hard to back it all up.

RAID, for the most part, is about high availability, not data integrity/storage.</htmltext>
<tokenext>What about backups ?
Why bother putting an array in raid 6 for a home environment when you have to backup the data anyways .
If you DONT necessarily care if the data goes pop ( like say daily DVR files that you watch and erase ) then why bother with ( full ) redundancy ?
Im jsut curious as the the NEED for all of this , when its REALLY hard to back it all up .
RAID , for the most part is about high availability , not data integrity/storage .</tokenext>
<sentencetext>What about backups?
Why bother putting an array in raid 6 for a home environment when you have to backup the data anyways.
If you DONT necessarily care if the data goes pop (like say daily DVR files that you watch and erase) then why bother with (full) redundancy?
Im jsut curious as the the NEED for all of this, when its REALLY hard to back it all up.
RAID, for the most part is about high availability, not data integrity/storage.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871</id>
	<title>...How is this news?</title>
	<author>Darkness404</author>
	<datestamp>1247515680000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>How is this news? Yes, we all know traditional HDs are cheap. Yes, we know that you can buy more storage than you could possibly need. So how is this newsworthy? It really is no faster nor more reliable than SSDs. I think this is more or less a non-story.</htmltext>
<tokenext>How is this news ?
Yes , we all know traditional HDs are cheap .
Yes , we know that you can buy more storage then you could possibly need .
So how is this newsworthy ?
It really is no faster nor more reliable than SSDs .
I think this is more or less a non-story .</tokenext>
<sentencetext>How is this news?
Yes, we all know traditional HDs are cheap.
Yes, we know that you can buy more storage then you could possibly need.
So how is this newsworthy?
It really is no faster nor more reliable than SSDs.
I think this is more or less a non-story.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681843</id>
	<title>Re:How does the home user back this up?</title>
	<author>Anonymous</author>
	<datestamp>1247476320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I currently have an HP MSL2024 (LTO-3 Autoloader) -- I'm averaging 789 GB/tape compressed on LTO3 tapes for data on about 40 desktop PCs where we are backing everything up.</p></htmltext>
<tokenext>I currently have an HP MSL2024 ( LTO-3 Autoloader ) -- I 'm averaging 789 GB /tape compressed on LTO3 tapes for data on about 40 desktop PC 's where we are backing everything up .</tokenext>
<sentencetext>I currently have an HP MSL2024 (LTO-3 Autoloader) -- I'm averaging 789 GB /tape compressed on LTO3 tapes for data on about 40 desktop PC's where we are backing everything up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28693403</id>
	<title>Re:How does the home user back this up?</title>
	<author>sootman</author>
	<datestamp>1247596140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>1400 gmail accounts?</p></htmltext>
<tokenext>1400 gmail accounts ?</tokenext>
<sentencetext>1400 gmail accounts?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682765</id>
	<title>Re:We do this now</title>
	<author>Anonymous</author>
	<datestamp>1247480160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>The price dropped from thousands of dollars to hundreds, and took me a full workday to get set up.</p></div><p>How much do they pay you? With benefits, you could easily cost your employer $500-$1000/day.</p>
	</htmltext>
<tokenext>The price dropped from thousands of dollars to hundreds , and took me a full workday to get set up.How much do they pay you ?
With benefits , you could easily cost your employer $ 500- $ 1000/day .</tokenext>
<sentencetext>The price dropped from thousands of dollars to hundreds, and took me a full workday to get set up.How much do they pay you?
With benefits, you could easily cost your employer $500-$1000/day.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684257</id>
	<title>Re:We do this now</title>
	<author>HTH NE1</author>
	<datestamp>1247488020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Drobo.</p><p>Needs only a one-time configuration for maximum reported capacity (e.g. 16 TB), then it's a JBOD that configures itself. Hot-swap the smallest drive as bigger drives become available. I got mine used from someone who works at Pixar. There's a $50 rebate on the new 4-bay models w/FW800.</p></htmltext>
<tokenext>Drobo.Needs only a one-time configuration for maximum reported capacity ( e.g .
16 TB ) , then it 's a JBOD that configures itself .
Hot-swap the smallest drive as bigger drives become available .
I got mine used from someone who works at Pixar .
There 's a $ 50 rebate on the new 4-bay models w/FW800 .</tokenext>
<sentencetext>Drobo.Needs only a one-time configuration for maximum reported capacity (e.g.
16 TB), then it's a JBOD that configures itself.
Hot-swap the smallest drive as bigger drives become available.
I got mine used from someone who works at Pixar.
There's a $50 rebate on the new 4-bay models w/FW800.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683535</id>
	<title>Re:*gag*</title>
	<author>PacketShaper</author>
	<datestamp>1247483580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Their controllers are terrible...</p></div><p>

Please elaborate.<br>
We are using several Areca controllers in very large (24 drive) SAS arrays without any issues.<br>
I am not trolling, I am genuinely interested to know the problems you had and if you upgraded to the latest firmware on each card (they put out updates often and the changelog lists many bugfixes).</p>
	</htmltext>
<tokenext>Their controllers are terrible.. . Please elaborate .
We are using several Areca controllers in very large ( 24 drive ) SAS arrays without any issues .
I am not trolling , I am genuinely interested to know the problems you had and if you upgraded to the latest firmwares on each card ( they put out updates often and the changelog lists many bugfixes ) .</tokenext>
<sentencetext>Their controllers are terrible...


Please elaborate.
We are using several Areca controllers in very large (24 drive) SAS arrays without any issues.
I am not trolling, I am genuinely interested to know the problems you had and if you upgraded to the latest firmwares on each card (they put out updates often and the changelog lists many bugfixes).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681287</id>
	<title>Re:How does the home user back this up?</title>
	<author>Anonymous</author>
	<datestamp>1247517300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Media server from hell. If I had it linked up via UPnP to, say:</p><p>- A latest-gen console/television (for UPnP/DLNA support for a media center experience)</p><p>- Per-room speaker system (think different music in every room, a la AirTunes/Sonos but not so proprietary)</p><p>- Closed-circuit webcam system (because you're a pornstar/creepy voyeur dude)</p><p>- Internet-based music streaming service (you just can't get enough, can you?)</p><p>It's a lot. Too much even...but I've been considering something like this for a while as a media server. Except mine would actually cost a grand and wouldn't involve such crap hardware. Yes, I use the integrated RAID feature on a motherboard. Why? Because I'm not an enterprise. I'm a bachelor.</p></htmltext>
<tokenext>Media server from hell .
If had it linked up via upnp to say : - A Latest Gen Console/Television ( For UPNP/DLNA support for a media center experience.- Per room based speaker system ( Think different music in every room , ala airtunes/sonos but not so proprietary ) - Closed Circuit Webcam system .
( Because you 're a pornstar/creepy voyeur dude ) - Internet based music streaming service ( You just ca n't get enough , can you ?
) It 's alot .
Too much even...but I 've been considering something like this for a while as a media server .
Except mine would actually cost a grand and would n't involve such crap hardware .
Yes , I use the integrated RAID feature on a motherboard .
Why ? Because I 'm not an enterprise .
I 'm a bachelor .</tokenext>
<sentencetext>Media server from hell.
If had it linked up via upnp to say:- A Latest Gen Console/Television (For UPNP/DLNA support for a media center experience.- Per room based speaker system (Think different music in every room, ala airtunes/sonos but not so proprietary)- Closed Circuit Webcam system.
(Because you're a pornstar/creepy voyeur dude)- Internet based music streaming service (You just can't get enough, can you?
)It's alot.
Too much even...but I've been considering something like this for a while as a media server.
Except mine would actually cost a grand and wouldn't involve such crap hardware.
Yes, I use the integrated RAID feature on a motherboard.
Why? Because I'm not an enterprise.
I'm a bachelor.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685479</id>
	<title>Re:Redundant Array of INEXPENSIVE Disks</title>
	<author>sofakingon</author>
	<datestamp>1247498100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I just built an 8x500GB Seagate 7200.12 RAID 10 array using a Dell PERC 5/i controller. <br> <br>

I bought the controller for $108 on eBay. Add the battery for write caching, and 2x 4-port SAS&gt;SATA cables, and I spent a total of $190 on the controller/cables/shipping. <br> <br>

I picked the 500GB drives as they use less power, are extremely silent, run very very cool (case temp is a constant 39C with 8 drives in a standard mid-tower case), and VMware ESXi only recognizes up to 2TB per array. The total cost was $600 with shipping for the array.<br> <br>

The PERC 5/i has been clocked at 2GB/s+ burst and 500MB/s sustained. For detailed benchmarks see <a href="http://www.overclock.net/hard-drives-storage/359025-perc-5-i-raid-card-tips.html" title="overclock.net">http://www.overclock.net/hard-drives-storage/359025-perc-5-i-raid-card-tips.html</a> [overclock.net]</htmltext>
<tokenext>I just built a 8x500gb Seagate 7200.12 RAID 10 array using a Dell Perc 5i controller .
I bought the controller for $ 108 on Ebay .
Add the battery for write caching , and 2x 4 port SAS &gt; SATA cables , and I spent a total of $ 190 on the controller/cables/shipping .
I picked the 500gb drives as they use less power , are extremely silent , run very very cool ( case temp is a contstant 39c with 8 drives in a standard mid-tower case ) , and VMWare ESXi only recognizes up to 2TB per array .
The total cost was $ 600 with shipping for the array .
The Perc 5i has been clocked at 2TB/s + burst and 500GB/s sustained .
For detailed benchmarks see http : //www.overclock.net/hard-drives-storage/359025-perc-5-i-raid-card-tips.html [ overclock.net ]</tokenext>
<sentencetext>I just built a 8x500gb Seagate 7200.12 RAID 10 array using a Dell Perc 5i controller.
I bought the controller for $108 on Ebay.
Add the battery for write caching, and 2x 4 port SAS&gt;SATA cables, and I spent a total of $190 on the controller/cables/shipping.
I picked the 500gb drives as they use less power, are extremely silent, run very very cool (case temp is a contstant 39c with 8 drives in a standard mid-tower case), and VMWare ESXi only recognizes up to 2TB per array.
The total cost was $600 with shipping for the array.
The Perc 5i has been clocked at 2TB/s+ burst and 500GB/s sustained.
For detailed benchmarks see http://www.overclock.net/hard-drives-storage/359025-perc-5-i-raid-card-tips.html [overclock.net]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28704673</id>
	<title>Re:How does the home user back this up?</title>
	<author>Rudd-O</author>
	<datestamp>1247679180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Build your array with ZFS.  Back it up incrementally and atomically with zfs send / zfs receive.</p></htmltext>
<tokenext>Build your array with ZFS .
Back it up incrementally and atomically with zfs send / zfs receive .</tokenext>
<sentencetext>Build your array with ZFS.
Back it up incrementally and atomically with zfs send / zfs receive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685001</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247493180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper.  Did I miss anything?</p></div><p>you're right, this article headline is jacked. i clicked on it to find out if there was some drastic price drop on media, but a grand wouldn't cover the RAID controller.</p>
	</htmltext>
<tokenext>are left with a marketing advertisement for an Areca product that does n't even exist and a notice that storage just keeps getting cheaper .
Did I miss anything ? you 're right this article headline is jacked .
i clicked on it to find out if there was some drastic price drop on media but a grand would n't cover the RAID controller .</tokenext>
<sentencetext>are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper.
Did I miss anything?you're right this article headline is jacked.
i clicked on it to find out if there was some drastic price drop on media but a grand wouldn't cover the RAID controller.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685137</id>
	<title>Re:How does the home user back this up?</title>
	<author>tbuskey</author>
	<datestamp>1247494440000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Or how would a photographer archive this?  So that your kids could show your pictures to your grandkids.  Like you were able to go through a shoebox full of negatives with good quality.</p><p>1st, you'll want to partition your data.  This I can lose (the TV shows you recorded on your DVR), that I want to keep forever (photos &amp; movies of the kids, 1st house), these I want to protect in case of disaster (taxes, resumes, scans of bills, current work projects).</p><p>Don't bother with the 1st case.  Archive the forever stuff to multiple media and do backups of the last.</p><p>Hopefully, backups are the smallest chunk.  Back up often, but you don't need to keep more than 2-3 copies.  If you want to retrieve something from x/y/zz, that's an archive, not a backup.</p><p>Archives should be made to multiple copies (DVDs?) in diverse locations.  Not magnetic unless you redo it periodically.</p><p>Offline media (tapes, optical, printouts) haven't kept up with online media capacity.  *sigh*</p></htmltext>
<tokenext>Or how would a photographer archive this ?
So that your kids could show your pictures to your grandkids .
Like you were able to go through a shoebox full of negatives with good quality.1st , you 'll want to partition your data .
This I can lose ( the TV shows you recorded on your DVR ) , that I want to keep forever ( photos &amp; movies of the kids , 1st house ) , these I want to protect in case of disaster ( taxes , resumes , scans of bills , current work projects ) .Do n't bother with the 1st case .
Archive the forever to multiple media and do backups of the last.Hopefully , backups are the smallest chunk .
Often , but you do n't need to keep more then 2-3 copies .
If you want to retrieve something from x/y/zz , that 's an archive not a backup.Archives should be made to multiple copies ( DVDs ?
) in diverse locations .
Not magnetic unless you redo it periodically.Offline media ( tapes , optical , printouts ) have n't kept up with online media capacity .
* sigh *</tokenext>
<sentencetext>Or how would a photographer archive this?
So that your kids could show your pictures to your grandkids.
Like you were able to go through a shoebox full of negatives with good quality.1st, you'll want to partition your data.
This I can lose (the TV shows you recorded on your DVR), that I want to keep forever (photos &amp; movies of the kids, 1st house), these I want to protect in case of disaster (taxes, resumes, scans of bills, current work projects).Don't bother with the 1st case.
Archive the forever to multiple media and do backups of the last.Hopefully, backups are the smallest chunk.
Often, but you don't need to keep more then 2-3 copies.
If you want to retrieve something from x/y/zz, that's an archive not a backup.Archives should be made to multiple copies (DVDs?
) in diverse locations.
Not magnetic unless you redo it periodically.Offline media (tapes, optical, printouts) haven't kept up with online media capacity.
*sigh*</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28686597</id>
	<title>been building machines like this for over 3 years</title>
	<author>rcpitt</author>
	<datestamp>1247506560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>10 drives of the current "sweet spot" for drive size vs. cost/megabyte with a relatively inexpensive motherboard and CPU and Linux.
<p>
Started out with just over 2.5 TB using 300 GB drives and now extends to over 10TB with 1.5TB drives, all done with RAID 5 and no spares.
</p><p>
Currently have a half dozen of them in the house with various loads of video from "live streaming eagle nest cameras" and there are several more at the biologist's and out in the field.
</p><p>
Started out at about $2500 (Canadian) and they're down to around $1500 - but I've started putting in faster chips (than the original Celerons) because I found that having them online was a good thing when doing video editing as a render farm, but far better when they had some power :)
</p><p>
One thing to remember is that the bit error rate on these large drives is "per megabyte", which means the larger the drive, the more likely there will be a failure. I've been bitten once by a double failure before I could get a spare in and integrated - lost a whole array. I've seen one study that shows that the likelihood of a second drive failure before a spare is integrated, even if spare integration starts immediately when an error is detected, is almost 100% once we hit about 3-4TB/drive. I've started using the drives as mirrored pairs and spreading the load (of video files) over them by other means than RAID 5 - even RAID 20 is not going to be enough IMHO. We need something new, like a completely new subsystem concept I saw a note about a while back but can't find at the moment - I'll post more if I recall/find it.</p></htmltext>
<tokenext>10 drives of the current " sweet spot " for drive size vs. cost/megabyte with a relatively inexpensive mother board and CPU and Linux .
started out with just over 2.5 Gigs with 300 Gig drives and now extends to over 10TB with 1.5TByte drives all done with RAID 5 and no spares Currently have a half dozen of them in the house with various loads of video from " live streaming eagle nest cameras " and there are several more at the biologist 's and out in the field .
Started out about $ 2500 ( Canadian ) and are down to around $ 1500 - but I 've started putting faster chips ( than the original Celerons ) because I found that having them online was a good thing when doing video editing as a render farm but far better when they had some power : ) One thing to remember is that the bit error rate on these large drives is " per megabyte " which means the larger the drive , the more likely there will be a failure .
I 've been bitten once by a double failure before I could get a spare in and integrated - lost a whole array .
I 've seen one study that shows that the likelihood of a second drive failure before a spare is integrated , even if spare integration starts immediately an error is detected , is almost 100 \ % once we hit about 3-4TBytes/drive .
I 've started using the drives as mirrored pairs and spreading the load ( of video files ) over them with other means than RAID 5 - even RAID 20 is not going to be enough IMHO - need something new like a completely new subsystem concept I saw a note about a while back but ca n't find at the moment - I 'll post more if I recall/find it .</tokenext>
<sentencetext>10 drives of the current "sweet spot" for drive size vs. cost/megabyte with a relatively inexpensive mother board and CPU and Linux.
started out with just over 2.5 Gigs with 300 Gig drives and now extends to over 10TB with 1.5TByte drives all done with RAID 5 and no spares

Currently have  a half dozen of them in the house with various loads of video from "live streaming eagle nest cameras" and there are several more at the biologist's and out in the field.
Started out about $2500 (Canadian) and are down to around $1500 - but I've started putting faster chips (than the original Celerons) because I found that having them online was a good thing when doing video editing as a render farm but far better when they had some power :)

One thing to remember is that the bit error rate on these large drives is "per megabyte" which means the larger the drive, the more likely there will be a failure.
I've been bitten once by a double failure before I could get a spare in and integrated - lost a whole array.
I've seen one study that shows that the likelihood of a second drive failure before a spare is integrated, even if spare integration starts immediately an error is detected, is almost 100\% once we hit about 3-4TBytes/drive.
I've started using the drives as mirrored pairs and spreading the load (of video files) over them with other means than RAID 5 - even RAID 20 is not going to be enough IMHO - need something new like a completely new subsystem concept I saw a note about a while back but can't find at the moment - I'll post more if I recall/find it.</sentencetext>
</comment>
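<!--
The rebuild-risk point above can be roughly quantified: during a RAID 5 rebuild every surviving drive is read in full, so the odds of hitting an unrecoverable read error (URE) grow with drive size. A minimal sketch, assuming the commonly quoted consumer-drive URE rate of 1 per 1e14 bits and independent errors (both simplifying assumptions, not figures from the thread):

```python
# Probability of at least one URE while rebuilding a RAID 5 array:
# all surviving drives must be read end to end.
def p_rebuild_ure(drive_tb: float, n_drives: int,
                  ure_per_bit: float = 1e-14) -> float:
    bits_read = (n_drives - 1) * drive_tb * 1e12 * 8  # surviving drives, fully read
    return 1 - (1 - ure_per_bit) ** bits_read

for tb in (0.3, 1.5, 4.0):
    print(f"{tb} TB drives, 10-drive RAID 5: {p_rebuild_ure(tb, 10):.0%}")
```

Under these assumptions a 10-drive RAID 5 of 4 TB drives has over a 90% chance of hitting a URE during rebuild, which is roughly the commenter's "almost 100%" point.
-->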
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680939</id>
	<title>No controller? No failover? No interconnect?</title>
	<author>guruevi</author>
	<datestamp>1247515860000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>What good are 12 hard drives without anything else? Absolutely nothing. An enclosure alone to correctly power and cool these drives costs at least $800, and that's only with (e)SATA connections. No SAS, no FibreChannel, no failover, no cache or backup batteries, no controllers, no hardware that can connect your clients to it over e.g. NFS or SMB.</p><p>Currently I can do professional storage at ~$1000/TB if you get 10TB; including backups, cooling, and power it would probably run you $1600/TB over the lifetime of the hard drives (5 years).</p></htmltext>
<tokenext>What good are 12 hard drives without anything else ?
Absolutely nothing .
An enclosure alone to correctly power and cool these drives costs at least $ 800 and that 's only with ( e ) SATA connections .
No SAS , no FibreChannel , no Failovers , no cache or backup batteries , no controllers , no hardware that can connect your clients over eg .
NFS or SMB to it.Currently I can do professional storage in ~ $ 1000/TB if you get 10TB , including backups , cooling and power that would probably run you $ 1600/TB over the lifetime of the hard drives ( 5 years ) .</tokenext>
<sentencetext>What good are 12 hard drives without anything else?
Absolutely nothing.
An enclosure alone to correctly power and cool these drives costs at least $800 and that's only with (e)SATA connections.
No SAS, no FibreChannel, no Failovers, no cache or backup batteries, no controllers, no hardware that can connect your clients over eg.
NFS or SMB to it.Currently I can do professional storage in ~$1000/TB if you get 10TB, including backups, cooling and power that would probably run you $1600/TB over the lifetime of the hard drives (5 years).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682233</id>
	<title>You can get 10 TB for $845</title>
	<author>Hurricane78</author>
	<datestamp>1247477940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Provided you have the controller to cope with it.</p><p>But why is this even on /.? Who cares about that personal story?</p><p>Or can I do an "article" tomorrow, about the 127 $5 mice I connected to my PC, and how I got it to display 127 cursors and do choreographies with it on a beamer?<br>Actually I think this would be more interesting than TFA. ^^</p></htmltext>
<tokenext>Provided you have the controller to cope with it.But why is this even on /. ?
Who cares about that personal story ? Or can I do an " article " tomorrow , about the 127 $ 5 mice I connected to my pc,and how I got it to display 127 cursors and do coreographies with it on a beamer ? Actually I think this would be more interesting than TFA .
^ ^</tokenext>
<sentencetext>Provided you have the controller to cope with it.But why is this even on /.?
Who cares about that personal story?Or can I do an "article" tomorrow, about the 127 $5 mice I connected to my pc,and how I got it to display 127 cursors and do coreographies with it on a beamer?Actually I think this would be more interesting than TFA.
^^</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681925</id>
	<title>Re:What about the electricity?</title>
	<author>Anonymous</author>
	<datestamp>1247476680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wow, I didn't realize California is 31 cents/kWh - I pay 7 cents up here in Canada (no, we don't live in igloos).</p><p>However, remember that the 8 watts is a MAX figure. A lot of fileservers are idle most of the time, especially for a home user, and use a fraction of that. The Seagate 1TB 7200.12 drives use less than 5 watts at idle and less than 10 at full power.</p><p>But yes, power (in most areas) will need to be a factor in the pricing. Don't forget to figure in power for the CPU, motherboard, etc., and the A/C that one would use if it heats up their room/home running that many.</p></htmltext>
<tokenext>Wow , I did n't realize California is 31 cents/kwh - I pay 7 cents up here in Canada ( no we do n't live in igloos ) However , remember that the 8watts is a MAX figure .
A lot of fileservers are idle most of the time , especially for a home user , and using a fraction of that .
The Seagate 1TB 7200.12 drives use less than 5Watts at idle and less than 10 at full power.But yes , power ( in most areas ) will need to be a factor in the pricing .
Do n't forget to rule in power for the CPU , motherboard , etc and A/C that one would use if it heats up their room/home running that many .</tokenext>
<sentencetext>Wow, I didn't realize California is 31 cents/kwh - I pay 7 cents up here in Canada (no we don't live in igloos)However, remember that the 8watts is a MAX figure.
A lot of fileservers are idle most of the time, especially for a home user, and using a fraction of that.
The Seagate 1TB 7200.12 drives use less than 5Watts at idle and less than 10 at full power.But yes, power (in most areas) will need to be a factor in the pricing.
Don't forget to rule in power for the CPU, motherboard, etc and A/C that one would use if it heats up their room/home running that many.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</parent>
</comment>
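<!--
The electricity math in the comment above can be put in numbers. A rough sketch using the thread's own figures (8 W max and ~5 W idle per drive, $0.31/kWh vs. $0.07/kWh; all taken from the comments, not verified), ignoring the CPU, motherboard, and A/C the commenter also mentions:

```python
# Yearly electricity cost for the drives alone in a 12-drive array.
def yearly_cost(n_drives: int, watts_per_drive: float,
                price_per_kwh: float) -> float:
    kwh_per_year = n_drives * watts_per_drive * 24 * 365 / 1000  # W to kWh/yr
    return kwh_per_year * price_per_kwh

for label, rate in (("California", 0.31), ("Canada", 0.07)):
    idle = yearly_cost(12, 5, rate)
    busy = yearly_cost(12, 8, rate)
    print(f"{label}: ${idle:.0f}/yr idle, ${busy:.0f}/yr at max (drives only)")
```

Even at the higher rate, the drives themselves cost well under $300 a year to run.
-->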
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682905</id>
	<title>Not stupid</title>
	<author>wsanders</author>
	<datestamp>1247480760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You cannot violate this rule:</p><p>"Pick any two: performance, cost, availability."</p><p>That applies to *any* cost. At $100/TB, it's "pick any one". Your average user is just looking for a place to stash his pr0n, so optimizing for cost is perfectly fine.</p></htmltext>
<tokenext>You can not violate this rule : " Pick any two : performance , cost , availability .
" That applies to * any * cost .
At $ 100/TB , it 's " pick any one " .
Your average user is just looking for a place to stash his pr0n , so optimizing for cost is perfectly fine .</tokenext>
<sentencetext>You cannot violate this rule:"Pick any two: performance, cost, availability.
"That applies to *any* cost.
At $100/TB, it's "pick any one".
Your average user is just looking for a place to stash his pr0n, so optimizing for cost is perfectly fine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28686821</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>RedWizzard</author>
	<datestamp>1247508300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Two: Said controller does not exist.  They listed the controller as ARC-1680ix-<b>20</b>.  Areca <a href="http://www.areca.com.tw/products/pcietosas1680series.htm" title="areca.com.tw">makes no such controller</a> [areca.com.tw].  They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere.</p></div><p>They screwed up the model number. They clearly state that they used the model with 16 internal ports and 4 external ports - which is the ARC-1680ix-16. If anything, the WTF here is that Areca calls its 20-port controller a ...-16.</p>
	</htmltext>
<tokenext>Two : Said controller does not exist .
They listed the controller as ARC-1680ix-20 .
Areca makes no such controller [ areca.com.tw ] .
They make an 8 , 12 , 16 , 24 but no 20 unless they 've got some advanced product unlisted anywhere.They screwed up the model number .
They clearly state that they used the model with 16 internal ports and 4 external ports - which is the ARC-1680ix-16 .
If anything the WTF here is that Areca call their 20 port controller a ...-16 .</tokentext>
<sentencetext>Two: Said controller does not exist.
They listed the controller as ARC-1680ix-20.
Areca makes no such controller [areca.com.tw].
They make an 8, 12, 16, 24 but no 20 unless they've got some advanced product unlisted anywhere. They screwed up the model number.
They clearly state that they used the model with 16 internal ports and 4 external ports - which is the ARC-1680ix-16.
If anything, the WTF here is that Areca calls its 20-port controller a ...-16.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681505</id>
	<title>So what's the MTBF on this array?</title>
	<author>Whuffo</author>
	<datestamp>1247518200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>12 consumer-level SATA drives by Samsung. What would be interesting to see is how long it takes before the array fails with complete data loss due to drive failure. RAID 5 isn't going to save this turkey.</htmltext>
<tokenext>12 consumer level SATA drives by Samsung .
What 'd be interesting is to see how long it takes before it fails with complete data loss due to drive failure .
Raid 5 is n't going to save this turkey .</tokentext>
<sentencetext>12 consumer level SATA drives by Samsung.
What would be interesting to see is how long it takes before the array fails with complete data loss due to drive failure.
RAID 5 isn't going to save this turkey.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681809</id>
	<title>FreeBSD ZFS raidz</title>
	<author>fireduck64</author>
	<datestamp>1247476140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>For backups I really like FreeBSD ZFS (with raidz).  I rsync in new data from my servers and then create a ZFS snapshot.  It works quite well.  I have it running at home and at work.  Of course the raidz is software, so you can use whatever cheap controllers you have around.  The thing that makes me love it is that ZFS does its own checksums, so it can detect if the data it is reading does not match what it wrote.</htmltext>
<tokenext>For backups I really like FreeBSD ZFS ( with raidz ) .
I rsync in new data from my servers and then create a ZFS snapshot .
It works quite well .
I have running it at home and at work .
Of course the raidz is software so you can use whatever cheap controllers you have around .
The thing that makes me love it is that ZFS does its own checksums so it can detect if the data it is reading does not match what it wrote .</tokentext>
<sentencetext>For backups I really like FreeBSD ZFS (with raidz).
I rsync in new data from my servers and then create a ZFS snapshot.
It works quite well.
I have it running at home and at work.
Of course the raidz is software so you can use whatever cheap controllers you have around.
The thing that makes me love it is that ZFS does its own checksums so it can detect if the data it is reading does not match what it wrote.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955</parent>
</comment>
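The rsync-then-snapshot cycle the commenter describes can be sketched as the two commands it boils down to. The pool/dataset names and source path below are made up for illustration; on a real FreeBSD/ZFS host you would hand these lists to `subprocess.run()` rather than print them:

```python
# Sketch of the "rsync in new data, then take a ZFS snapshot" backup
# cycle. Dataset and path names are hypothetical examples.
from datetime import date

def backup_commands(src: str, dataset: str, mountpoint: str) -> list[list[str]]:
    stamp = date.today().isoformat()
    return [
        # 1. Pull new data from the server into the raidz-backed dataset.
        ["rsync", "-a", "--delete", src, mountpoint],
        # 2. Freeze the result as a read-only, dated snapshot.
        ["zfs", "snapshot", f"{dataset}@{stamp}"],
    ]

for cmd in backup_commands("server:/srv/data/", "tank/backups", "/tank/backups"):
    print(" ".join(cmd))
```

Because each snapshot is cheap and read-only, a bad rsync run can't silently destroy older backup states - they stay addressable as `dataset@date`.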
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685267</id>
	<title>Re:*gag*</title>
	<author>greg1104</author>
	<datestamp>1247495940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have decidedly mixed feelings about Areca's controllers as well.  The performance has been good, but the management situation has been awful.  I wrote about some of my problems that popped up after the first time I lost a drive <a href="http://notemagnet.blogspot.com/2008/08/linux-disk-failures-areca-is-not-so.html" title="blogspot.com">on my blog</a> [blogspot.com].  If you get one of the cards that uses the network management port as the UI for doing things, supposedly that's better than what I went through, but that still makes for a painful monitoring stack.  Compare to the 3ware cards I've been using recently, where it only took a couple of minutes to setup smartmontools to watch the drive health and I moved on.</p></htmltext>
<tokenext>I have decidedly mixed feelings about Areca 's controllers as well .
The performance has been good , but the management situation has been awful .
I wrote about some of my problems that popped up after the first time I lost a drive on my blog [ blogspot.com ] .
If you get one of the cards that uses the network management port as the UI for doing things , supposedly that 's better than what I went through , but that still makes for a painful monitoring stack .
Compare to the 3ware cards I 've been using recently , where it only took a couple of minutes to setup smartmontools to watch the drive health and I moved on .</tokentext>
<sentencetext>I have decidedly mixed feelings about Areca's controllers as well.
The performance has been good, but the management situation has been awful.
I wrote about some of my problems that popped up after the first time I lost a drive on my blog [blogspot.com].
If you get one of the cards that uses the network management port as the UI for doing things, supposedly that's better than what I went through, but that still makes for a painful monitoring stack.
Compare to the 3ware cards I've been using recently, where it only took a couple of minutes to setup smartmontools to watch the drive health and I moved on.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683431</id>
	<title>Linux software RAID doesn't work</title>
	<author>[HeMaN]</author>
	<datestamp>1247482980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Thecus are selling NASes that use Linux software RAID, and they bloody well suck!<br>
<br>
Sure it's fast, but it corrupts the data. I've lost 1 TB of data, and a friend of mine who bought one recently lost about the same. The forums are full of people who lost data, so now Thecus includes a disclaimer in their firmware that they are not accountable for any data loss - how secure does that make you feel?<br>
<br>
It could be that Thecus fucked up the embedded Linux they run on it (the N5200), but my next NAS is going to be a server PC with internal disks, running ZFS on OpenSolaris or Windows with NTFS.<br>
<br>
-H</htmltext>
<tokenext>Thecus are selling NAS that use Linux software RAID , and it bloddy well suck !
Sure it 's fast , but it corrupts the data , I 've lost 1TB data , and a friend of mine who bought one also , recently lost about the same .
The forums are full of people who lost data , so now Thecus include a disclaimer in their firmware that they are not accountable for any dataloss , how secure does that make you feel ?
It can be that Thecus fucked up the embedded Linux they run on it ( n5200 ) , but my next NAS is going to be a server pc with internal disks , running ZFS and OpenSolaris or Windows with NTFS .
-H</tokentext>
<sentencetext>Thecus are selling NAS that use Linux software RAID, and it bloddy well suck!
Sure it's fast, but it corrupts the data, I've lost 1TB data, and a friend of mine who bought one also, recently lost about the same.
The forums are full of people who lost data, so now Thecus include a disclaimer in their firmware that they are not accountable for any dataloss, how secure does that make you feel?
It can be that Thecus fucked up the embedded Linux they run on it(n5200), but my next NAS is going to be a server pc with internal disks, running ZFS and OpenSolaris or Windows with NTFS.
-H</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683369</id>
	<title>Re:Redundant Array of INEXPENSIVE Disks</title>
	<author>Arrowmaster</author>
	<datestamp>1247482740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>After I had yet another Western Digital hard drive die, making an 8/11 or 72% failure rate over 5 years, I just ordered parts for a new computer, including a hardware RAID, from Newegg yesterday.<br>
<br>
8x <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16822152119" title="newegg.com" rel="nofollow">SAMSUNG F1 RAID Class HE103UJ</a> [newegg.com] <br>
1x <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16816116043" title="newegg.com" rel="nofollow">3ware 9650SE-8LPML</a> [newegg.com] <br>
1x <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16816116087" title="newegg.com" rel="nofollow">3ware BBU-MODULE-04 Battery Backup Unit</a> [newegg.com] <br>
<br>
For $1,829.90 that makes a 7-drive hardware RAID 6 array with 1 hot spare, and it seems I went with much higher quality parts than they did. And I included the price of the controller...<br>
<br>
After my horrible experiences with consumer Western Digital drives (6x 250GB PATA and 2x 500GB SATA dead in the last 5 years), I wasn't about to touch these new consumer 2TB "Green" drives or the cursed Seagate 1.5TB drives, so I went with the more expensive HE103UJs. I hope they are worth it, since this will be my first experience with RAID. In the past I just used everything as separate drives, since they weren't purchased all at once, and I've paid greatly for that mistake.<br>
<br>
It's not an ULTRA CHEAP RAID, but I think it should be a fairly high-quality one at least.</htmltext>
<tokenext>After I had yet another Western Digital harddrive die making an 8/11 or 72 \ % failure rate over 5 years , I just ordered parts for a new computer including a hardware RAID from Newegg yesterday .
8x SAMSUNG F1 RAID Class HE103UJ [ newegg.com ] 1x 3ware 9650SE-8LPML [ newegg.com ] 1x 3ware BBU-MODULE-04 Battery Backup Unit [ newegg.com ] For $ 1,829.90 to make a 7 drive hardware RAID 6 array with 1 hotspare , but it seems I went with a lot higher quality parts than they did .
And I included the price of the controller.. . After my horrible experiences with consumer Western Digital drives ( 6x 250GB PATA and 2x 500GB SATA dead in the last 5 years ) , I was n't about to touch these new consumer 2TB " Green " drives or the cursed Seagate 1.5TB drives so I went with the more expensive HE103UJ 's .
I hope they are worth it since this will be my first experience with a RAID .
In the past I just used everything as separate drives since they were n't purchased all at once and I 've paid greatly for that mistake .
It 's not an ULTRA CHEAP RAID but I think it should be a fairly high quality one at least .</tokentext>
<sentencetext>After I had yet another Western Digital harddrive die making an 8/11 or 72\% failure rate over 5 years, I just ordered parts for a new computer including a hardware RAID from Newegg yesterday.
8x SAMSUNG F1 RAID Class HE103UJ [newegg.com] 
1x 3ware 9650SE-8LPML [newegg.com] 
1x 3ware BBU-MODULE-04 Battery Backup Unit [newegg.com] 

For $1,829.90 to make a 7 drive hardware RAID 6 array with 1 hotspare, but it seems I went with a lot higher quality parts than they did.
And I included the price of the controller...

After my horrible experiences with consumer Western Digital drives (6x 250GB PATA and 2x 500GB SATA dead in the last 5 years), I wasn't about to touch these new consumer 2TB "Green" drives or the cursed Seagate 1.5TB drives so I went with the more expensive HE103UJ's.
I hope they are worth it since this will be my first experience with a RAID.
In the past I just used everything as separate drives since they weren't purchased all at once and I've paid greatly for that mistake.
It's not an ULTRA CHEAP RAID but I think it should be a fairly high quality one at least.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875</parent>
</comment>
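The capacity math behind the parent's build (8 drives, 1 held back as a hot spare, dual parity across the remaining 7) is worth making explicit. A small sketch of the arithmetic, nothing controller-specific:

```python
# Usable capacity of a RAID 6 array: two drives' worth of space goes
# to dual parity, and any hot spares sit outside the array entirely.

def raid6_usable_tb(total_drives: int, drive_tb: float, hot_spares: int = 0) -> float:
    active = total_drives - hot_spares
    if active < 4:
        raise ValueError("RAID 6 needs at least 4 active drives")
    return (active - 2) * drive_tb  # dual parity consumes two drives

# The parent's build: 8x 1TB, one kept as a hot spare.
print(raid6_usable_tb(8, 1.0, hot_spares=1))  # 5.0 TB usable from 8 TB raw
```

By comparison, the article's 12-drive setup in RAID 6 with no spare would yield 10 TB usable, which is where the headline figure comes from.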
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919</id>
	<title>Misleading headline</title>
	<author>Anonymous</author>
	<datestamp>1247515800000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>This headline is very misleading.

Sure you can buy 12x1TB drives for just under a grand, but you won't have anything to connect them to, as the controller itself is another $1100.

Another eye-catching headline to get click-throughs, and that's just wrong. Sad.</htmltext>
<tokenext>This headline is very misleading .
Sure you can buy 12x1TB drives for just under a grand , but you wo n't have anything to connect them to , as the controller itself is another $ 1100 .
Another eye-catching headline to get click through 's , that ' just wrong .
Sad .</tokentext>
<sentencetext>This headline is very misleading.
Sure you can buy 12x1TB drives for just under a grand, but you won't have anything to connect them to, as the controller itself is another $1100.
Another eye-catching headline to get click-throughs, and that's just wrong.
Sad.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684699</id>
	<title>Re:What about the electricity?</title>
	<author>Anonymous</author>
	<datestamp>1247490840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Simple, don't live in Kalifornia.  I live in Ohio and only pay 7.7 cents a kWh.</p><p>My Home Server (Linux of course) has 4 SATA 1TB Hard Drives using RAID-1.  2TB is plenty of storage for me for all my files.  I have 2 external 1TB and 2 external 500GB drives for backup.  I alternate between backup sets (1TB &amp; .5TB) and store one set in a safety deposit box at the local bank.</p></htmltext>
<tokenext>Simple , do n't live in Kalifornia .
I live in Ohio and only pay 7.7 cents a kwh.My Home Server ( Linux of course ) has 4 SATA 1TB Hard Drives using RAID-1 .
2TB is plenty of storage for me for all my files .
I have 2 external 1TB and 2 External 500GB drives for backup .
I alternate between backup sets ( 1TB &amp; .5TB ) and store one set in a safety deposit box at the local bank .</tokentext>
<sentencetext>Simple, don't live in Kalifornia.
I live in Ohio and only pay 7.7 cents a kWh. My Home Server (Linux of course) has 4 SATA 1TB Hard Drives using RAID-1.
2TB is plenty of storage for me for all my files.
I have 2 external 1TB and 2 External 500GB drives for backup.
I alternate between backup sets (1TB &amp; .5TB) and store one set in a safety deposit box at the local bank.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</parent>
</comment>
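The commenter's alternating two-set rotation (one set at home, one in the bank) can be sketched as a simple parity rule on the week number. The set names are illustrative, not part of the original post:

```python
# Sketch of an alternating two-set backup rotation: even weeks use one
# set, odd weeks the other, so the off-duty set is always offsite.
import datetime

def backup_set_for(week: int) -> str:
    return "set-A (1TB + .5TB)" if week % 2 == 0 else "set-B (1TB + .5TB)"

week = datetime.date.today().isocalendar()[1]
print(f"week {week}: use {backup_set_for(week)}; the other set stays at the bank")
```

The point of the alternation is that a fire or theft during a backup run can destroy at most one set.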
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681977</id>
	<title>Re:How does the home user back this up?</title>
	<author>ocularDeathRay</author>
	<datestamp>1247476860000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Well, I suppose you could build two of them. I still wouldn't trust important data to that setup... but I don't know of any cheaper setup in the long run if you just want to make one copy of everything. What I was just thinking is, for a home user, how would you ever collect that much data worth saving... then I remembered that my shitty Verizon DSL is the problem (the only real connection where I live). I suppose if I had a fast connection I could collect that much porn or something. Seriously though, it seems like for most of the "home users" I know that have that much data, it's just a collection of free (maybe illegal, but free) downloaded crap. I think to a certain extent the original source is your backup. For example, if I download every ep of STtNG from BT, I am not going to bother backing that up at all, because I assume I will probably just be able to download it again, and the quality will probably be better when I do. Most users really don't have very much truly irreplaceable data. A few gigs of pics maybe, some digital media you actually purchased, a collection of resumes and letters. I have been using computers since I was a kid and I only have maybe 2 or 3 gigs of data I believe is actually important, and that is really a stretch. So this article is stupid; it's not a solution for enterprise stuff, and very few "home users" really need that kind of storage.</htmltext>
<tokenext>well I suppose you could build two of them .
I still would n't trust important data to that setup.... but I do n't know of any cheaper setup in the long run if you just want to make one copy of everything .
What I was just thinking is for a home user , how would you ever collect that much data worth saving... then I remembered that my shitty verizon DSL is the problem ( only real connection where I live ) .
I suppose if I had a fast connection I could collect that much porn or something .
seriously though , it seems like most of the " home users " that I know that have that much data , its just a collection of free ( maybe illegal , but free ) downloaded crap .
I think to a certain extent the original source is your backup .
For example , if I download every ep of STtNG from BT , I am not going to bother backing that up at all , because I assume I will probably just be able to download it again , and the quality will probably be better when I do .
Most users really do n't have very much truely irreplaceable data .
A few gigs of pics maybe , some digital media you actually purchased , a collection of resumes and letters .
I have been using computers since I was a kid and I only have maybe 2 or 3 gigs of data I believe is actually important , and that is really a stretch .
So this article is stupid , its not a solution for enterprise stuff , and very few " home users " really need that kind of storage .</tokentext>
<sentencetext>well I suppose you could build two of them.
I still wouldn't trust important data to that setup.... but I don't know of any cheaper setup in the long run if you just want to make one copy of everything.
What I was just thinking is for a home user, how would you ever collect that much data worth saving... then I remembered that my shitty verizon DSL is the problem (only real connection where I live).
I suppose if I had a fast connection I could collect that much porn or something.
Seriously though, it seems like for most of the "home users" I know that have that much data, it's just a collection of free (maybe illegal, but free) downloaded crap.
I think to a certain extent the original source is your backup.
For example, if I download every ep of STtNG from BT, I am not going to bother backing that up at all, because I assume I will probably just be able to download it again, and the quality will probably be better when I do.
Most users really don't have very much truly irreplaceable data.
A few gigs of pics maybe, some digital media you actually purchased, a collection of resumes and letters.
I have been using computers since I was a kid and I only have maybe 2 or 3 gigs of data I believe is actually important, and that is really a stretch.
So this article is stupid; it's not a solution for enterprise stuff, and very few "home users" really need that kind of storage.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681121</id>
	<title>Mod parent up</title>
	<author>Anonymous</author>
	<datestamp>1247516640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>+5 Thorough</p></htmltext>
<tokenext>+ 5 Thorough</tokentext>
<sentencetext>+5 Thorough</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681447</id>
	<title>Re:No controller? No failover? No interconnect?</title>
	<author>Drawsalot</author>
	<datestamp>1247517960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Agreed! Powering and cooling the five internal drives in my tower (1x system, 4x RAID) was/is a struggle, let alone twelve TB drives in a conventional tower setup. I would rather opt for a couple of external mini-towers with controllers. Even then, the cooling on these units is typically not up to par. I'm currently reviewing a 4-bay NAS cube that has nice features, but with a full load of drives it struggles: temps went from 75 with one drive to 120+ with all four. Just because it can be done doesn't mean it should be.</htmltext>
<tokenext>Agreed !
Powering and cooling the five internal drives in my tower ( 1x system- 4x raid ) was/is a struggle , let alone twelve TB drives in a conventional tower setup .
I would rather opt for a couple external mini-towers with controllers .
Even then the cooling on these units is typically not up to par .
I 'm currently reviewing a 4 bay NAS cube that has nice features , but with a full load of drives it struggles-- temps went from 75 with one drive to 120 + with all four .
Just because it can be done does n't mean it should be .</tokentext>
<sentencetext>Agreed!
Powering and cooling the five internal drives in my tower (1x system- 4x raid) was/is a struggle, let alone twelve TB drives in a conventional tower setup.
I would rather opt for a couple external mini-towers with controllers.
Even then the cooling on these units is typically not up to par.
I'm currently reviewing a 4 bay NAS cube that has nice features, but with a full load of drives it struggles-- temps went from 75 with one drive to 120+ with all four.
Just because it can be done doesn't mean it should be.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680939</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682543</id>
	<title>Re:Why This Article Is Stupid</title>
	<author>Anonymous</author>
	<datestamp>1247479260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>12x 250GB Seagate plus Linux Software RAID-5 here. Built that in late 2006 for about 800EUR. Still working fine.</p></htmltext>
<tokenext>12x 250GB Seagate plus Linux Software RAID-5 here .
Built that in late 2006 for about 800EUR .
Still working fine .</tokentext>
<sentencetext>12x 250GB Seagate plus Linux Software RAID-5 here.
Built that in late 2006 for about 800EUR.
Still working fine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313</parent>
</comment>
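The single-parity scheme behind a software RAID-5 setup like the one above is plain XOR: the parity block is the XOR of the data blocks, so any one missing block is the XOR of everything that survives. A toy demonstration with byte strings standing in for disk stripes (not how the kernel's md driver is actually structured, just the underlying math):

```python
# Toy RAID-5 parity demo: XOR the data blocks to get parity, then
# rebuild a "failed" block from the survivors plus parity.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"disk0", b"disk1", b"disk2"]
parity = xor_blocks(data)

# Drive 1 dies; its block is the XOR of the remaining data and parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)  # b'disk1'
```

This is also why RAID 5 survives exactly one failure: with two blocks missing, the single XOR equation no longer determines either of them.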
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682115</id>
	<title>Possible to get that cheap with a SATA expander</title>
	<author>Anonymous</author>
	<datestamp>1247477400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You can get two 6-disk SAS/SATA expanders for under $170 each. They require 4 SATA connections in total, so a cheap SATA controller or an onboard one will be adequate. Then use the software RAID of a modern Linux kernel, or stuff it with RAM (&gt;8GB) and run OpenSolaris with a ZFS raidz filesystem. Bonus extras are hot-swappable trays and hardware monitoring.</p></htmltext>
<tokenext>You can get two 6-disk SAS/SATA expanders under $ 170 each .
They require 4 SATA connection in total so a cheap SATA controller or an onboard will be adequate .
Then use the software raid of a modern linux kernel , or stuff it with RAM ( &gt; 8GB ) and put opensolaris and a ZFS raidz fs .
Bonus extra are hot swappable trays and hardware monitoring .</tokentext>
<sentencetext>You can get two 6-disk SAS/SATA expanders for under $170 each.
They require 4 SATA connections in total, so a cheap SATA controller or an onboard one will be adequate.
Then use the software RAID of a modern Linux kernel, or stuff it with RAM (&gt;8GB) and run OpenSolaris with a ZFS raidz filesystem.
Bonus extras are hot-swappable trays and hardware monitoring.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682801</id>
	<title>Re:So what's the MTBF on this array?</title>
	<author>slaker</author>
	<datestamp>1247480340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In my experience I see considerably lower failure rates from Samsung hard disks than any other vendor; around .5% (half of one percent), compared to ~2% to 3% for Hitachi and Seagate units in the three year lifespan of the drives. My sample size is only about 2000 drives total in their current warranty period, but for as long as I've been tracking hard disk reliability over my sample of client systems (roughly 10 years), Samsung has consistently been better than other brands.</p><p>My highest rates of failure in the warranty period were with IBM 60GXPs (18%) and original 36GB Western Digital Raptors (33%, high enough that I pulled them all from service after 18 months).</p><p>Anyway, the last time Samsung was making consistently bad disks was probably around 1998, when its drives were typically ~5 - 10GB. Nowadays they're very conservative in almost everything they make, but usually have an excellent mix of performance, thermal and auditory characteristics when compared to other drives.</p><p>My home storage setup uses four 16-port 3ware controllers (on four different servers) consisting of 14 1TB Samsung F1 + 1 hot spare in a RAID6 configuration (12TB per server). I use rsync to duplicate the data between each pair of servers.</p><p>Also, yes, that configuration was horribly expensive, about $3500 per server.</p><p>If my contracting work ever comes back to what it was 18 months ago, I'll probably add an LTO4 changer to the mix, which will be another $3500.</p></htmltext>
<tokenext>In my experience I see considerably lower failure rates from Samsung hard disks than any other vendor ; around .5 \ % ( half of one percent ) , compared to ~ 2 \ % to 3 \ % for Hitachi and Seagate units in the three year lifespan of the drives .
My sample size is only about 2000 drives total in their current warranty period , but for as long as I 've been tracking hard disk reliability over my sample of client systems ( roughly 10 years ) , Samsung has consistently been better than other brands.My highest rates of failure in the warranty period were with IBM 60GXPs ( 18 \ % ) and original 36GB Western Digital Raptors ( 33 \ % , high enough that I pulled them all from service after 18 months ) .Anyway , the last time Samsung was making consistently bad disks was probably around 1998 , when its drives were typically ~ 5 - 10GB .
Nowadays they 're very conservative in almost everything they make , but usually have an excellent mix of performance , thermal and auditory characteristics when compared to other drives.My home storage setup uses four 16-port 3ware controllers ( on four different servers ) consisting of 14 1TB Samsung F1 + 1 hot spare in a RAID6 configuration ( 12TB per server ) .
I use rsync to duplicate the data between each pair of servers.Also , yes , that configuration was horribly expensive , about $ 3500 per server.If my contracting work ever comes back to what it was 18 months ago , I 'll probably add an LTO4 changer to the mix , which will be another $ 3500 .</tokentext>
<sentencetext>In my experience I see considerably lower failure rates from Samsung hard disks than any other vendor; around .5% (half of one percent), compared to ~2% to 3% for Hitachi and Seagate units in the three year lifespan of the drives.
My sample size is only about 2000 drives total in their current warranty period, but for as long as I've been tracking hard disk reliability over my sample of client systems (roughly 10 years), Samsung has consistently been better than other brands.
My highest rates of failure in the warranty period were with IBM 60GXPs (18%) and original 36GB Western Digital Raptors (33%, high enough that I pulled them all from service after 18 months).
Anyway, the last time Samsung was making consistently bad disks was probably around 1998, when its drives were typically ~5 - 10GB.
Nowadays they're very conservative in almost everything they make, but usually have an excellent mix of performance, thermal and auditory characteristics when compared to other drives.
My home storage setup uses four 16-port 3ware controllers (on four different servers) consisting of 14 1TB Samsung F1 + 1 hot spare in a RAID6 configuration (12TB per server).
I use rsync to duplicate the data between each pair of servers.
Also, yes, that configuration was horribly expensive, about $3500 per server.
If my contracting work ever comes back to what it was 18 months ago, I'll probably add an LTO4 changer to the mix, which will be another $3500.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681505</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685407</id>
	<title>Re:What about the electricity?</title>
	<author>Anonymous</author>
	<datestamp>1247497500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Michigan here, we pay like 7-10 cents per kWh - hrm, didn't California deregulate the electricity grid? Interesting. Of course, they also had the fallout, like all the blackouts from the Enron shenanigans. How's that working out for you?</p></htmltext>
<tokenext>Michigan here , we pay like 7-10 cents per KW - hrm , did n't California deregulate the electricity grid ?
Interesting. Of course , they also had the fallout like all the blackouts from the Enron shenanigans as well .
How 's that working out for you ?</tokentext>
<sentencetext>Michigan here, we pay like 7-10 cents per KW - hrm, didn't California deregulate the electricity grid?
Interesting. Of course, they also had the fallout like all the blackouts from the Enron shenanigans as well.
How's that working out for you?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682489</id>
	<title>Re:We do this now</title>
	<author>Anonymous</author>
	<datestamp>1247479020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>With that much data are you not concerned with filesystem corruption in the event of a hardware/power failure? How does the software raid performance stack up against a dedicated controller card?  I am not trolling, I am interested in doing something similar, these are just some of my concerns. Thanks for your insight.</htmltext>
<tokenext>With that much data are you not concerned with filesystem corruption in the event of a hardware/power failure ?
How does the software raid performance stack up against a dedicated controller card ?
I am not trolling , I am interested in doing something similar , these are just some of my concerns .
Thanks for your insight .</tokentext>
<sentencetext>With that much data are you not concerned with filesystem corruption in the event of a hardware/power failure?
How does the software raid performance stack up against a dedicated controller card?
I am not trolling, I am interested in doing something similar, these are just some of my concerns.
Thanks for your insight.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28687825</id>
	<title>Perverts</title>
	<author>Santzes</author>
	<datestamp>1247563860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You'd have to be a real pervert to need 10 TBs of mass storage. That's a lot of porn.</htmltext>
<tokenext>You 'd have to be a real pervert to need 10 TBs of mass storage .
That 's a lot of porn .</tokentext>
<sentencetext>You'd have to be a real pervert to need 10 TBs of mass storage.
That's a lot of porn.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681703</id>
	<title>Re:*gag*</title>
	<author>Anonymous</author>
	<datestamp>1247475660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not to mention that 'Areca' sounds like a failed erection..</p></htmltext>
<tokenext>Not to mention that 'Areca ' sounds like a failed erection. .</tokentext>
<sentencetext>Not to mention that 'Areca' sounds like a failed erection..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692729
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28687229
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683225
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685137
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681925
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688965
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683535
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681505
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684529
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683431
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683369
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682905
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682165
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28693403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682155
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684257
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681485
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681121
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681577
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683199
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28686821
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684823
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28757875
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680939
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681447
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681905
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683881
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688623
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683021
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28689439
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683197
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684877
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684663
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688841
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685001
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681571
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685147
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681287
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28693363
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682765
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681303
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681809
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681839
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681505
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682801
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685479
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684697
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682483
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685115
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681843
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683115
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681743
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28704673
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685407
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681977
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683527
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682203
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692027
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685267
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681227
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681921
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681463
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685687
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681463
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692879
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683319
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682489
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681703
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684699
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_1734220_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683469
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681393
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681839
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681905
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685407
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684699
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681925
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682165
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681015
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681257
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681113
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681141
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681977
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681843
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685137
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681287
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681485
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28693403
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683319
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28704673
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681875
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683369
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683225
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684823
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688965
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684877
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684697
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685479
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681463
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685687
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692879
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685341
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680841
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682155
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685115
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28757875
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28687229
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28689439
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681577
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682905
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681227
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681921
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682483
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681743
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692729
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681313
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684663
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688841
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682543
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28693363
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685147
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685001
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681303
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681121
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28686821
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680967
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680919
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681151
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683431
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682203
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28692027
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683199
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683469
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681571
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681903
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681505
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684529
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682801
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681347
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680955
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681809
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682489
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28682765
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28684257
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683197
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680871
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683527
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683115
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683881
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28686689
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680939
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681447
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_1734220.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28680909
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28681703
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28685267
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28688623
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683021
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_1734220.28683535
</commentlist>
</conversation>
