<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_22_2134225</id>
	<title>Best eSATA JBOD?</title>
	<author>timothy</author>
	<datestamp>1245663840000</datestamp>
<htmltext><a href="mailto:redlandmover@gmail.com" rel="nofollow">redlandmover</a> writes <i>"I already have an HP Media Server (upgraded processor, and memory) that has already been upgraded internally to 3.5TB. I'm sure everyone already has their favorite backup solution (RAID, <a href="http://en.wikipedia.org/wiki/Windows_Home_Server">WHS</a>, a billion external hard drives, etc). My question is: what is the best JBOD (Just a Bunch of Drives), eSATA-connected, external hard drive enclosure? (Preferably, at least 4 drives.)"</i></htmltext>
<tokentext>redlandmover writes " I already have an HP Media Server ( upgraded processor , and memory ) that has already been upgraded internally to 3.5TB .
I 'm sure everyone already has their favorite backup solution ( RAID , WHS , a billion external hard drives , etc ) .
My question is : what is the best JBOD ( Just a Bunch of Drives ) , eSATA-connected , external hard drive enclosure ?
( Preferably , at least 4 drives .
) "</tokentext>
<sentencetext>redlandmover writes "I already have an HP Media Server (upgraded processor, and memory) that has already been upgraded internally to 3.5TB.
I'm sure everyone already has their favorite backup solution (RAID, WHS, a billion external hard drives, etc).
My question is: what is the best JBOD (Just a Bunch of Drives), eSATA-connected, external hard drive enclosure?
(Preferably, at least 4 drives.
)"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895</id>
	<title>If you need more than ten disks, go for cheap SAS</title>
	<author>Robotbeat</author>
	<datestamp>1245668700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>You can get an external (4-port, but acts like one big 1.2 GiByte/s pipe) SAS RAID card for less than $500 that will allow you to make multiple RAID sets of up to 32-disks in a set using true hardware RAID 5,6,10, etc. You can even get a battery backup unit for the RAID card cache for $100 (priceless on critical DB systems).</p><p>An external SAS card allows you to connect over a hundred drives through one connection using SAS expanders (some cards support up to 256 devices). Some external SAS RAID/JBOD cards have two SFF-8088 connections, for eight SAS lanes total. That's 2.4 Gigabytes/sec raw. At that rate, it's your PCI-e bus that's usually the bottleneck.</p><p>A lot of SAS expanders are expensive, but Chenbro has some ones for $300 that spread one x4 lane SAS cable into 24 or 32 cables, plus they can be daisy-chained for more storage. Then, buy a nice 24-slot Supermicro 4U chassis with dual-redundant power. That's a little less than $1000. All you need is the Chenbro expander in the chassis, no need for a motherboard.</p><p>If you're really cheap, you can use a cheaper $150 external SAS JBOD-only card, but hardware raid really is a must if you have a lot of storage. Plus, a hardware raid can use write-back cache, since it has effectively non-volatile RAM using the battery backup unit. And no, a UPS is NOT a replacement for NVRAM... Has your system ever crashed for any reason or hung for any reason? I've never had a RAID card hang or crash.</p><p>So, basically, besides the external SAS card, you have:</p><p>24-slot chassis with redundant power: $1000<br>chenbro SAS expander: $300<br>cables: depends</p><p>That's about $60/slot, plus you have redundant power (and an upgrade route to dual-redundant controllers). You can scale this to hundreds of terabytes, too. Over a petabyte if you have multiple controllers (with raid array rebuilding on one card not affecting rebuilding on another).</p></htmltext>
<tokentext>You can get an external ( 4-port , but acts like one big 1.2 GiByte/s pipe ) SAS RAID card for less than $ 500 that will allow you to make multiple RAID sets of up to 32-disks in a set using true hardware RAID 5,6,10 , etc .
You can even get a battery backup unit for the RAID card cache for $ 100 ( priceless on critical DB systems ) .An external SAS card allows you to connect over a hundred drives through one connection using SAS expanders ( some cards support up to 256 devices ) .
Some external SAS RAID/JBOD cards have two SFF-8088 connections , for eight SAS lanes total .
That 's 2.4 Gigabytes/sec raw .
At that rate , it 's your PCI-e bus that 's usually the bottleneck.A lot of SAS expanders are expensive , but Chenbro has some ones for $ 300 that spread one x4 lane SAS cable into 24 or 32 cables , plus they can be daisy-chained for more storage .
Then , buy a nice 24-slot Supermicro 4U chassis with dual-redundant power .
That 's a little less than $ 1000 .
All you need is the Chenbro expander in the chassis , no need for a motherboard.If you 're really cheap , you can use a cheaper $ 150 external SAS JBOD-only card , but hardware raid really is a must if you have a lot of storage .
Plus , a hardware raid can use write-back cache , since it has effectively non-volatile RAM using the battery backup unit .
And no , a UPS is NOT a replacement for NVRAM... Has your system ever crashed for any reason or hung for any reason ?
I 've never had a RAID card hang or crash.So , basically , besides the external SAS card , you have : 24-slot chassis with redundant power : $ 1000chenbro SAS expander : $ 300cables : dependsThat 's about $ 60/slot , plus you have redundant power ( and an upgrade route to dual-redundant controllers ) .
You can scale this to hundreds of terabytes , too .
Over a petabyte if you have multiple controllers ( with raid array rebuilding on one card not affecting rebuilding on another ) .</tokentext>
<sentencetext>You can get an external (4-port, but acts like one big 1.2 GiByte/s pipe) SAS RAID card for less than $500 that will allow you to make multiple RAID sets of up to 32-disks in a set using true hardware RAID 5,6,10, etc.
You can even get a battery backup unit for the RAID card cache for $100 (priceless on critical DB systems).An external SAS card allows you to connect over a hundred drives through one connection using SAS expanders (some cards support up to 256 devices).
Some external SAS RAID/JBOD cards have two SFF-8088 connections, for eight SAS lanes total.
That's 2.4 Gigabytes/sec raw.
At that rate, it's your PCI-e bus that's usually the bottleneck.A lot of SAS expanders are expensive, but Chenbro has some ones for $300 that spread one x4 lane SAS cable into 24 or 32 cables, plus they can be daisy-chained for more storage.
Then, buy a nice 24-slot Supermicro 4U chassis with dual-redundant power.
That's a little less than $1000.
All you need is the Chenbro expander in the chassis, no need for a motherboard.If you're really cheap, you can use a cheaper $150 external SAS JBOD-only card, but hardware raid really is a must if you have a lot of storage.
Plus, a hardware raid can use write-back cache, since it has effectively non-volatile RAM using the battery backup unit.
And no, a UPS is NOT a replacement for NVRAM... Has your system ever crashed for any reason or hung for any reason?
I've never had a RAID card hang or crash.So, basically, besides the external SAS card, you have:24-slot chassis with redundant power: $1000chenbro SAS expander: $300cables: dependsThat's about $60/slot, plus you have redundant power (and an upgrade route to dual-redundant controllers).
You can scale this to hundreds of terabytes, too.
Over a petabyte if you have multiple controllers (with raid array rebuilding on one card not affecting rebuilding on another).</sentencetext>
</comment>
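<!--
The throughput and pricing figures in the comment above can be sanity-checked with a little arithmetic. A minimal sketch, assuming first-generation SAS at 3 Gbit/s per lane with 8b/10b line encoding, and using the chassis and expander prices quoted:

```python
# Check the "2.4 Gigabytes/sec raw" and "about $60/slot" figures.
lanes = 8                       # two SFF-8088 connectors x 4 SAS lanes each
line_rate_bps = 3_000_000_000   # assumed SAS 1.0 signaling rate per lane
# 8b/10b encoding carries 8 payload bits per 10 line bits; divide by 8 for bytes.
payload_bytes_per_sec = lanes * line_rate_bps * 8 // 10 // 8

chassis_usd, expander_usd, slots = 1000, 300, 24
cost_per_slot = (chassis_usd + expander_usd) / slots

print(payload_bytes_per_sec)    # 2400000000, i.e. 2.4 GB/s raw
print(round(cost_per_slot, 2))  # 54.17; "about $60/slot" once cables are included
```
-->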
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433527</id>
	<title>Re:Wut</title>
	<author>Anonymous</author>
	<datestamp>1245684480000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>The PC Guide people are morons.  Maybe that's what "JBOD" means in the Windows world, but if I hooked up a bunch of drives and told the controller I wanted JBOD, and what I got was a single volume spanned across the drives, I'd probably toss the thing out for being defective.  Or at least criminally poorly documented.</p><p>JBOD means "just a bunch of disks."  Emphasis on <i>bunch</i>.  It means "don't RAID this, don't span it, just give me a bunch of goddamn block devices."  Typically this is because you want to do something with the devices at a higher level than the disk controller.  (Like you're going to do a software RAID, or you're using some application that spreads its files across multiple disks and does redundancy itself, like some enterprise storage management products do.)</p><p>Spanning is a whole different story.  I don't really like spanning (why would you span and get all the downsides of RAID 0 stripes, without the I/O?), but I can understand some situations where it might be appropriate.  However, it's something that you build at the filesystem/OS level <i>across</i> a JBOD arrangement that's presented by a disk controller.</p></htmltext>
<tokentext>The PC Guide people are morons .
Maybe that 's what " JBOD " means in the Windows world , but if I hooked up a bunch of drives and told the controller I wanted JBOD , and what I got was a single volume spanned across the drives , I 'd probably toss the thing out for being defective .
Or at least criminally poorly documented.JBOD means " just a bunch of disks .
" Emphasis on bunch .
It means " do n't RAID this , do n't span it , just give me a bunch of goddamn block devices .
" Typically this is because you want to do something with the devices at a higher level than the disk controller .
( Like you 're going to do a software RAID , or you 're using some application that spreads its files across multiple disks and does redundancy itself , like some enterprise storage management products do .
) Spanning is a whole different story .
I do n't really like spanning ( why would you span and get all the downsides of RAID 0 stripes , without the I/O ?
) , but I can understand some situations where it might be appropriate .
However , it 's something that you build at the filesystem/OS level across a JBOD arrangement that 's presented by a disk controller .</tokentext>
<sentencetext>The PC Guide people are morons.
Maybe that's what "JBOD" means in the Windows world, but if I hooked up a bunch of drives and told the controller I wanted JBOD, and what I got was a single volume spanned across the drives, I'd probably toss the thing out for being defective.
Or at least criminally poorly documented.JBOD means "just a bunch of disks.
"  Emphasis on bunch.
It means "don't RAID this, don't span it, just give me a bunch of goddamn block devices.
"  Typically this is because you want to do something with the devices at a higher level than the disk controller.
(Like you're going to do a software RAID, or you're using some application that spreads its files across multiple disks and does redundancy itself, like some enterprise storage management products do.
)Spanning is a whole different story.
I don't really like spanning (why would you span and get all the downsides of RAID 0 stripes, without the I/O?
), but I can understand some situations where it might be appropriate.
However, it's something that you build at the filesystem/OS level across a JBOD arrangement that's presented by a disk controller.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431847</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429871</id>
	<title>Re:I stopped reading the summary</title>
	<author>Anonymous</author>
	<datestamp>1245668580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.</p></div></blockquote><p>

Except that your time to bring the backup/RAID1 mirror into sync with the primary RAID1 disk will be far longer than using something like rsync. Your fileserver will be slower because I/O will be flooded with the RAID sync process instead of the much shorter rsync.</p>
	</htmltext>
<tokentext>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution .
Except that your time to bring the backup/RAID1 mirror into sync with the primary RAID1 disk will be far longer than using something like rsync .
Your fileserver will be slower because I/O will be flooded with the RAID sync process instead of the much shorter rsync .</tokentext>
<sentencetext>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.
Except that your time to bring the backup/RAID1 mirror into sync with the primary RAID1 disk will be far longer than using something like rsync.
Your fileserver will be slower because I/O will be flooded with the RAID sync process instead of the much shorter rsync.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
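<!--
The rebuild-vs-rsync point above comes down to how much data each approach moves: a RAID1 resync copies every block on the member disk, while rsync copies only files that changed. A rough illustration with hypothetical numbers (1 TB member disk, 5 GB of daily churn):

```python
# Data moved by a full RAID1 mirror resync vs. an incremental rsync pass.
disk_bytes = 1_000_000_000_000   # hypothetical 1 TB mirror member
changed_bytes = 5_000_000_000    # hypothetical 5 GB changed since last backup
ratio = disk_bytes / changed_bytes
print(ratio)  # 200.0: the resync floods I/O with ~200x more data
```
-->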
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28441803</id>
	<title>Norco eSATA crates</title>
	<author>Ktistec Machine</author>
	<datestamp>1245782820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For what it's worth, we've had good luck with the Norco disk crates like this one:</p><p><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16816133023" title="newegg.com">http://www.newegg.com/Product/Product.aspx?Item=N82E16816133023</a> [newegg.com]</p><p>These have 15 hot-swap SATA slots.  Each set of 5 is multiplexed to one eSATA connector on the back of the crate.  The crate comes with a PCI-X 4-port eSATA controller.   We use the crate as Just a Bunch of Disks, but it can be also configured as a RAID array.  At the price (about $800), it's very cheap per slot.  We currently have two of these full of terabyte disks, and an older DS-1200 (12-slot) with a mixture of disks.  They've been very reliable so far.</p></htmltext>
<tokentext>For what it 's worth , we 've had good luck with the Norco disk crates like this one : http : //www.newegg.com/Product/Product.aspx ? Item = N82E16816133023 [ newegg.com ] These have 15 hot-swap SATA slots .
Each set of 5 is multiplexed to one eSATA connector on the back of the crate .
The crate comes with a PCI-X 4-port eSATA controller .
We use the crate as Just a Bunch of Disks , but it can be also configured as a RAID array .
At the price ( about $ 800 ) , it 's very cheap per slot .
We currently have two of these full of terabyte disks , and an older DS-1200 ( 12-slot ) with a mixture of disks .
They 've been very reliable so far .</tokentext>
<sentencetext>For what it's worth, we've had good luck with the Norco disk crates like this one:http://www.newegg.com/Product/Product.aspx?Item=N82E16816133023 [newegg.com]These have 15 hot-swap SATA slots.
Each set of 5 is multiplexed to one eSATA connector on the back of the crate.
The crate comes with a PCI-X 4-port eSATA controller.
We use the crate as Just a Bunch of Disks, but it can be also configured as a RAID array.
At the price (about $800), it's very cheap per slot.
We currently have two of these full of terabyte disks, and an older DS-1200 (12-slot) with a mixture of disks.
They've been very reliable so far.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430727</id>
	<title>Re:I stopped reading the summary</title>
	<author>LoRdTAW</author>
	<datestamp>1245671820000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Best method I have of backing up my data is simple. First equip/upgrade a few existing computers with 1TB disks even if you never plan to fill them up. They can be at your parents house, siblings house or work. Copy your really important data like work, projects, photos, music, video (movies, tv shows and p0rn don't count), basically anything that is irreplaceable. Copy that to a 1TB USB disk and copy all the data to the computers you equipped with the backup drives. Now you have your data spread out all over. You can use rsync over the net or via a USB disk to keep things updated between machines. You can even partition the large 1TB disks and make a separate partition for your data so it cannot be tampered with. If a machine fails then from any of the others you can replicate the data.</p><p>Sounds like a pain in the ass but I keep copies on my brothers PC and my work PC. Its only about 400GB total so its not even half of the 1TB disk which costs about 75 bucks, small price to pay for peace of mind. I have a big software raid 5 array for personal file serving needs but it is in no way shape or form a backup system. I once had my raid 5 go haywire because of some disk controller problems. After a hardware upgrade I almost lost the array but it came back up and had to rebuild itself. Thankfully it didn't send me into a panic because I had my most important and irreplaceable data backed up.</p></htmltext>
<tokentext>Best method I have of backing up my data is simple .
First equip/upgrade a few existing computers with 1TB disks even if you never plan to fill them up .
They can be at your parents house , siblings house or work .
Copy your really important data like work , projects , photos , music , video ( movies , tv shows and p0rn do n't count ) , basically anything that is irreplaceable .
Copy that to a 1TB USB disk and copy all the data to the computers you equipped with the backup drives .
Now you have your data spread out all over .
You can use rsync over the net or via a USB disk to keep things updated between machines .
You can even partition the large 1TB disks and make a separate partition for your data so it can not be tampered with .
If a machine fails then from any of the others you can replicate the data.Sounds like a pain in the ass but I keep copies on my brothers PC and my work PC .
Its only about 400GB total so its not even half of the 1TB disk which costs about 75 bucks , small price to pay for peace of mind .
I have a big software raid 5 array for personal file serving needs but it is in no way shape or form a backup system .
I once had my raid 5 go haywire because of some disk controller problems .
After a hardware upgrade I almost lost the array but it came back up and had to rebuild itself .
Thankfully it did n't send me into a panic because I had my most important and irreplaceable data backed up .</tokentext>
<sentencetext>Best method I have of backing up my data is simple.
First equip/upgrade a few existing computers with 1TB disks even if you never plan to fill them up.
They can be at your parents house, siblings house or work.
Copy your really important data like work, projects, photos, music, video (movies, tv shows and p0rn don't count), basically anything that is irreplaceable.
Copy that to a 1TB USB disk and copy all the data to the computers you equipped with the backup drives.
Now you have your data spread out all over.
You can use rsync over the net or via a USB disk to keep things updated between machines.
You can even partition the large 1TB disks and make a separate partition for your data so it cannot be tampered with.
If a machine fails then from any of the others you can replicate the data.Sounds like a pain in the ass but I keep copies on my brothers PC and my work PC.
Its only about 400GB total so its not even half of the 1TB disk which costs about 75 bucks, small price to pay for peace of mind.
I have a big software raid 5 array for personal file serving needs but it is in no way shape or form a backup system.
I once had my raid 5 go haywire because of some disk controller problems.
After a hardware upgrade I almost lost the array but it came back up and had to rebuild itself.
Thankfully it didn't send me into a panic because I had my most important and irreplaceable data backed up.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430203</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28435513</id>
	<title>Re:Old AT (pre-ATX) case</title>
	<author>cerberusss</author>
	<datestamp>1245699360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><div class="quote"><p>Old AT cases and power supplies should be just about free, [...]  you will have a very reliable, well-cooled, very cheap solution.</p></div><p>Two years of constant running could mean that a standard enclosure, consuming less power, is actually cheaper.</p>
	</htmltext>
<tokentext>Old AT cases and power supplies should be just about free , [ ... ] you will have a very reliable , well-cooled , very cheap solution.Two years of constant running could mean that a standard enclosure , consuming less power , is actually cheaper .</tokentext>
<sentencetext>Old AT cases and power supplies should be just about free, [...]  you will have a very reliable, well-cooled, very cheap solution.Two years of constant running could mean that a standard enclosure, consuming less power, is actually cheaper.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431237</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430927</id>
	<title>Re:I stopped reading the summary</title>
	<author>Fulcrum of Evil</author>
	<datestamp>1245672540000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
<htmltext><div class="quote"><p>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.</p></div><p>Sure, if you're retarded. I was going to say it was ok for home, but no, that's just stupid. Even a batch job that tars a bunch of directories onto a second HD works better (and no additional hardware either).</p>
	</htmltext>
<tokentext>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.Sure , if you 're retarded .
I was going to say it was ok for home , but no , that 's just stupid .
Even a batch job that tars a bunch of directories onto a second HD works better ( and no additional hardware either ) .</tokentext>
<sentencetext>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.Sure, if you're retarded.
I was going to say it was ok for home, but no, that's just stupid.
Even a batch job that tars a bunch of directories onto a second HD works better (and no additional hardware either).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431847</id>
	<title>Re:Wut</title>
	<author>sexconker</author>
	<datestamp>1245675960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Uh, no.<br>JBOD is when you link all disks together.<br>JBOD means "just a bunch of disks" strung together.</p><p><a href="http://www.pcguide.com/ref/hdd/perf/raid/levels/jbod.htm" title="pcguide.com">http://www.pcguide.com/ref/hdd/perf/raid/levels/jbod.htm</a> [pcguide.com]</p></htmltext>
<tokentext>Uh , no.JBOD is when you link all disks together.JBOD means " just a bunch of disks " strung together.http : //www.pcguide.com/ref/hdd/perf/raid/levels/jbod.htm [ pcguide.com ]</tokentext>
<sentencetext>Uh, no.JBOD is when you link all disks together.JBOD means "just a bunch of disks" strung together.http://www.pcguide.com/ref/hdd/perf/raid/levels/jbod.htm [pcguide.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430789</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429689</id>
	<title>Just nitpicking, but...</title>
	<author>Anonymous</author>
	<datestamp>1245667920000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>RAID != backup.  Say it with me.</p></htmltext>
<tokentext>RAID ! = backup .
Say it with me .</tokentext>
<sentencetext>RAID != backup.
Say it with me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</id>
	<title>I stopped reading the summary</title>
	<author>Anonymous</author>
	<datestamp>1245667740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext>after the cretin suggested that RAID was some sort of substitute for a backup.</htmltext>
<tokentext>after the cretin suggested that RAID was some sort of substitute for a backup .</tokentext>
<sentencetext>after the cretin suggested that RAID was some sort of substitute for a backup.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430917</id>
	<title>Re:Why?</title>
	<author>Rockoon</author>
	<datestamp>1245672540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The main reason people choose JBOD is because they have a bunch of differently sized drives, which are not well suited for redundancy or striping.<br>
<br>
In my surviving collection of misc drives, I've got a 40 gig (8 years old), a 200 gig (5 years old), and a 500 gig (9 months old)<br>
<br>
There isnt any concievably usefull redundancy method using these, but I can treat the entire lot as a 740GB backup drive..<br>
<br>
If its for a home media server, backups and redundancy probably isnt a serious issue.. and performance definately isnt.. capacity would be the only real issue..</htmltext>
<tokentext>The main reason people choose JBOD is because they have a bunch of differently sized drives , which are not well suited for redundancy or striping .
In my surviving collection of misc drives , I 've got a 40 gig ( 8 years old ) , a 200 gig ( 5 years old ) , and a 500 gig ( 9 months old ) There isnt any concievably usefull redundancy method using these , but I can treat the entire lot as a 740GB backup drive. . If its for a home media server , backups and redundancy probably isnt a serious issue.. and performance definately isnt.. capacity would be the only real issue. .</tokentext>
<sentencetext>The main reason people choose JBOD is because they have a bunch of differently sized drives, which are not well suited for redundancy or striping.
In my surviving collection of misc drives, I've got a 40 gig (8 years old), a 200 gig (5 years old), and a 500 gig (9 months old)

There isnt any concievably usefull redundancy method using these, but I can treat the entire lot as a 740GB backup drive..

If its for a home media server, backups and redundancy probably isnt a serious issue.. and performance definately isnt.. capacity would be the only real issue..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855</parent>
</comment>
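<!--
The capacity arithmetic behind the comment above: concatenation uses every byte of mismatched drives, while mirroring or parity is limited by the smallest member, which is why no redundancy scheme is attractive for this mix. A sketch using the poster's drive sizes:

```python
# Usable capacity of mismatched drives under concatenation vs. redundancy.
drives_gb = [40, 200, 500]                # the 40 GB, 200 GB, and 500 GB drives
span_gb = sum(drives_gb)                  # concatenation: every byte used
raid1_gb = min(drives_gb)                 # mirroring: limited to smallest drive
raid5_gb = min(drives_gb) * (len(drives_gb) - 1)  # parity: smallest x (n-1)
print(span_gb, raid1_gb, raid5_gb)        # 740 40 80
```
-->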
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429933</id>
	<title>Re:Duct tape</title>
	<author>roc97007</author>
	<datestamp>1245668880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
Popsicle sticks between the drives, for airflow.</p></htmltext>
<tokentext>Popsicle sticks between the drives , for airflow .</tokentext>
<sentencetext>
Popsicle sticks between the drives, for airflow.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429683</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430897</id>
	<title>Re:I stopped reading the summary</title>
	<author>Skuld-Chan</author>
	<datestamp>1245672420000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>You'll be crying when you rebuild that raid and two disks fail at the same time (happened to me). No - raid isn't a backup solution.</p></htmltext>
<tokentext>You 'll be crying when you rebuild that raid and two disks fail at the same time ( happened to me ) .
No - raid is n't a backup solution .</tokentext>
<sentencetext>You'll be crying when you rebuild that raid and two disks fail at the same time (happened to me).
No - raid isn't a backup solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28438349</id>
	<title>Sans Digital FTW</title>
	<author>pak9rabid</author>
	<datestamp>1245769680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I picked up one of <a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16816111057" title="newegg.com">these guys</a> [newegg.com] for my backup purposes.  I filled it with 5 1TB drives and set it up in a Linux software RAID5 config.  It backs up all of my media that resides on an LVM volume.  It's been working out quite nicely so far<nobr> <wbr></nobr>:).  The port multiplier feature is <i>very</i> nice.  I only have to run a single eSATA cable for the 5 disks.</htmltext>
<tokentext>I picked up one of these guys [ newegg.com ] for my backup purposes .
I filled it with 5 1TB drives and set it up in a Linux software RAID5 config .
It backs up all of my media that resides on an LVM volume .
It 's been working out quite nicely so far : ) .
The port multiplier feature is very nice .
I only have to run a single eSATA cable for the 5 disks .</tokentext>
<sentencetext>I picked up one of these guys [newegg.com] for my backup purposes.
I filled it with 5 1TB drives and set it up in a Linux software RAID5 config.
It backs up all of my media that resides on an LVM volume.
It's been working out quite nicely so far :).
The port multiplier feature is very nice.
I only have to run a single eSATA cable for the 5 disks.</sentencetext>
</comment>
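<!--
For the 5 x 1 TB Linux software RAID5 set described above, one drive's worth of capacity goes to distributed parity, so the usable space works out to:

```python
# Usable capacity of an n-drive RAID5 set: (n - 1) x member size.
drives, member_tb = 5, 1
usable_tb = (drives - 1) * member_tb
print(usable_tb)  # 4, i.e. 4 TB usable out of 5 TB raw
```
-->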
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835</id>
	<title>Wut</title>
	<author>Anonymous</author>
	<datestamp>1245668400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why do you need an enclosure that does JBOD?<br>In my opinion you need an enclosure that does 2 things.</p><p>Encloses your drives.<br>Provides power (since current eSata doesn't, LOL).</p><p>Let your system handle the JBOD.  Everything supports JBOD.  Or, you know, just have them as 4 separate drives and be organized, so you can deal with them as raw drives if need be, and so if one goes dead, it'll be a lot easier to get your shit from the others.</p><p>I have yet to see a multi-drive enclosure that DOESN'T force it's shitty controller on you, unfortunately.</p><p>I would get 4 enclosures and 4 drives.<br>Stack them on top of each other.<br>Strap them together with masking tape (less residue than duct tape, provides a good space to write a label, etc.).<br>Split the output of a single 12v AC adapter (make sure it can put out enough amps) to all 4 inputs.<br>Run 4 eSata cables to the back of your PC.</p><p>Success.<br>The only issue is splitting the power lead (not hard, but you will need to find the jacks and you'll have to do it yourself) and running 4 eSata cables.</p><p>Yes, I'd be willing to do that just to get away from the shitty controllers in external enclosures.  Now, if this were SCSI, you could daisy chain the power and the data for the drives.</p><p>If only there was a serial-attached version of SCSI.</p></htmltext>
<tokenext>Why do you need an enclosure that does JBOD ? In my opinion you need an enclosure that does 2 things.Encloses your drives.Provides power ( since current eSata does n't , LOL ) .Let your system handle the JBOD .
Everything supports JBOD .
Or , you know , just have them as 4 separate drives and be organized , so you can deal with them as raw drives if need be , and so if one goes dead , it 'll be a lot easier to get your shit from the others.I have yet to see a multi-drive enclosure that DOES N'T force it 's shitty controller on you , unfortunately.I would get 4 enclosures and 4 drives.Stack them on top of each other.Strap them together with masking tape ( less residue than duct tape , provides a good space to write a label , etc .
) .Split the output of a single 12v AC adapter ( make sure it can put out enough amps ) to all 4 inputs.Run 4 eSata cables to the back of your PC.Success.The only issue is splitting the power lead ( not hard , but you will need to find the jacks and you 'll have to do it yourself ) and running 4 eSata cables.Yes , I 'd be willing to do that just to get away from the shitty controllers in external enclosures .
Now , if this were SCSI , you could daisy chain the power and the data for the drives.If only there was a serial-attached version of SCSI .</tokentext>
<sentencetext>Why do you need an enclosure that does JBOD?
In my opinion you need an enclosure that does 2 things.
Encloses your drives.
Provides power (since current eSata doesn't, LOL).
Let your system handle the JBOD.
Everything supports JBOD.
Or, you know, just have them as 4 separate drives and be organized, so you can deal with them as raw drives if need be, and so if one goes dead, it'll be a lot easier to get your shit from the others.
I have yet to see a multi-drive enclosure that DOESN'T force its shitty controller on you, unfortunately.
I would get 4 enclosures and 4 drives.
Stack them on top of each other.
Strap them together with masking tape (less residue than duct tape, provides a good space to write a label, etc.).
Split the output of a single 12v AC adapter (make sure it can put out enough amps) to all 4 inputs.
Run 4 eSata cables to the back of your PC.
Success.
The only issue is splitting the power lead (not hard, but you will need to find the jacks and you'll have to do it yourself) and running 4 eSata cables.
Yes, I'd be willing to do that just to get away from the shitty controllers in external enclosures.
Now, if this were SCSI, you could daisy chain the power and the data for the drives.
If only there were a serial-attached version of SCSI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429713</id>
	<title>RAID is no backup solution</title>
	<author>ls671</author>
	<datestamp>1245667980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Please note that RAID and such are not "backup solutions"! If your FS gets screwed, you lose info.</p><p>Think of a backup solution as independent from the media where the info is kept. Then you decide if you want to use RAID, tapes, etc.</p><p>My backup solution: incremental backups every half-hour, and a full backup once a month.</p><p>Now for the media I use to store the backups: RAID mirroring for the incrementals, and hard drives put in a safe at the bank, with rotation, for the full backups. (NO RAID used for full backups.)</p></htmltext>
<tokenext>Please note that RAID and such are not " backup solutions " !
If your FS get screwed , you loose info.Think of a backup solution as independent from the media where the info is kept .
Then you decide if you want to use RAID , tapes , etc.My backup solution : incremental backups every half-hour .
And full backup once a month.Now for the media I use to store the backups : RAID mirroring for incremental and hard drives put in a safe at the bank with rotation for full backups .
( NO RAID used for full backups ) .</tokentext>
<sentencetext>Please note that RAID and such are not "backup solutions"!
If your FS gets screwed, you lose info.
Think of a backup solution as independent from the media where the info is kept.
Then you decide if you want to use RAID, tapes, etc.
My backup solution: incremental backups every half-hour.
And a full backup once a month.
Now for the media I use to store the backups: RAID mirroring for the incrementals, and hard drives put in a safe at the bank, with rotation, for the full backups.
(NO RAID used for full backups.)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28438549</id>
	<title>Sans Digital TowerRAID TR8M-B 8 Bay JBOD Enclosure</title>
	<author>Anonymous</author>
	<datestamp>1245770580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My favorite is the <a href="http://www.justechn.com/2009/04/09/review-sans-digital-towerraid-tr8m-b-8-bay-jbod-enclosure" title="justechn.com" rel="nofollow">Sans Digital TowerRAID TR8M-B 8 Bay JBOD Enclosure</a> [justechn.com]. I do not have an HP MediaSmart server (I built my own), so this may prove to be a challenge for you: you have to add an eSATA card before it will work.</p></htmltext>
<tokenext>My favorite is the Sans Digital TowerRAID TR8M-B 8 Bay JBOD Enclosure [ justechn.com ] .
I do not have an HP MediaSmart server , I built my own , so this may prove to be a challenge for you because you have to add an eSATA card before it will work .</tokentext>
<sentencetext>My favorite is the Sans Digital TowerRAID TR8M-B 8 Bay JBOD Enclosure [justechn.com].
I do not have an HP MediaSmart server, I built my own, so this may prove to be a challenge for you because you have to add an eSATA card before it will work.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429699</id>
	<title>Porn</title>
	<author>Anonymous</author>
	<datestamp>1245667920000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>2 replies and this hasn't been tagged with porn yet? I'm disappointed in you slashbots.</p></htmltext>
<tokenext>2 replies and this has n't been tagged with porn yet ?
I 'm disappointed in you slashbots .</tokentext>
<sentencetext>2 replies and this hasn't been tagged with porn yet?
I'm disappointed in you slashbots.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430159</id>
	<title>Re:I stopped reading the summary</title>
	<author>radtea</author>
	<datestamp>1245669780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>after the cretin suggested that RAID was some sort of substitute for a backup.</i></p><p>I realize that English may not be your first language, but can you point out what makes you think that anything in the summary implies RAID is any sort of substitute for a backup?</p><p>He's looking for a system on which to keep a duplicate copy of his primary drives.  RAID gives you relatively cheap mass storage.  Such a duplicate copy is generally known as a "backup".</p><p>RAID can be used as media for backing systems up.  When it is used that way, it is not a substitute for a backup, it IS a backup.</p><p>He's asking if that's a good idea or not.</p><p>It doesn't seem like a question a cretin would ask, to me.</p></htmltext>
<tokenext>after the cretin suggested that RAID was some sort of substitute for a backup.I realize that English may not be your first language , but can you point out what makes you think that anything in summary implies RAID is any sort of substitute for a backup ? He 's looking for a system on which to keep a duplicate copy of his primary drives .
RAID gives you relatively cheap mass storage .
Such a duplicate copy is generally known as a " backup " .RAID can be used as media for backing systems up .
When it is used that way , it is not a substitute for a backup , it IS a backup.He 's asking if that 's a good idea or not.It does n't seem like a question a cretin would ask , to me .</tokentext>
<sentencetext>after the cretin suggested that RAID was some sort of substitute for a backup.
I realize that English may not be your first language, but can you point out what makes you think that anything in the summary implies RAID is any sort of substitute for a backup?
He's looking for a system on which to keep a duplicate copy of his primary drives.
RAID gives you relatively cheap mass storage.
Such a duplicate copy is generally known as a "backup".
RAID can be used as media for backing systems up.
When it is used that way, it is not a substitute for a backup, it IS a backup.
He's asking if that's a good idea or not.
It doesn't seem like a question a cretin would ask, to me.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430921</id>
	<title>Re:If you need more than ten disks, go for cheap S</title>
	<author>pyite</author>
	<datestamp>1245672540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>hardware raid really is a must if you have a lot of storage</i></p><p>No, hardware RAID is a bad idea. You're locked to a proprietary controller and a proprietary on-disk format. ZFS is a much better idea.</p></htmltext>
<tokenext>hardware raid really is a must if you have a lot of storageNo , hardware RAID is a bad idea .
You 're locked to a proprietary controller and a proprietary on-disk format .
ZFS is a much better idea .</tokentext>
<sentencetext>hardware raid really is a must if you have a lot of storage.
No, hardware RAID is a bad idea.
You're locked to a proprietary controller and a proprietary on-disk format.
ZFS is a much better idea.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430203</id>
	<title>Re:I stopped reading the summary</title>
	<author>obarthelemy</author>
	<datestamp>1245669960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>yeah sure.</p><p>Let's say it again: Backups are:<br>- off-site<br>- offline<br>- multiple<br>- tested</p><p>Anything else is just some kind of high-availability solution that does NOT protect against catastrophic failure, fires, viruses...</p></htmltext>
<tokenext>yeah sure.Let 's say it again : Backups are : - off-site- offline- multiple- testedanything else is just some kind of high-availability solution , that does NOT protect against catastrophic failure , fires , viruses.. .</tokentext>
<sentencetext>yeah sure.
Let's say it again: Backups are: - off-site - offline - multiple - tested.
Anything else is just some kind of high-availability solution that does NOT protect against catastrophic failure, fires, viruses...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28458851</id>
	<title>eSATA or Multilane SAS</title>
	<author>petree</author>
	<datestamp>1245838560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>First, let me suggest you consider using a card with a multilane SAS (4x) connector, also called an InfiniBand connector, instead of an eSATA connection.  These connectors are just 4x SATA/SAS bundled into one, so each drive gets full bandwidth instead of pushing it all across one 1.5/3gbps eSATA connection.  You can even buy a bracket that will combine 4x internal sata connectors to make one multilane sas connector; these run ~$20, so you can use onboard sata for 4 of your external drives if you've got extra sata ports on your mobo.<br>That said, here's the solution I have running with ZFS RaidZ under OpenSolaris (also used under Linux + FUSE ZFS):<br>DatOptic EBOX-M - 8Bay Sata Enclosure (other EBOX/QBOX enclosures available with eSATA/MiniSAS/USB/Firewire instead of Multilane SAS)<br>4x WD 1TB RE2 'Green Drives'<br>Addonics ADSA3GPX8-ML (Silicon Image 3124 based, certified for Solaris, works in Linux/Windows...Mac?)</p><p>I kind of wish I hadn't spent the extra loot on the 8-bay enclosure, since I still haven't dumped a second set of 4 drives in, but I imagine someday I'll fill it out.  The enclosure is simple, no hot swap, just insert bare drives into the bays, no sleds/mounting hardware required.</p></htmltext>
<tokenext>First , let me suggest you consider using a card with a multilane SAS ( 4x ) connector , also called infiniband , instead of an eSATA connection .
These connectors are just 4x SATA/SAS bundled into one , so each drive gets full bandwidth instead of pushing it all across one 1.5/3gbps eSATA connection .
You can even by a bracket that will combine 4x internal sata connectors to make one multilane sas connector , these run ~ $ 20 , so you can use onboard sata for 4 of your external drives if you 've got extra sata ports on your mobo.That said , here 's the solution I have running with ZFS RaidZ under OpenSolaris ( also used under Linux + FUSE ZFS ) : DatOptic EBOX-M - 8Bay Sata Enclosure ( other EBOX/QBOX enclosure available with eSATA/MiniSAS/USB/Firewire instead of Multilane SAS ) 4x WD 1TB RE2 'Green Drives'Addonics ADSA3GPX8-ML ( SilImage 3124 Based , Certified for Solaris , works in Linux/Windows...MAC ?
) I kind of wish I had n't spent the extra loot on a the 8bay enclosure , since I still have n't dumped a second set of 4 drives , but imagine someday I 'll fill it out .
The enclosure is simple , no hot swap , just insert bare drives into the bays , no sleds/mounting hardware required .</tokentext>
<sentencetext>First, let me suggest you consider using a card with a multilane SAS (4x) connector, also called infiniband, instead of an eSATA connection.
These connectors are just 4x SATA/SAS bundled into one, so each drive gets full bandwidth instead of pushing it all across one 1.5/3gbps eSATA connection.
You can even buy a bracket that will combine 4x internal sata connectors to make one multilane sas connector; these run ~$20, so you can use onboard sata for 4 of your external drives if you've got extra sata ports on your mobo.
That said, here's the solution I have running with ZFS RaidZ under OpenSolaris (also used under Linux + FUSE ZFS): DatOptic EBOX-M - 8Bay Sata Enclosure (other EBOX/QBOX enclosures available with eSATA/MiniSAS/USB/Firewire instead of Multilane SAS), 4x WD 1TB RE2 'Green Drives', Addonics ADSA3GPX8-ML (Silicon Image 3124 based, certified for Solaris, works in Linux/Windows...Mac?).
I kind of wish I hadn't spent the extra loot on the 8-bay enclosure, since I still haven't dumped a second set of 4 drives in, but I imagine someday I'll fill it out.
The enclosure is simple, no hot swap, just insert bare drives into the bays, no sleds/mounting hardware required.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430351</id>
	<title>How about this...</title>
	<author>Anonymous</author>
	<datestamp>1245670440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>http://www.newegg.com/Product/Product.aspx?Item=N82E16816111048</p></htmltext>
<tokenext>http : //www.newegg.com/Product/Product.aspx ? Item = N82E16816111048</tokentext>
<sentencetext>http://www.newegg.com/Product/Product.aspx?Item=N82E16816111048</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430983</id>
	<title>Re:Wut</title>
	<author>Anonymous</author>
	<datestamp>1245672780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Can't agree with the masking tape. If you  don't peel it off pretty quick, you get an awful residue. Duct tape residue will clean off with a little rubbing alcohol.</p></htmltext>
<tokenext>Ca n't agree with the masking tape .
If you do n't peel it off pretty quick , you get an awful residue .
Duct tape residue will clean off with a little rubbing alcohol .</tokentext>
<sentencetext>Can't agree with the masking tape.
If you  don't peel it off pretty quick, you get an awful residue.
Duct tape residue will clean off with a little rubbing alcohol.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431155</id>
	<title>Re:Why?</title>
	<author>Jason Pollock</author>
	<datestamp>1245673320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In my case, it's because I don't care if I lose the data.  They're rips of DVDs/CDs that I own, so 1 DVD represents 5 minutes of time.  In a lot of RAID setups, if you lose a disk, you lose the entire RAID.  In others, if you lose the card/motherboard, you lose the entire RAID.
</p><p>
In that situation, the frustration represented by losing the entire array when a disk (or card) bites the dust is a lot higher than the performance benefits, or the supposed reliability benefit.
</p><p>
Remember, in a consumer environment it can take \_weeks\_ to replace a drive under warranty.
</p><p>
Heck, I use drive failures as a method of culling media on the server.<nobr> <wbr></nobr>:)
</p></htmltext>
<tokenext>In my case , it 's because I do n't care if I lose the data .
They 're rips of DVDs/CDs that I own , so 1 DVD represents 5minutes of time .
In a lot of the RAID setups , if you lose a disk , you lose the entire RAID .
In others , if you lose the card/motherboard , you lose the entire RAID .
In that situation , the frustration represented by losing the entire array when a disk ( or card ) bites the dust is a lot higher than the performance benefits , or the supposed reliability benefit .
Remember , in a consumer environment it can take \ _weeks \ _ to replace a drive under warranty .
Heck , I use drive failures as a method of culling media on the server .
: )</tokentext>
<sentencetext>In my case, it's because I don't care if I lose the data.
They're rips of DVDs/CDs that I own, so 1 DVD represents 5 minutes of time.
In a lot of the RAID setups, if you lose a disk, you lose the entire RAID.
In others, if you lose the card/motherboard, you lose the entire RAID.
In that situation, the frustration represented by losing the entire array when a disk (or card) bites the dust is a lot higher than the performance benefits, or the supposed reliability benefit.
Remember, in a consumer environment it can take \_weeks\_ to replace a drive under warranty.
Heck, I use drive failures as a method of culling media on the server.
:)
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429663</id>
	<title>Sonnet Tech Fusion line</title>
	<author>Anonymous</author>
	<datestamp>1245667800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I like the Sonnet Technologies Fusion line.  They are well made and well supported (Mac and Linux friendly, too).</p></htmltext>
<tokenext>I like the Sonnet Technologies Fusion line .
They are well made , and well supported ( Mac and Linux Friendly also )</tokentext>
<sentencetext>I like the Sonnet Technologies Fusion line.
They are well made, and well supported (Mac and Linux Friendly also)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28432759</id>
	<title>Re:I stopped reading the summary</title>
	<author>magarity</author>
	<datestamp>1245680160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I stopped reading after he uselessly bragged about upgrading the processor and memory.  Isn't there a 'lookatme' tag?</p></htmltext>
<tokenext>I stopped reading after he uselessly bragged about upgrading the processor and memory .
Is n't there a 'lookatme ' tag ?</tokentext>
<sentencetext>I stopped reading after he uselessly bragged about upgrading the processor and memory.
Isn't there a 'lookatme' tag?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429879</id>
	<title>Re:I stopped reading the summary</title>
	<author>Dan Stephans II</author>
	<datestamp>1245668640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Until your controller goes berserk and craps all over your disk or your other disk fails in the middle of the rebuild.  Or...</htmltext>
<tokenext>Until your controller goes berserk and craps all over your disk or your other disk fails in the middle of the rebuild .
Or.. .</tokentext>
<sentencetext>Until your controller goes berserk and craps all over your disk or your other disk fails in the middle of the rebuild.
Or...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28432993</id>
	<title>Re:Old AT (pre-ATX) case</title>
	<author>cadu</author>
	<datestamp>1245681360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Cool solution, BTW, if you don't have an AT case and power supply, you can 'emulate' that hardware switch behavior even with an ATX case:</p><p>
&nbsp; 1- Get the main 20/24-pin atx molex and use a clip or a small piece of wire to permanently patch the only \_green\_ wire (the atx power-on sense) to any black wire (GND); or, if you don't care about reusing the power supply, ditch the molex, insulate everything else, twist the green wire and one black wire together, and insulate them.</p><p>
&nbsp; 2- If you have any cruft inside this case (old cdrom drives/motherboard) remove everything and put only the hard drives for extra air flow<nobr> <wbr></nobr>:)</p><p>
&nbsp; 3- wire with sata-2-esata adapters or very long sata cables.</p><p>
&nbsp; 4- Turn on your 'JBOD' with the power supply's back power switch (AT-like behavior)</p><p>
&nbsp; 5- ????</p><p>
&nbsp; 6- Profit??</p></htmltext>
<tokenext>Cool solution , BTW , if you do n't have an AT case and power supply , you can 'emulate ' that hardware switch behavior even with an ATX case :   1- Get the main 20/24-pin atx molex and use a clip or a small piece of wire to patch permanently the only \ _green \ _ ( atx power sens ) wire to any black wire ( GND ) or if you do n't care about reusing the power supply , ditch the molex , insulate everything else and twist the green and one black wires together , and insulate .
  2- If you have any cruft inside this case ( old cdrom drives/motherboard ) remove everything and put only the hard drives for extra air flow : )   3- wire with sata-2-esata adapters or very long sata cables .
  4- Turn on your 'JBOD ' with the power supply 's back power switch ( AT-like behavior )   5- ? ? ? ?
  6- Profit ?
?</tokentext>
<sentencetext>Cool solution, BTW, if you don't have an AT case and power supply, you can 'emulate' that hardware switch behavior even with an ATX case:
  1- Get the main 20/24-pin atx molex and use a clip or a small piece of wire to patch permanently the only \_green\_ (atx power sens) wire to any black wire (GND) or if you don't care about reusing the power supply, ditch the molex, insulate everything else and twist the green and one black wires together, and insulate.
  2- If you have any cruft inside this case (old cdrom drives/motherboard) remove everything and put only the hard drives for extra air flow :)
  3- wire with sata-2-esata adapters or very long sata cables.
  4- Turn on your 'JBOD' with the power supply's back power switch (AT-like behavior)
  5- ????
  6- Profit?
?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431237</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429711</id>
	<title>Just. Fucking. Google. It.</title>
	<author>Anonymous</author>
	<datestamp>1245667980000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>"Hey guys, got a quick question"</p><p>Just. Fucking. Google. It.</p><p>JUST</p><p>FUCKING</p><p>GOOGLE</p><p>IT</p></htmltext>
<tokenext>" Hey guys , got a quick question " Just .
Fucking. Google .
It.JUSTFUCKINGGOOGLEIT</tokentext>
<sentencetext>"Hey guys, got a quick question"
Just.
Fucking.
Google.
It.
JUST FUCKING GOOGLE IT</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430419</id>
	<title>bare hard drives</title>
	<author>Anonymous</author>
	<datestamp>1245670620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I like to use bare hard drives.  The Thermaltake BlacX is a great little dock that lets you plug in bare hard drives like Nintendo cartridges.  I also like Hudzee cases for protecting the bare drives (see <a href="http://hudzee.com/" title="hudzee.com" rel="nofollow">hudzee.com</a> [hudzee.com] ).</p></htmltext>
<tokenext>I like to use bare hard drives .
The Thermaltake BlacX is a great little dock that lets you plug in bare hard drives like Nintendo cartridges .
I also like Hudzee cases for protecting the bare drives ( see hudzee.com [ hudzee.com ] ) .</tokentext>
<sentencetext>I like to use bare hard drives.
The Thermaltake BlacX is a great little dock that lets you plug in bare hard drives like Nintendo cartridges.
I also like Hudzee cases for protecting the bare drives (see hudzee.com [hudzee.com] ).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429891</id>
	<title>Re:I stopped reading the summary</title>
	<author>Anonymous</author>
	<datestamp>1245668700000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>What's a backup?</p></htmltext>
<tokenext>What 's a backup ?</tokentext>
<sentencetext>What's a backup?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28439095</id>
	<title>CFI-B8283ER</title>
	<author>kobold2</author>
	<datestamp>1245772860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>8-bay eSATA, RAID 0/1/10/5, S.M.A.R.T., hot-swap, sparing, internal 300W PSU, affordable.</htmltext>
<tokenext>8-bay esata , raid 0,1,10,5 , s.m.a.r.t. , hot-swap , sparing , internal 300w psu , affordable .</tokentext>
<sentencetext>8-bay esata, raid 0,1,10,5, s.m.a.r.t., hot-swap, sparing, internal 300w psu, affordable.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28440709</id>
	<title>QNap NAS or SansDigital for small needs</title>
	<author>Anonymous</author>
	<datestamp>1245778980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>QNAP makes the best NAS devices outside Network Appliance filers (IMHO).</p><p>For a simple external eSata JBOD, I highly suggest a SansDigital TowerRAID device. They are low-cost (I paid $249 for my 8-drive-bay unit) and come with a dual-port PCIe eSata fake-raid controller. It works great as a "rack" of storage for my openfiler NAS (I'm hosting anywhere from 22-100 VMs for datacenter development work off this little JBOD).</p></htmltext>
<tokenext>QNap make the best NAS devices outside Network Appliance filers ( IMHO ) .For a simple external eSata JBOD , I highly suggest a SansDigital TowerRAID device .
they are low-cost ( I paid 249 for my 8 drive bay unit ) and come with a dual port PCIe eSata fake raid controller .
It works great as a " rack " of storage for my openfiler NAS .
( I 'm hosting anywhere from 22-100 VMs for datacenter development work ) off this little jbod .</tokentext>
<sentencetext>QNAP makes the best NAS devices outside Network Appliance filers (IMHO).
For a simple external eSata JBOD, I highly suggest a SansDigital TowerRAID device.
They are low-cost (I paid $249 for my 8-drive-bay unit) and come with a dual-port PCIe eSata fake-raid controller.
It works great as a "rack" of storage for my openfiler NAS (I'm hosting anywhere from 22-100 VMs for datacenter development work off this little JBOD).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431915</id>
	<title>Drobo</title>
	<author>Anonymous</author>
	<datestamp>1245676260000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext>See the <a href="http://drobo.com/products/index.php" title="drobo.com">Drobos</a> [drobo.com] at the linked page. No eSATA, but perhaps you can get an eSATA -&gt; firewire 800 or iSCSI (sp?) dongle. The best part about them is ease-of-use.</htmltext>
<tokenext>See the Drobos [ drobo.com ] at the linked page .
No eSATA , but perhaps you can get an eSATA - &gt; firewire 800 or iSCSI ( sp ?
) dongle .
The best part about them is ease-of-use .</tokentext>
<sentencetext>See the Drobos [drobo.com] at the linked page.
No eSATA, but perhaps you can get an eSATA -&gt; firewire 800 or iSCSI (sp?
) dongle.
The best part about them is ease-of-use.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430307</id>
	<title>Re:I stopped reading the summary</title>
	<author>Anonymous</author>
	<datestamp>1245670320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Just to clear up what the parent was saying, RAID is not a backup solution because it does not protect against fdisk, rm, mkfs, <a href="http://linux.slashdot.org/story/09/03/11/2031231/Apps-That-Rely-On-Ext3s-Commit-Interval-May-Lose-Data-In-Ext4?art\_pos=9" title="slashdot.org" rel="nofollow">crappy filesystems</a> [slashdot.org] or any other weapons of data destruction. Backup solutions do protect against all these and more.</p></htmltext>
<tokenext>Just to clear up what the parent was saying , RAID is not a backup solution because it does not protect against fdisk , rm , mkfs , crappy filesystems [ slashdot.org ] or any other weapons of data destruction .
Backup solutions do protect against all these and more .</tokentext>
<sentencetext>Just to clear up what the parent was saying, RAID is not a backup solution because it does not protect against fdisk, rm, mkfs, crappy filesystems [slashdot.org] or any other weapons of data destruction.
Backup solutions do protect against all these and more.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28591211</id>
	<title>Re:I stopped reading the summary</title>
	<author>obarthelemy</author>
	<datestamp>1246818000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>fixed it for you:</p><p>Someone tell all those people<nobr> <wbr></nobr>... BUYING<nobr> <wbr></nobr>...</p><p>from the Amazon TOS:</p><p>"NEITHER WE NOR ANY OF OUR LICENSORS SHALL BE LIABLE TO YOU FOR ANY [...] DAMAGES[...] INCLUDING, WITHOUT LIMITATION, ANY SUCH DAMAGES RESULTING FROM:<br>(i) THE USE OR THE INABILITY TO USE THE SERVICES;<br>(ii) THE COST OF PROCUREMENT OF SUBSTITUTE GOODS AND SERVICES;<br>(iii) UNAUTHORIZED ACCESS TO OR ALTERATION OF YOUR CONTENT.</p><p>IN ANY CASE, OUR AGGREGATE LIABILITY UNDER THIS AGREEMENT SHALL BE LIMITED TO THE AMOUNT ACTUALLY PAID BY YOU TO US HEREUNDER FOR THE SERVICES. "</p><p>hey, that does inspire confidence... or not.</p></htmltext>
<tokenext>fixed it for you : Someone tell all those people ... BUYING ...from the Amazon TOS : " NEITHER WE NOR ANY OF OUR LICENSORS SHALL BE LIABLE TO YOU FOR ANY [ ... ] DAMAGES [ ... ] INCLUDING , WITHOUT LIMITATION , ANY SUCH DAMAGES RESULTING FROM : ( i ) THE USE OR THE INABILITY TO USE THE SERVICES ; ( ii ) THE COST OF PROCUREMENT OF SUBSTITUTE GOODS AND SERVICES ; ( iii ) UNAUTHORIZED ACCESS TO OR ALTERATION OF YOUR CONTENT.IN ANY CASE , OUR AGGREGATE LIABILITY UNDER THIS AGREEMENT SHALL BE LIMITED TO THE AMOUNT ACTUALLY PAID BY YOU TO US HEREUNDER FOR THE SERVICES .
" hey , that does inspire confidence... or not .</tokentext>
<sentencetext>fixed it for you:Someone tell all those people ... BUYING ...from the Amazon TOS:"NEITHER WE NOR ANY OF OUR LICENSORS SHALL BE LIABLE TO YOU FOR ANY [...] DAMAGES[...] INCLUDING, WITHOUT LIMITATION, ANY SUCH DAMAGES RESULTING FROM:(i) THE USE OR THE INABILITY TO USE THE SERVICES;(ii) THE COST OF PROCUREMENT OF SUBSTITUTE GOODS AND SERVICES;(iii) UNAUTHORIZED ACCESS TO OR ALTERATION OF YOUR CONTENT.IN ANY CASE, OUR AGGREGATE LIABILITY UNDER THIS AGREEMENT SHALL BE LIMITED TO THE AMOUNT ACTUALLY PAID BY YOU TO US HEREUNDER FOR THE SERVICES.
"hey, that does inspire confidence... or not.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443827</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855</id>
	<title>Why?</title>
	<author>Doug Neal</author>
	<datestamp>1245668520000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think Linux and Windows can both do this quite easily in software... but why bother? JBOD is the worst of both worlds when it comes to storage arrays. You have all the risk of losing everything if one drive dies, without gaining the performance benefit of RAID 0's striping. Hard disks are cheap enough for a 2TB RAID 10 array to be affordable.</p><p>Yes, this was quite a predictable comment, but someone had to say it...</p></htmltext>
<tokenext>I think Linux and Windows can both do this quite easily in software... but why bother ?
JBOD is the worst of both worlds when it comes to storage arrays .
You have all the risk of losing everything if one drive dies , without gaining the performance benefits that RAID 0 's striping gives you .
Hard disks are cheap enough for a 2TB RAID 10 array to be affordable.Yes this was quite a predictable comment , but someone had to say it. .</tokentext>
<sentencetext>I think Linux and Windows can both do this quite easily in software... but why bother?
JBOD is the worst of both worlds when it comes to storage arrays.
You have all the risk of losing everything if one drive dies, without gaining the performance benefits that RAID 0's striping gives you.
Hard disks are cheap enough for a 2TB RAID 10 array to be affordable.Yes this was quite a predictable comment, but someone had to say it..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430965</id>
	<title>Re:If you need more than ten disks, go for cheap S</title>
	<author>zen\_sky</author>
	<datestamp>1245672720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Jeez, why do I feel like I drank my slurpee too fast?
Thanks for the info dump! :)</htmltext>
<tokenext>Jeez , why do I feel I like drank my slurpee too fast ?
Thanks for the info dump !
: )</tokentext>
<sentencetext>Jeez, why do I feel I like drank my slurpee too fast?
Thanks for the info dump!
:)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433127</id>
	<title>home array backups?</title>
	<author>Anonymous</author>
	<datestamp>1245682020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>As an on-topic secondary question: how do people do backups of large (5TB) arrays?  Specifically, I'm looking at my media server at home.  I -could- buy a tape drive, but those look like they get expensive quickly.</p></htmltext>
<tokenext>As a on-topic secondary question : how do people do backups of large ( 5TB ) arrays ?
Specifically , I 'm looking at my media server at home .
I -could- buy a tape drive , but those look like they get expensive quickly .</tokentext>
<sentencetext>As a on-topic secondary question: how do people do backups of large (5TB) arrays?
Specifically, I'm looking at my media server at home.
I -could- buy a tape drive, but those look like they get expensive quickly.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28434215</id>
	<title>Re:Why?</title>
	<author>adolf</author>
	<datestamp>1245688380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>large JBOD and software RAID</p></div><p>Isn't that redundant?  What does the above text specify which could not be concisely written with just the words "software RAID"?</p></htmltext>
<tokenext>large JBOD and software RAID/quoteIs n't that redundant ?
What does the above text specify which could not be concisely written with just the words " software RAID " ?</tokentext>
<sentencetext>large JBOD and software RAID/quoteIsn't that redundant?
What does the above text specify which could not be concisely written with just the words "software RAID"?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430349</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429755</id>
	<title>The best ESATA isn't really ESATA at all.</title>
	<author>Anonymous</author>
	<datestamp>1245668100000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>eSATA is meant as a simple solution to replace the USB 2.0 interface on external drives. It solves the bottleneck for a single drive, but doesn't scale well.</p><p>You're better off with a SAS external enclosure and a SAS card with external connections.  These can be expensive, but will pay for themselves quickly in reduced management overhead.</p></htmltext>
<tokenext>ESATA is meant as a simple solution to replace the usb 2 interface on external drives .
It solves the bottle-neck for a single drive , but does n't scale well.You 're better off with an SAS external enclosure and a SAS card with external connections .
These can be expensive , but will pay for themselves quickly with the lack of extra management .</tokentext>
<sentencetext>ESATA is meant as a simple solution to replace the usb 2 interface on external drives.
It solves the bottle-neck for a single drive, but doesn't scale well.You're better off with an SAS external enclosure and a SAS card with external connections.
These can be expensive, but will pay for themselves quickly with the lack of extra management.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430091</id>
	<title>Re:I stopped reading the summary</title>
	<author>drsmithy</author>
	<datestamp>1245669480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>after the cretin suggested that RAID was some sort of substitute for a backup.</i>
</p><p>RAID combined with a snapshotting system (Time Machine, VSS, ZFS, take your pick) can function as an excellent backup system.  Not including off-site, obviously, but more than adequate for the typical home user.
</p><p>I've never really looked into it, but I assume you can configure WHS to take regular VSS snapshots?</p></htmltext>
<tokenext>after the cretin suggested that RAID was some sort of substitute for a backup .
RAID combined with a snapshotting system ( Time Machine , VSS , ZFS , take your pick ) can function as an excellent backup system .
Not including off-site , obviously , but more than adequate for the typical home user .
I 've never really looked into it , but I assume you can configure WHS to take regular VSS snapshots ?</tokentext>
<sentencetext> after the cretin suggested that RAID was some sort of substitute for a backup.
RAID combined with a snapshotting system (Time Machine, VSS, ZFS, take your pick) can function as an excellent backup system.
Not including off-site, obviously, but more than adequate for the typical home user.
I've never really looked into it, but I assume you can configure WHS to take regular VSS snapshots ?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
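To make the snapshotting idea in the comment above concrete: tools like Time Machine and rsync's --link-dest keep many point-in-time copies cheaply by hardlinking files that haven't changed against the previous snapshot. Here is a minimal Python sketch of that technique (illustrative only; the `snapshot` function and layout are invented here, and this is not how WHS or VSS actually work):

```python
import filecmp
import os
import shutil
import time

def snapshot(src, dest_root):
    """Create a timestamped snapshot of src under dest_root.

    Files unchanged since the previous snapshot are hardlinked
    instead of copied, so each snapshot only costs the space of
    what actually changed (the rsync --link-dest trick).
    """
    os.makedirs(dest_root, exist_ok=True)
    snaps = sorted(os.listdir(dest_root))
    prev = os.path.join(dest_root, snaps[-1]) if snaps else None
    dest = os.path.join(dest_root, time.strftime("%Y-%m-%dT%H%M%S"))
    for dirpath, _, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(dest, rel, name)
            p = os.path.join(prev, rel, name) if prev else None
            if p and os.path.isfile(p) and filecmp.cmp(s, p, shallow=False):
                os.link(p, d)       # unchanged: share storage with last snapshot
            else:
                shutil.copy2(s, d)  # new or modified: real copy
    return dest
```

Each snapshot is a full directory tree you can browse, but unchanged files share storage, which is why this protects against rm/mkfs-style mistakes in a way RAID alone cannot.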
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430269</id>
	<title>The Rosewill RSV-S8</title>
	<author>UserChrisCanter4</author>
	<datestamp>1245670200000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext>The <a href="http://www.rosewill.com/products/s_1189/productDetail.htm" title="rosewill.com">Rosewill RSV-S8</a> [rosewill.com] is pretty much exactly what you've described.  It's an eSATA enclosure with 8 drive caddies, a power supply, and a fan.  It presents the drives to the system as JBOD or one of the various common versions of RAID (implemented in software, I assume).  Ignore the comically inflated MSRP; it's $300 on Newegg.

It ships with its own eSATA card for compatibility purposes, but I assume it would work with any eSATA adapter that followed the proper specifications.  There's also a five drive version available for about $100 less, give or take.

I can't speak to the reliability or ease of use, but this sounds like it will fit your requirements.</htmltext>
<tokenext>The Rosewill RSV-S8 [ rosewill.com ] is pretty much exactly what you 've described .
It 's an eSATA enclosure with 8 drive caddies , a power supply , and a fan .
It presents the drives to the system as JBOD or one of the various common versions of RAID ( implemented in software , I assume ) .
Ignore the comically inflated MSRP ; it 's $ 300 on Newegg .
It ships with its own eSATA card for compatibility purposes , but I assume it would work with any eSATA adapter that followed the proper specifications .
There 's also a five drive version available for about $ 100 less , give or take .
I ca n't speak to the reliability or ease of use , but this sounds like it will fit your requirements .</tokentext>
<sentencetext>The Rosewill RSV-S8 [rosewill.com] is pretty much exactly what you've described.
It's an eSATA enclosure with 8 drive caddies, a power supply, and a fan.
It presents the drives to the system as JBOD or one of the various common versions of RAID (implemented in software, I assume).
Ignore the comically inflated MSRP; it's $300 on Newegg.
It ships with its own eSATA card for compatibility purposes, but I assume it would work with any eSATA adapter that followed the proper specifications.
There's also a five drive version available for about $100 less, give or take.
I can't speak to the reliability or ease of use, but this sounds like it will fit your requirements.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28456347</id>
	<title>How does battery backup help?</title>
	<author>pestie</author>
	<datestamp>1245872280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've often wondered this and have yet to see an answer to it. How does having battery backup on your RAID card's cache help anything when your operating system is probably doing a buttload of caching in system RAM? A crash or power outage is still going to throw that away, leading to a just-as-corrupted filesystem.</p></htmltext>
<tokenext>I 've often wondered this and have yet to see an answer to it .
How does having battery back up on your RAID card 's cache help anything when your operating system is probably doing a buttload of caching in system RAM ?
A crash or power outage is still going to throw that away , leading to a just-as-corrupted filesystem .</tokentext>
<sentencetext>I've often wondered this and have yet to see an answer to it.
How does having battery back up on your RAID card's cache help anything when your operating system is probably doing a buttload of caching in system RAM?
A crash or power outage is still going to throw that away, leading to a just-as-corrupted filesystem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433169</id>
	<title>Re:I stopped reading the summary</title>
	<author>growse</author>
	<datestamp>1245682260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Again, why bring RAID into it? A disk combined with a decent snapshotting system can function as an excellent backup system.
<br>
<br>
RAID use is orthogonal to backup strategy. The two have nothing to do with each other. RAID helps availability, and sometimes performance.</htmltext>
<tokenext>Again , why bring RAID into it ?
A disk combined with a decent snapshotting system can function as an excellent backup system .
RAID use is orthogonal to backup strategy .
The two have nothing to do with each other .
RAID helps availability , and sometimes performance .</tokentext>
<sentencetext>Again, why bring RAID into it?
A disk combined with a decent snapshotting system can function as an excellent backup system.
RAID use is orthogonal to backup strategy.
The two have nothing to do with each other.
RAID helps availability, and sometimes performance.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430091</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433323</id>
	<title>pc-pitstop.com</title>
	<author>kenh</author>
	<datestamp>1245683100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At $WORK we just got a nice 8-bay rackmount eSATA chassis from them - dual/redundant power supply, two quad-port SAS connectors, about $895, or $679 for the single power supply version. We bought it with 8x 1TB SATA HDs and an Areca RAID card with cables for just over $2200. (It is available as a chassis without cables, cards, or drives.)</p></htmltext>
<tokenext>At $ WORK we just got a nice 8 bay rackmount eSATA chassis from them - dual/redundant power supply two quad-port SAS connectors , about $ 895 , $ 679 for single power supply version .
We bought it with 8x 1TB SATA HDs and an Areca RADI card with cables for just over $ 2200 .
( it is available as a chassis without cables , cards , or drives ) .</tokentext>
<sentencetext>At $WORK we just got a nice 8 bay rackmount eSATA chassis from them - dual/redundant power supply two quad-port SAS connectors, about $895, $679 for single power supply version.
We bought it with 8x 1TB SATA HDs and an Areca RADI card with cables for just over $2200.
(it is available as a chassis without cables, cards, or drives).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429683</id>
	<title>Duct tape</title>
	<author>sakdoctor</author>
	<datestamp>1245667860000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Duct tape the drives together, then use software RAID JBOD.<br>That's what MacGyver would have done.</p></htmltext>
<tokenext>Duct tape the drives together , then use software RAID JBOD.That 's what MacGyver would have done .</tokentext>
<sentencetext>Duct tape the drives together, then use software RAID JBOD.That's what MacGyver would have done.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28444293</id>
	<title>Re:I stopped reading the summary</title>
	<author>OriginalSolver</author>
	<datestamp>1245747960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I really hope you are joking.

Here is a talk I did on backups recently for a LUG:

<a href="http://www.timetraveller.org/talks/backup_talk.pdf" title="timetraveller.org" rel="nofollow">http://www.timetraveller.org/talks/backup_talk.pdf</a> [timetraveller.org]

Please download and read this.  It states explicitly why your statement is false.</htmltext>
<tokenext>I really hope you are joking .
Here is a talk I did on backups recently for a LUG : http : //www.timetraveller.org/talks/backup \ _talk.pdf [ timetraveller.org ] Please download and read this .
It states explicitly why your statement is false .</tokentext>
<sentencetext>I really hope you are joking.
Here is a talk I did on backups recently for a LUG:

http://www.timetraveller.org/talks/backup\_talk.pdf [timetraveller.org]

Please download and read this.
It states explicitly why your statement is false.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28438343</id>
	<title>WHS</title>
	<author>po134</author>
	<datestamp>1245769680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>WHS, so you can only "mirror" data that matters.</htmltext>
<tokenext>WHS , so you can only " mirror " data that matters .</tokentext>
<sentencetext>WHS, so you can only "mirror" data that matters.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429853</id>
	<title>raid can help with backups</title>
	<author>Anonymous</author>
	<datestamp>1245668520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>it appears he is expanding a home server.<br>raid can help with backups, if he needs a large volume to back up OTHER COMPUTERS ON HIS NETWORK!</p></htmltext>
<tokenext>it appears he is expanding a home server.raid can help with backups , if he needs a large volume to back up OTHER COMPUTERS ON HIS NETWORK !</tokentext>
<sentencetext>it appears he is expanding a home server.raid can help with backups, if he needs a large volume to back up OTHER COMPUTERS ON HIS NETWORK!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429875</id>
	<title>Re:I stopped reading the summary</title>
	<author>sjames</author>
	<datestamp>1245668580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>You do know that a RAID can be used for STORING backups, don't you? Making your primary storage a RAID is no substitute for a backup. Adding offline RAID storage can be a backup.</p></htmltext>
<tokenext>You do know that a RAID can be used for STORING backups do n't you ?
Making your primary storage a RAID is no substitute for a backup .
Adding an offline RAID storage can be a backup .</tokentext>
<sentencetext>You do know that a RAID can be used for STORING backups don't you?
Making your primary storage a RAID is no substitute for a backup.
Adding an offline RAID storage can be a backup.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</id>
	<title>Re:I stopped reading the summary</title>
	<author>lobiusmoop</author>
	<datestamp>1245668100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.</p></htmltext>
<tokenext>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution .</tokentext>
<sentencetext>RAID 1 + swapping out/rebuilding a mirror disk periodically is a perfectly reasonable backup solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28538577</id>
	<title>unRAID</title>
	<author>Coppit</author>
	<datestamp>1246383000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you're willing to get your hands a little dirty, check out <a href="http://lime-technology.com/" title="lime-technology.com">unRAID</a> [lime-technology.com]. You put the OS (Linux) and software on a flash drive and boot from that. You have a big parity drive and a bunch of data drives that are just ReiserFS. The user share feature can aggregate all your files into one big virtual filesystem. When you run out of space you just pop another drive in, or pop out a small drive and put a larger one in, then wait for the data to be rebuilt from parity. You don't have to worry about your RAID controller dying, or a two-disk failure (you'll still have the other disks of data). I built my machine for maybe a couple hundred bucks, and just added my 7th disk. I have smaller disks in there, but don't bother to remove them.</htmltext>
<tokenext>If you 're willing to get your hands a little dirty , check out unRAID [ lime-technology.com ] .
You put the OS ( linux ) and software on a flash drive and boot from that .
You have a big parity drive and a bunch of data drives that are just ReiserFS .
The user share feature can aggregate all your files into one big virtual filesystem .
When you run out of space you just pop another drive in , or pop out a small drive and put a larger on in , then wait for the data to be rebuilt from parity .
You do n't have to worry about your RAID controller dying , or a two disk failure ( you 'll still have the other disks of data ) .
I built my machine for maybe a couple hundred bucks , and just added my 7th disk .
I have smaller disks in there , but do n't bother to remove them .</tokentext>
<sentencetext>If you're willing to get your hands a little dirty, check out unRAID [lime-technology.com].
You put the OS (linux) and software on a flash drive and boot from that.
You have a big parity drive and a bunch of data drives that are just ReiserFS.
The user share feature can aggregate all your files into one big virtual filesystem.
When you run out of space you just pop another drive in, or pop out a small drive and put a larger on in, then wait for the data to be rebuilt from parity.
You don't have to worry about your RAID controller dying, or a two disk failure (you'll still have the other disks of data).
I built my machine for maybe a couple hundred bucks, and just added my 7th disk.
I have smaller disks in there, but don't bother to remove them.</sentencetext>
</comment>
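For reference, the single-parity scheme the comment above describes is plain XOR across the data drives at each offset. That is why the parity drive must be at least as large as the biggest data drive, and why any one failed drive can be rebuilt from the survivors plus parity. A toy sketch (drives modeled as byte strings; a real array works block by block and tracks true drive sizes separately):

```python
def xor_blocks(blocks, size):
    """XOR byte blocks together, zero-padding shorter ones to `size`."""
    out = bytearray(size)
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def build_parity(drives):
    # The parity drive must be at least as large as the largest data drive.
    return xor_blocks(drives, max(len(d) for d in drives))

def rebuild(surviving_drives, parity):
    """Recover the one missing drive: XOR of parity and all survivors.

    The result is zero-padded out to the parity size; real arrays
    know each drive's true size and trim accordingly.
    """
    return xor_blocks(list(surviving_drives) + [parity], len(parity))
```

Because XOR only tolerates one unknown, losing two drives at once loses both of them, though, as the comment notes, the remaining data drives are still intact filesystems on their own.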
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430789</id>
	<title>Re:Wut</title>
	<author>Captain Segfault</author>
	<datestamp>1245672060000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><tt>JBOD means "present the drives individually" (as in, don't present them as a single giant possibly RAIDed disk)<br><br>I would call your solution a (primitive) JBOD. However, ideally you only need to connect one data cable to the entire shelf, rather than one per individual disk, although that's a little hard to do with SATA. (in contrast to SAS or other SCSI)</tt></htmltext>
<tokenext>JBOD means " present the drives individually " ( as in , do n't present them as a single giant possibly RAIDed disk ) I would call your solution a ( primitive ) JBOD .
However , ideally you only need to connect one data cable to the entire shelf , rather than one per individual disk , although that 's a little hard to do with SATA .
( in contrast to SAS or other SCSI )</tokentext>
<sentencetext>JBOD means "present the drives individually" (as in, don't present them as a single giant possibly RAIDed disk)I would call your solution a (primitive) JBOD.
However, ideally you only need to connect one data cable to the entire shelf, rather than one per individual disk, although that's a little hard to do with SATA.
(in contrast to SAS or other SCSI)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28444257</id>
	<title>Re:Why?</title>
	<author>atamido</author>
	<datestamp>1245747840000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>No that's not correct.  JBOD is just that.  Just a bunch of disks.  Has nothing to do with redundancy (or lack of redundancy).</p></div><p>This is incorrect.  JBOD is similar to RAID 0 without striping, allowing one to use disks of dissimilar size.  There are some RAID controllers that incorrectly use "JBOD" to mean presenting the physical drives directly.  However, most RAID controllers will correctly present a JBOD as a single logical volume.</p><p>Please refer to the <a href="http://en.wikipedia.org/wiki/Standard_RAID_levels#Concatenation_.28SPAN.29" title="wikipedia.org">Wikipedia article on RAID</a> [wikipedia.org].</p></htmltext>
<tokenext>No that 's not correct .
JBOD is just that .
Just a bunch of disks .
Has nothing to do with redundancy ( or lack of redundancy ) .This is incorrect .
JBOD is similar to RAID 0 without striping , allowing one to use disks of dissimilar size .
There are some RAID controllers that will incorrectly refer to presenting physical drives directly .
However most RAID will correctly present a JBOD as a single logical volume.Please refer to the Wikipedia article on RAID [ wikipedia.org ] .</tokentext>
<sentencetext>No that's not correct.
JBOD is just that.
Just a bunch of disks.
Has nothing to do with redundancy (or lack of redundancy).This is incorrect.
JBOD is similar to RAID 0 without striping, allowing one to use disks of dissimilar size.
There are some RAID controllers that will incorrectly refer to presenting physical drives directly.
However most RAID will correctly present a JBOD as a single logical volume.Please refer to the Wikipedia article on RAID [wikipedia.org].
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430349</parent>
</comment>
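Since the thread keeps tripping over terminology: in the concatenation (SPAN) sense of JBOD discussed above, the only difference from RAID 0 is how a logical block address maps to a disk. A toy sketch of the two mappings (illustrative only, not any controller's actual layout; real striping uses multi-KB chunks, not single blocks):

```python
def concat_map(lba, disk_sizes):
    """JBOD/concatenation (SPAN): fill disk 0 first, then disk 1, and so on.
    Disks of dissimilar size all contribute their full capacity."""
    for disk, size in enumerate(disk_sizes):
        if lba < size:
            return disk, lba
        lba -= size
    raise ValueError("address beyond end of volume")

def raid0_map(lba, n_disks):
    """RAID 0 striping: consecutive blocks rotate across all disks,
    so transfers parallelize, but every disk must supply the same
    capacity (the smallest disk limits them all)."""
    return lba % n_disks, lba // n_disks
```

Either way the data of one volume spans every disk, which is why losing a single drive takes out the whole volume in both layouts; striping just adds parallel throughput on top.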
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431123</id>
	<title>Re:I stopped reading the summary</title>
	<author>The Archon V2.0</author>
	<datestamp>1245673200000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>after the cretin suggested that RAID was some sort of substitute for a backup.</p></div><p>Of course RAID is a substitute for backup. If you ever delete data, accidentally reformat, lose files to a corrupt file system, get infected with a virus, or have any other disaster of the sort, it's obviously something you did or should have anticipated. Thus, data loss is a sign you're inferior and a sinner and the gods of IT are punishing you. Accept their swift and painful lesson with whatever microscopic shred of decorum exists within that rotting, unused thing you call a brain and try to rise ever so slightly above the unenlightened mire of your life so it doesn't happen again.</p><p>Besides, it was probably all porn anyway.</p><p>Signed,<br>The cult of BOFH, flagellation division</p></htmltext>
<tokenext>after the cretin suggested that RAID was some sort of substitute for a backup.Of course RAID is a substitute for backup .
If you ever delete data , accidentally reformat , lose files to a corrupt file system , get infected with a virus , or have any other disaster of the sort , it 's obviously something you did or should have anticipated .
Thus , data loss is a sign you 're inferior and a sinner and the gods of IT are punishing you .
Accept their swift and painful lesson with whatever microscopic shred of decorum exists within that rotting , unused thing you call a brain and try to rise ever so slightly above the unenlightened mire of your life so it does n't happen again .
Besides , it was probably all porn anyway .
Signed , The cult of BOFH , flagellation division</tokentext>
<sentencetext>after the cretin suggested that RAID was some sort of substitute for a backup.Of course RAID is a substitute for backup.
If you ever delete data, accidentally reformat, lose files to a corrupt file system, get infected with a virus, or have any other disaster of the sort, it's obviously something you did or should have anticipated.
Thus, data loss is a sign you're inferior and a sinner and the gods of IT are punishing you.
Accept their swift and painful lesson with whatever microscopic shred of decorum exists within that rotting, unused thing you call a brain and try to rise ever so slightly above the unenlightened mire of your life so it doesn't happen again.
Besides, it was probably all porn anyway.
Signed,
The cult of BOFH, flagellation division
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443827</id>
	<title>Re:I stopped reading the summary</title>
	<author>Fweeky</author>
	<datestamp>1245789720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Backups are ... offline</p></div><p>Shit.  Someone tell all those people selling Amazon S3 based backup services.</p></htmltext>
<tokenext>Backups are ... offlineShit. Someone tell all those people selling Amazon S3 based backup services .</tokentext>
<sentencetext>Backups are ... offlineShit.  Someone tell all those people selling Amazon S3 based backup services.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430203</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430181</id>
	<title>Re:Just nitpicking, but...</title>
	<author>TheRealMindChild</author>
	<datestamp>1245669840000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Except when your backup server uses RAID...</htmltext>
<tokenext>Except when your backup server uses RAID.. .</tokentext>
<sentencetext>Except when your backup server uses RAID...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429689</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433521</id>
	<title>Well, this is fun</title>
	<author>Anonymous</author>
	<datestamp>1245684360000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>OP asks questions about external eSATA enclosures, and the entire first page of responses is an argument over whether RAID is backup... /.</p><p>Here's an ON-TOPIC RESPONSE! Horrors! Take away my EXCELLENT KARMA for this breach of /. protocol!</p><p>I have a client who needed backup for a lot of big video files. We bought an enclosure from PC Pitstop, eight bays each holding 750GB SATA hard drives (1TB wasn't really around last year when we got it) attached to two eSATA cards in the PC controlling the enclosure. We spent a month futzing around trying to get the enclosures to be seen. I forget who made the eSATA controller cards but they sucked - or the enclosure chips sucked.</p><p>So we turned to Burley, the guys who make enclosures mostly for Macs, but they work with PCs, too. These guys know their stuff. They told us not to use OEM hard drives in enclosures because some OEM drives you buy are dumped on the market and don't QUITE work with enclosures. They said use retail hard drives only. They also sell very good controller cards. The enclosure we got from them has worked fine for the last year and a half, until last week when one of the drives went dead - no surprise. They aren't cheap, but they are well made, and I had both email and phone conversations with the Burley folks who provide good support.</p><p>We also in the last couple months bought two MicroNet 4-drive eSATA enclosures with 1TB drives from Newegg for use on a Mac Pro. That was a huge mistake, since the drives simply weren't seen by the Mac at all. Apparently MicroNet didn't bother to test the drivers when Mac OS X 10.5 came out and couldn't be bothered to provide support for that.
So we attached the enclosures to a Windows PC and they work OK, although occasionally one or more of the drives will disappear and generate "drive not ready for access" messages in the Windows event logs.</p><p>Later, we decided to use those enclosures for iSCSI storage served up to the video lab. So I took one of the video lab PCs that were being replaced by iMacs and installed OpenFiler, the open source storage server run on Linux. The latest Rpath Linux kernel saw the drives and the enclosure no problem. I configured the iSCSI setup and everything seems to be working fine. And interestingly, none of the drives have gone offline like they did with Windows - which means it was Windows fault, not the drives. So now I can install an iSCSI client on the two iMacs - except Apple doesn't HAVE a Mac OS X iSCSI client, once again demonstrating how Apple isn't ready for the enterprise, since Linux has had them for years - fortunately there's a free Mac iSCSI client from another company - and serve up 1.8TB of iSCSI storage to each iMac.</p><p>So my advice is: choose your enclosures and the drives in them and the controller cards carefully. Take notice of what Silicon Image chipsets are involved, since SI pretty much dominates the market for those things and they're not the smartest tech company in the world. Make sure you get retail disks for use in the enclosures. Make sure you can return what you bought for refund or replacement because this stuff is not yet "set and forget".</p></htmltext>
<tokenext>OP asks questions about external eSATA enclosures , the entire first page of responses is an argument over whether RAID is backup... /....Here 's an ON-TOPIC RESPONSE !
Horrors ! Take away my EXCELLENT KARMA for this breach of / .
protocol ! I have a client who needed backup for a lot of big video files .
We bought an enclosure from PC Pitstop , eight bays each holding 750GB SATA hard drives ( 1TB was n't really around last year when we got it ) attached to two eSATA cards in the PC controlling the enclosure .
We spent a month futzing around trying to get the enclosures to be seen .
I forget who made the eSATA controller cards but they sucked - or the enclosure chips sucked.So we turned to Burley , the guys who make enclosures for Macs mostly , but they work with PCs , too .
These guys know their stuff .
They told us not to use OEM hard drives in enclosures because some OEM drives you buy are dumped on the market and do n't QUITE work with enclosures .
They said use retail hard drives only .
They also sell very good controller cards .
The enclosure we got from them has worked fine for the last year and a half until last week when one of the drives went dead - no surprise .
They are n't cheap , but they are well made and support is very good .
I had both email and phone conversations with the Burley folks and they provide good support.We also in the last couple months bought two MicroNet 4-drive eSATA enclosures with 1TB drives from Newegg for use on a Mac Pro .
That was a huge mistake , since the drivers simply were n't seen by the Mac at all .
Apparently MicroNet did n't bother to test the drivers when Mac OS X 10.5 came out and could n't be bothered to provide support for that .
So we attached the enclosures to a Windows PC and they work OK , although occasionally one or more of the drives will disappear and generate " drive not ready for access " messages in the Windows event logs.Later , we decided to use those enclosures for iSCSI storage served up to the video lab .
So I took one of the video lab PCs that were being replaced by iMacs and installed OpenFiler , the open source storage server run on Linux .
The latest Rpath Linux kernel saw the drives and the enclosure no problem .
I configured the iSCSI setup and everything seems to be working fine .
And interestingly , none of the drives have gone offline like they did with Windows - which means it was Windows fault , not the drives .
So now I can install an iSCSI client on the two iMacs - except Apple does n't HAVE a Mac OS X iSCSI client , once again demonstrating how Apple is n't ready for the enterprise , since Linux has had them for years - fortunately there 's a free Mac iSCSI client from another company - and serve up 1.8TB of iSCSI storage to each iMac.So my advice is : choose your enclosures and the drives in them and the controller cards carefully .
Take notice of what Silicon Image chipsets are involved , since SI pretty much dominates the market for those things and they 're not the smartest tech company in the world .
Make sure you get retail disks for use in the enclosures .
Make sure you can return what you bought for refund or replacement because this stuff is not yet " set and forget " .</tokentext>
<sentencetext>OP asks questions about external eSATA enclosures, the entire first page of responses is an argument over whether RAID is backup... /....Here's an ON-TOPIC RESPONSE!
Horrors! Take away my EXCELLENT KARMA for this breach of /.
protocol!I have a client who needed backup for a lot of big video files.
We bought an enclosure from PC Pitstop, eight bays each holding 750GB SATA hard drives (1TB wasn't really around last year when we got it) attached to two eSATA cards in the PC controlling the enclosure.
We spent a month futzing around trying to get the enclosures to be seen.
I forget who made the eSATA controller cards but they sucked - or the enclosure chips sucked.So we turned to Burley, the guys who make enclosures for Macs mostly, but they work with PCs, too.
These guys know their stuff.
They told us not to use OEM hard drives in enclosures because some OEM drives you buy are dumped on the market and don't QUITE work with enclosures.
They said use retail hard drives only.
They also sell very good controller cards.
The enclosure we got from them has worked fine for the last year and a half until last week when one of the drives went dead - no surprise.
They aren't cheap, but they are well made and support is very good.
I had both email and phone conversations with the Burley folks and they provide good support.We also in the last couple months bought two MicroNet 4-drive eSATA enclosures with 1TB drives from Newegg for use on a Mac Pro.
That was a huge mistake, since the drivers simply weren't seen by the Mac at all.
Apparently MicroNet didn't bother to test the drivers when Mac OS X 10.5 came out and couldn't be bothered to provide support for that.
So we attached the enclosures to a Windows PC and they work OK, although occasionally one or more of the drives will disappear and generate "drive not ready for access" messages in the Windows event logs.Later, we decided to use those enclosures for iSCSI storage served up to the video lab.
So I took one of the video lab PCs that were being replaced by iMacs and installed OpenFiler, the open source storage server run on Linux.
The latest Rpath Linux kernel saw the drives and the enclosure no problem.
I configured the iSCSI setup and everything seems to be working fine.
And interestingly, none of the drives have gone offline like they did with Windows - which means it was Windows fault, not the drives.
So now I can install an iSCSI client on the two iMacs - except Apple doesn't HAVE a Mac OS X iSCSI client, once again demonstrating how Apple isn't ready for the enterprise, since Linux has had them for years - fortunately there's a free Mac iSCSI client from another company - and serve up 1.8TB of iSCSI storage to each iMac.So my advice is: choose your enclosures and the drives in them and the controller cards carefully.
Take notice of what Silicon Image chipsets are involved, since SI pretty much dominates the market for those things and they're not the smartest tech company in the world.
Make sure you get retail disks for use in the enclosures.
Make sure you can return what you bought for refund or replacement because this stuff is not yet "set and forget".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430099</id>
	<title>Re:I stopped reading the summary</title>
	<author>deebeed</author>
	<datestamp>1245669540000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>1</modscore>
	<htmltext>I started laughing when he said WHS, BACKUP and data... storage oh please ;)

WHS CORRUPTED DATA FOR A WHOLE YEAR AND MS KNEW ABOUT IT. Do not trust that thing.

PLEASE!</htmltext>
<tokenext>I started laughing when he said WHS , BACKUP and data ... storage oh please ; ) WHS CORRUPTED DATA FOR A WHOLE YEAR AND MS KNEW ABOUT IT .
Do not trust that thing .
PLEASE !</tokentext>
<sentencetext>I started laughing when he said WHS, BACKUP and data ... storage oh please ;)

WHS CORRUPTED DATA FOR A WHOLE YEAR AND MS KNEW ABOUT IT.
Do not trust that thing.
PLEASE!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430599</id>
	<title>Re:Wut</title>
	<author>ls671</author>
	<datestamp>1245671280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Thanks for the tip. I have never used external enclosures with "shitty controllers", but I have been tempted by them. I've only used file/backup servers that I would set up myself on computers running Linux.</p><p>Have you actually tried any of these external enclosures with "shitty controllers"?</p><p>Details on problems would be fun to hear about...</p></htmltext>
<tokenext>Thanks for the tip , I have never used external enclosures with " shitty controllers " but I have been tempted by them .
I 've only used file/backups servers that I would setup myself with computers running Linux.Have you actually tried any of these external enclosures with " shitty controllers " ? Details on problems would be fun to hear about.. .</tokentext>
<sentencetext>Thanks for the tip, I have never used external enclosures with "shitty controllers" but I have been tempted by them.
I've  only used file/backups servers that I would setup myself with computers running Linux.Have you actually tried any of these external enclosures with "shitty controllers" ?Details on problems would be fun to hear about...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429815</id>
	<title>It depends</title>
	<author>Anonymous</author>
	<datestamp>1245668400000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Do you want an erection with that backup?</p></htmltext>
<tokenext>Do you want an erection with that backup ?</tokentext>
<sentencetext>Do you want an erection with that backup?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429873</id>
	<title>Posting in Spanish Now?</title>
	<author>Anonymous</author>
	<datestamp>1245668580000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Where is JBOD?  How the hell should I know?</p></htmltext>
<tokenext>Where is JBOD ?
How the hell should I know ?</tokentext>
<sentencetext>Where is JBOD?
How the hell should I know?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430551</id>
	<title>Be careful about your hardware and software</title>
	<author>ballyhoo</author>
	<datestamp>1245671160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you're going to do this, you really need to be very careful about your choice of hardware and software.  You need to avoid anything which isn't AHCI 1.3 compliant, as previous versions of the AHCI specification defined only a single FIS register per port, which effectively means that the controller card has to serialise all commands to the port multiplier.  So even if you've got a port multiplier with a pile of separate disks, your throughput is going to be trash because the host operating system can only talk to a single disk at any one time.  AHCI 1.3 fixes this and allows the host operating system to talk to multiple drives simultaneously.</p><p>You also need to be careful in your choice of software driver and operating system.  Most of the free Unix clones have some form of support for port multipliers these days, but this support is not really optimised towards high performance from sensible hardware yet.  NCQ (native command queueing) is really important for performance here. I'll guess that with Windows drivers you just won't know in advance, because the drivers aren't open source and you just can't tell what's going on inside them.</p><p>As previous people mentioned, it's important to configure multiple disks like this in some form of redundant mode.  If you have a single volume spread across 5 disks, your risk of failure is going to be roughly 5 times that of a single disk, and when it fails you lose 5 disks' worth of data instead of one.</p></htmltext>
<tokenext>If you 're going to do this , you really need to be very careful about your choice of hardware and software .
You need to avoid anything which is n't AHCI 1.3 compliant , as previous versions of the AHCI specification defined only a single FIS register per port , which effectively means that the controller card has to serialise all commands to the port multiplier .
So even if you 've got a port multiplier with a pile of separate disks , your throughput is going to be trash because the host operating system can only talk to a single disk at any one time .
AHCI 1.3 fixes this and allows the host operating system to talk to multiple drives simultaneously.You also need to be careful in your choice of software driver and operating system .
Most of the free unix clones have some form of support for port multipliers these days , but this support is not really optimised towards high performance from sensible hardware yet .
NCQ ( native command queueing ) is really important for performance here .
I 'll guess that with Windows drivers , you just wo n't know in advance , because the drivers are n't open source and you just ca n't tell what 's going on inside them.As previous people mentioned , it 's important to configure multiple disks like this in some form of redundant mode .
If you have a single volume spread across 5 disks , your risk of failure is going to be 5 times more likely than for a single disk , and the consequences of losing that data is 5 times worse than that of a single disk .</tokentext>
<sentencetext>If you're going to do this, you really need to be very careful about your choice of hardware and software.
You need to avoid anything which isn't AHCI 1.3 compliant, as previous versions of the AHCI specification defined only a single FIS register per port, which effectively means that the controller card has to serialise all commands to the port multiplier.
So even if you've got a port multiplier with a pile of separate disks, your throughput is going to be trash because the host operating system can only talk to a single disk at any one time.
AHCI 1.3 fixes this and allows the host operating system to talk to multiple drives simultaneously.You also need to be careful in your choice of software driver and operating system.
Most of the free unix clones have some form of support for port multipliers these days, but this support is not really optimised towards high performance from sensible hardware yet.
NCQ (native command queueing) is really important for performance here.
I'll guess that with Windows drivers, you just won't know in advance, because the drivers aren't open source and you just can't tell what's going on inside them.As previous people mentioned, it's important to configure multiple disks like this in some form of redundant mode.
If you have a single volume spread across 5 disks, your risk of failure is going to be 5 times more likely than for a single disk, and the consequences of losing that data is 5 times worse than that of a single disk.</sentencetext>
</comment>
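The failure arithmetic in the comment above can be sketched numerically. This is an editorial illustration, not from the thread; the 3% annual failure rate (AFR) per disk is an assumed figure chosen only to make the point.

```python
# Sketch of the failure math above: a single volume spanned across n disks
# (plain JBOD/striping, no redundancy) is lost when ANY one disk fails.
# afr = 0.03 is an assumed illustrative per-disk annual failure rate.
afr = 0.03
n = 5

# P(at least one of n independent disks fails in a year)
p_loss = 1 - (1 - afr) ** n
print(f"1 disk: {afr:.4f}   {n}-disk volume: {p_loss:.4f}")

# For small AFR this is close to n * afr, i.e. the "5 times more likely" claim,
# and is always slightly below it.
assert p_loss < n * afr
```

For a 3% AFR the 5-disk volume works out to about a 14.1% annual chance of loss, just under the 15% the linear "5 times" rule of thumb suggests.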
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429909</id>
	<title>lime-technology.com</title>
	<author>Anonymous</author>
	<datestamp>1245668760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>lime-technology.com</p></htmltext>
<tokenext>lime-technology.com</tokentext>
<sentencetext>lime-technology.com</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431849</id>
	<title>Re:I stopped reading the summary</title>
	<author>Anonymous</author>
	<datestamp>1245675960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Raid1 doesn't recover from the "oops I didn't mean to remove that file" error.  Therefore, not a backup.</p></htmltext>
<tokenext>Raid1 does n't recover from the " oops I did n't mean to remove that file " error .
Therefore , not a backup .</tokentext>
<sentencetext>Raid1 doesn't recover from the "oops I didn't mean to remove that file" error.
Therefore, not a backup.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429677</id>
	<title>Addonics Storage Tower</title>
	<author>Anonymous</author>
	<datestamp>1245667860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is my #1 choice:</p><p>http://www.addonics.com/products/raid_system/ast4.asp</p></htmltext>
<tokenext>This is my # 1 choicehttp : //www.addonics.com/products/raid \ _system/ast4.asp</tokentext>
<sentencetext>This is my #1 choicehttp://www.addonics.com/products/raid\_system/ast4.asp</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429609</id>
	<title>first post?</title>
	<author>Anonymous</author>
	<datestamp>1245667560000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>yes?</htmltext>
<tokenext>yes ?</tokentext>
<sentencetext>yes?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28437323</id>
	<title>Re:I stopped reading the summary</title>
	<author>MistrBlank</author>
	<datestamp>1245763440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Damnit, there's no way to mod for "ignorant" or "misinformation"</p></htmltext>
<tokenext>Damnit , there 's no way to mod for " ignorant " or " misinformation "</tokentext>
<sentencetext>Damnit, there's no way to mod for "ignorant" or "misinformation"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443303</id>
	<title>Re:I stopped reading the summary</title>
	<author>PalmKiller</author>
	<datestamp>1245787860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In related news, a geek was crying all in his cheerios this morning when his raid controller went apeshit during the wee hours and wrote garbage over his only backup.</htmltext>
<tokenext>In related news , a geek was crying all in his cheerios this morning when his raid controller went apeshit during the wee hours and wrote garbage over his only backup .</tokentext>
<sentencetext>In related news, a geek was crying all in his cheerios this morning when his raid controller went apeshit during the wee hours and wrote garbage over his only backup.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431237</id>
	<title>Old AT (pre-ATX) case</title>
	<author>metallurge</author>
	<datestamp>1245673500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>The old AT cases had a power supply with a mechanical power switch, rather than a soft-switch like ATX power supplies. Old AT cases and power supplies should be just about free, just strip out the old motherboard and you have a decent, inexpensive solution. Like someone else said, just get long SATA cables, and run them directly to the drives. You can bundle them together with zip ties periodically down the length, or use wire loom if you want something a bit neater. You may need molex-to-SATA power adapters, but those are very cheap and reliable. If you pick the right case, it will have plenty of drive bays and cooling capacity.
<br> <br>
Or, you can use one of those adapters that mount 4 3.5" drives in 3 5.25" bays if you need even more space and cooling capacity beyond what is already in your case. Even a small mid-tower case should support at least 6 drives using one of these.
<br> <br>
Pick up a spare AT power supply while you are at it, and you will have a very reliable, well-cooled, very cheap solution.</htmltext>
<tokenext>The old AT cases had a power supply with a mechanical power switch , rather than a soft-switch like ATX power supplies .
Old AT cases and power supplies should be just about free , just strip out the old motherboard and you have a decent , inexpensive solution .
Like someone else said , just get long SATA cables , and run them directly to the drives .
You can bundle them together with zip ties periodically down the length , or use wire loom if you want something a bit neater .
You may need molex-to-SATA power adapters , but those are very cheap and reliable .
If you pick the right case , it will have plenty of drive bays and cooling capacity .
Or , you can use one of those 4 \ _3.5 " \ _drives-in-3 \ _5.25 " \ _bays solutions if you need even more space and cooling capacity beyond what is already in your case .
Even a small mid-tower case should support at least 6 drives using one of these .
Pick up a spare AT power supply while you are at it , and you will have a very reliable , well-cooled , very cheap solution .</tokentext>
<sentencetext>The old AT cases had a power supply with a mechanical power switch, rather than a soft-switch like ATX power supplies.
Old AT cases and power supplies should be just about free, just strip out the old motherboard and you have a decent, inexpensive solution.
Like someone else said, just get long SATA cables, and run them directly to the drives.
You can bundle them together with zip ties periodically down the length, or use wire loom if you want something a bit neater.
You may need molex-to-SATA power adapters, but those are very cheap and reliable.
If you pick the right case, it will have plenty of drive bays and cooling capacity.
Or, you can use one of those 4\_3.5"\_drives-in-3\_5.25"\_bays solutions if you need even more space and cooling capacity beyond what is already in your case.
Even a small mid-tower case should support at least 6 drives using one of these.
Pick up a spare AT power supply while you are at it, and you will have a very reliable, well-cooled, very cheap solution.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429653</id>
	<title>ESata</title>
	<author>Anonymous</author>
	<datestamp>1245667740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>JailBait Of Death?</p></htmltext>
<tokenext>JailBait Of Death ?</tokentext>
<sentencetext>JailBait Of Death?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429667</id>
	<title>Not quite what you want...</title>
	<author>Facegarden</author>
	<datestamp>1245667800000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>This isn't quite what you want, but I have a $30 six-drive caddy (with 4 drives atm) and a $70 four-port internal SATA card. I just run long SATA cables to it, but it was cheaper than any single-cable solution I found, so that may not be a bad way to go.</p><p>One thing I noticed, though, was that I actually have enough room for all 9 of my hard drives inside my case! I may migrate them in.</p><p>And yes, before you say it, that is certainly quite a bit of porn!<br>-Taylor</p></htmltext>
<tokenext>This is n't quite what you want , but I have a $ 30 6 drive caddy ( with 4 drives atm ) and a $ 70 4 port internal SATA card .
I just run long SATA cables to it , but it was cheaper than any single-cable solution i found , so that may not be a bad way to go.One thing I noticed though was that I actually have enough room for all 9 of my hard drives inside my case !
I may migrate them in.And yes , before you say it , that is certainly quite a bit of porn ! -Taylor</tokentext>
<sentencetext>This isn't quite what you want, but I have a $30 6 drive caddy (with 4 drives atm) and a $70 4 port internal SATA card.
I just run long SATA cables to it, but it was cheaper than any single-cable solution i found, so that may not be a bad way to go.One thing I noticed though was that I actually have enough room for all 9 of my hard drives inside my case!
I may migrate them in.And yes, before you say it, that is certainly quite a bit of porn!-Taylor</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430349</id>
	<title>Re:Why?</title>
	<author>caseih</author>
	<datestamp>1245670440000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>No, that's not correct.  JBOD is just that.  Just a bunch of disks.  It has nothing to do with redundancy (or lack of redundancy).  What you do with them is completely up to you.  You can implement RAID-Z with them on Solaris (which is actually faster on my enterprise-class disk array than the built-in hardware RAID-6!), Linux RAID-5, RAID-10, or whatever.  Except for issues of battery-backed caching, I have come to the opinion that for most low- to middle-end storage needs, a large JBOD and software RAID is the way to go.</p></htmltext>
<tokenext>No that 's not correct .
JBOD is just that .
Just a bunch of disks .
Has nothing to do with redundancy ( or lack of redundancy ) .
What you do with them is completely up to you .
You can implement a RAID-Z with them on solaris ( which is actually faster on my Enterprise-class disk array than the built-in RAID-6 in hardware !
) , Linux RAID-5 , RAID-10 , or whatever .
Except for issues of battery-backed caching , I have come to the opinion that for most low- to middle-end storage needs , a large JBOD and software RAID is the way to go .</tokentext>
<sentencetext>No that's not correct.
JBOD is just that.
Just a bunch of disks.
Has nothing to do with redundancy (or lack of redundancy).
What you do with them is completely up to you.
You can implement a RAID-Z with them on solaris (which is actually faster on my Enterprise-class disk array than the built-in RAID-6 in hardware!
), Linux RAID-5, RAID-10, or whatever.
Except for issues of battery-backed caching, I have come to the opinion that for most low- to middle-end storage needs, a large JBOD and software RAID is the way to go.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855</parent>
</comment>
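The software-RAID recommendation above rests on parity. A minimal sketch of the XOR-parity idea behind RAID-5 (an editorial illustration under simplified assumptions, not code from the thread: real implementations rotate parity across disks and work at block granularity):

```python
# Minimal sketch of the XOR-parity idea behind RAID-5 / software RAID layered
# over a JBOD: three data "disks" plus one parity block, and any single lost
# block is recoverable by XOR-ing all the surviving blocks together.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data disks
parity = xor_blocks(data)            # parity stripe on the fourth disk

# Simulate losing disk 1 and rebuilding it from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print("rebuilt:", rebuilt)  # → rebuilt: b'BBBB'
```

The same XOR property is why a rebuild must read every surviving disk: losing a second disk before the rebuild finishes loses the array, which is the usual argument for RAID-6 on large sets.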
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28434979</id>
	<title>Re:I stopped reading the summary</title>
	<author>funwithBSD</author>
	<datestamp>1245694320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I assumed he was asking "What do I use for external drives FOR a backup."</p><p>In which case he is a moron for attaching it directly to the system. Or for it being in the same building, if a backup is what he is really after.</p><p>If it is attached to the same system, or in the same building/house, what you have is a *copy* of your data, not a backup.</p></htmltext>
<tokenext>I assumed he was asking " What do I use for external drives FOR a backup .
" In which case he is a moron for attaching it directly to the system .
Or for it being in the same building if that is what he is really after , a backup.If it is attached to the same system , or in the building/house , what you have is a * copy * of your data , not a backup .
 </tokentext>
<sentencetext>I assumed he was asking "What do I use for external drives FOR a backup.
"In which case he is a moron for attaching it directly to the system.
Or for it being in the same building if that is what he is really after, a backup.If it is attached to the same system, or in the building/house, what you have is a *copy* of your data, not a backup.
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28470157</id>
	<title>the user friendly solution</title>
	<author>edrawr</author>
	<datestamp>1245959880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Flash drives, lots of flash drives.

Drag and Drop. Come on...</htmltext>
<tokenext>Flash drives , lots of flash drives .
Drag and Drop .
Come on.. .</tokentext>
<sentencetext>Flash drives, lots of flash drives.
Drag and Drop.
Come on...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429813</id>
	<title>I use the Mediasonic ProBox BUT...</title>
	<author>rei_slashdot</author>
	<datestamp>1245668400000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext>...I can't get the manufacturer to acknowledge or confirm that there is a problem when copying between hard drives in the same enclosure. Windows hangs, and eventually the Event Logs show "device is not connected" or some similar issue. Copying between a drive in the enclosure and the motherboard's SATA drives works fine, but copying between drives inside the enclosure always hangs, times out, or becomes inaccessible after a random amount of transfer.

It's surprisingly well put together and well priced without looking tacky, but this copying issue between drives inside it is a pain. Transfers between drives inside it seem to work without a hitch over slower USB.

<a href="http://mediasonicinc.com/store/product_info.php?products_id=150" title="mediasonicinc.com" rel="nofollow">http://mediasonicinc.com/store/product_info.php?products_id=150</a> [mediasonicinc.com]

It syncs with your PC's power, so if the PC goes off, the box goes to sleep and wakes up when the PC's power is restored.</htmltext>
<tokenext>...I ca n't get the manufacturer to acknowledge or confirm that there is problem when copying between hard drives in the same enclosure .
Windows hangs and eventually the Event Logs show " device is not connected " or some sort of issue .
Copying between drive and the motherboard 's SATA drives works fine but it always hangs/times-out/becomes inaccessible after a random amount of transfer .
It 's surprisingly well put-together without looking tacky and well-priced but this copying issue between drives inside it is a pain .
Transfer between drives inside it seem to work without a hitch using slower USB .
http : //mediasonicinc.com/store/product \ _info.php ? products \ _id = 150 [ mediasonicinc.com ] It syncs with your PC 's power so if the PC goes off , the box goes to sleep and wakes up when the PC power is restored .</tokentext>
<sentencetext>...I can't get the manufacturer to acknowledge or confirm that there is a problem when copying between hard drives in the same enclosure.
Windows hangs, and eventually the Event Logs show "device is not connected" or a similar error.
Copying between an enclosure drive and the motherboard's SATA drives works fine, but copying between drives inside the enclosure always hangs, times out, or becomes inaccessible after a random amount of transfer.
It's surprisingly well put together without looking tacky, and well priced, but this copying issue between drives inside it is a pain.
Transfers between the drives inside it seem to work without a hitch over the slower USB connection.
http://mediasonicinc.com/store/product\_info.php?products\_id=150 [mediasonicinc.com]

It syncs with your PC's power so if the PC goes off, the box goes to sleep and wakes up when the PC power is restored.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28432759
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430965
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430927
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429871
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28444257
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430349
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431849
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429689
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430599
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28432993
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431237
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430727
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430203
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430921
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430917
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429879
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429933
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429683
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430307
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431155
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28437323
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433169
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430091
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430159
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28591211
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443827
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430203
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28434215
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430349
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28434979
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28456347
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429891
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430897
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28435513
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431237
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429875
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430099
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433527
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431847
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430789
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28444293
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430983
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431123
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_22_2134225_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443303
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429895
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28456347
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430965
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430921
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429683
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429933
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429853
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433127
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429651
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28432759
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429753
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430927
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430897
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429879
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430203
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430727
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443827
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28591211
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429871
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28443303
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431849
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430099
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430159
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430091
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433169
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431123
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429891
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28434979
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430307
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429875
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28437323
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28444293
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431237
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28435513
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28432993
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429667
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429855
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431155
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430917
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430349
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28444257
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28434215
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431915
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429755
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28458851
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429689
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430181
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28429835
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430599
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430789
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28431847
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433527
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430983
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28433521
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430269
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_22_2134225.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_22_2134225.28430551
</commentlist>
</conversation>
