<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_06_1650232</id>
	<title>Wear Leveling, RAID Can Wipe Out SSD Advantage</title>
	<author>Soulskill</author>
	<datestamp>1267896720000</datestamp>
	<htmltext>storagedude writes <i>"This article discusses using solid state disks in enterprise storage networks. A couple of problems noted by the author: wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive, and <a href="http://www.enterprisestorageforum.com/technology/article.php/3869031">using SSDs with RAID controllers brings up its own set of problems</a>. 'Even the highest-performance RAID controllers today cannot support the IOPS  of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives. In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray. Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost.'"</i></htmltext>
<tokentext>storagedude writes " This article discusses using solid state disks in enterprise storage networks .
A couple of problems noted by the author : wear leveling can eat up most of a drive 's bandwidth and make write performance no faster than a hard drive , and using SSDs with RAID controllers brings up its own set of problems .
'Even the highest-performance RAID controllers today can not support the IOPS of just three of the fastest SSDs .
I am not talking about a disk tray ; I am talking about the whole RAID controller .
If you want full performance of expensive SSDs , you need to take your $ 50,000 or $ 100,000 RAID controller and not overpopulate it with too many drives .
In fact , most vendors today have between 16 and 60 drives in a disk tray and you can not even populate a whole tray .
Add to this that some RAID vendor 's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost .
' "</tokentext>
<sentencetext>storagedude writes "This article discusses using solid state disks in enterprise storage networks.
A couple of problems noted by the author: wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive, and using SSDs with RAID controllers brings up its own set of problems.
'Even the highest-performance RAID controllers today cannot support the IOPS  of just three of the fastest SSDs.
I am not talking about a disk tray; I am talking about the whole RAID controller.
If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives.
In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray.
Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost.
'"</sentencetext>
</article>
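A rough back-of-the-envelope sketch, in Python, of the arithmetic behind the summary's claim that a few fast SSDs can swamp a RAID controller sized for 16 to 60 disks. Every figure below is an illustrative assumption, not a vendor specification.

```python
# Assumed figures, chosen only to illustrate the scale of the mismatch.
CONTROLLER_IOPS_BUDGET = 100_000   # assumed peak IOPS a high-end RAID controller can process
SSD_IOPS = 40_000                  # assumed random IOPS of one fast (ca. 2010) SSD
HDD_IOPS = 180                     # assumed random IOPS of one 15K RPM hard drive

print(f"SSDs to saturate controller: {CONTROLLER_IOPS_BUDGET / SSD_IOPS:.1f}")   # ~2.5
print(f"HDDs to saturate controller: {CONTROLLER_IOPS_BUDGET / HDD_IOPS:.0f}")   # ~556
```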
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381718</id>
	<title>Oops.  I forgot to plan the array</title>
	<author>Anonymous</author>
	<datestamp>1267900980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>He's got a point - the embedded RAID controllers in boxes like the HP MSA70 just aren't up to the challenge of sustaining the IOPS of SSDs.  They weren't designed for that, so you can't get a million I/Os per second by accident.  You have to know what you're doing and build out an architecture that can support it.
</p><p>OTOH: Who pays 100K for one of those?  That has to be including the Enterprise 120GB SSDs at $4k each, right?
</p><p>What 200k IOPs <a href="http://www.techpowerup.com/img/10-03-02/patriot2.jpg" title="techpowerup.com" rel="nofollow">might look like</a> [techpowerup.com] (not mine).</p></htmltext>
<tokentext>He 's got a point - the embedded RAID controllers in boxes like the HP MSA70 just are n't up to the challenge of sustaining the IOPS of SSDs .
They were n't designed for that , so you ca n't get a million I/Os per second by accident .
You have to know what you 're doing and build out an architecture that can support it .
OTOH : Who pays 100K for one of those ?
That has to be including the Enterprise 120GB SSDs at $ 4k each , right ?
What 200k IOPs might look like [ techpowerup.com ] ( not mine ) .</tokentext>
<sentencetext>He's got a point - the embedded RAID controllers in boxes like the HP MSA70 just aren't up to the challenge of sustaining the IOPS of SSDs.
They weren't designed for that, so you can't get a million I/Os per second by accident.
You have to know what you're doing and build out an architecture that can support it.
OTOH: Who pays 100K for one of those?
That has to be including the Enterprise 120GB SSDs at $4k each, right?
What 200k IOPs might look like [techpowerup.com] (not mine).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382656</id>
	<title>Re:Duh</title>
	<author>asdf7890</author>
	<datestamp>1267907880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Yes, but the word inexpensive is being used in a relative sense here - the idea being that (ignoring RAID0 which doesn't actually match the definition at all due to not offering any redundancy) a full set of drives including a couple of spares would cost less than any single device that offered the same capacity and long-term reliability. And the expense isn't just about the cost of the physical drive - if you ask a manufacturer to <i>guarantee</i> a high level of reliability they will in turn ask a higher price for the device (both to cover R&amp;D on making it more reliable and to cover insurance in case it fails too early and you require replacement and/or compensation). Even if the individual devices in the array are very expensive, they are probably not so compared to any single device that claims the same capacity and longevity properties.</htmltext>
<tokentext>Yes , but the word inexpensive is being used in a relative sense here - the idea being that ( ignoring RAID0 which does n't actually match the definition at all due to not offering any redundancy ) a full set of drives including a couple of spares would cost less than any single device that offered the same capacity and long-term reliability .
And the expense is n't just about the cost of the physical drive - if you ask a manufacturer to guarantee a high level of reliability they will in turn ask a higher price for the device ( both to cover R&amp;D on making it more reliable and to cover insurance in case it fails too early and you require replacement and/or compensation ) .
Even if the individual devices in the array are very expensive , they are probably not so compared to any single device that claims the same capacity and longevity properties .</tokentext>
<sentencetext>Yes, but the word inexpensive is being used in a relative sense here - the idea being that (ignoring RAID0 which doesn't actually match the definition at all due to not offering any redundancy) a full set of drives including a couple of spares would cost less than any single device that offered the same capacity and long-term reliability.
And the expense isn't just about the cost of the physical drive - if you ask a manufacturer to guarantee a high level of reliability they will in turn ask a higher price for the device (both to cover R&amp;D on making it more reliable and to cover insurance in case it fails too early and you require replacement and/or compensation).
Even if the individual devices in the array are very expensive, they are probably not so compared to any single device that claims the same capacity and longevity properties.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381690</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381690</id>
	<title>Duh</title>
	<author>Anonymous</author>
	<datestamp>1267900740000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>RAID means "Redundant Array of <b>Inexpensive</b> Disks".</p></htmltext>
<tokentext>RAID means " Redundant Array of Inexpensive Disks " .</tokentext>
<sentencetext>RAID means "Redundant Array of Inexpensive Disks".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382030</id>
	<title>Re:Little Flawed study.</title>
	<author>Anonymous</author>
	<datestamp>1267904160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>However subsystems are going to be designed to work with SSD that has much higher access times.</p></div><p>However subsystems are going to be designed to work with SSD that has much <b> <i>lower</i> </b> access times.<br> <br>There, fixed that for you. It is actually amazing how many times that type of error is made when people are typing. Things like, "this machine has much higher boot times!" when talking about a faster machine.</p></div>
	</htmltext>
<tokentext>However subsystems are going to be designed to work with SSD that has much higher access times .
However subsystems are going to be designed to work with SSD that has much lower access times .
There , fixed that for you .
It is actually amazing how many times that type of error is made when people are typing .
Things like , " this machine has much higher boot times !
" when talking about a faster machine .</tokentext>
<sentencetext>However subsystems are going to be designed to work with SSD that has much higher access times.
However subsystems are going to be designed to work with SSD that has much lower access times.
There, fixed that for you.
It is actually amazing how many times that type of error is made when people are typing.
Things like, "this machine has much higher boot times!
" when talking about a faster machine.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382214</id>
	<title>Re:This study seems deeply confused in a specific</title>
	<author>LoRdTAW</author>
	<datestamp>1267905300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>All I want to know is who is making RAID cards that cost $50,000 to $100,000? Or is he describing a complete system and calling it a RAID card?</p></htmltext>
<tokentext>All I want to know is who is making RAID cards that cost $ 50,000 to $ 100,000 ?
Or is he describing a complete system and calling it a RAID card ?</tokentext>
<sentencetext>All I want to know is who is making RAID cards that cost $50,000 to $100,000?
Or is he describing a complete system and calling it a RAID card?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381800</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381972</id>
	<title>Raid controllers obsolete?</title>
	<author>vlm</author>
	<datestamp>1267903560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs.</p></div><p>In the old days, raid controllers were faster than doing it in software.</p><p>Now a days, aren't software controllers faster than hardware?  So, just do software raid?  In my very unscientific tests of SSDs I have not been able to max out the server CPU when running bonnie++ so I guess software can handle it better?</p><p>Even worse, it seems difficult to purchase "real hardware raid" cards since marketing departments have flooded the market with essentially multiport win-SATA cards that require weird drivers because they're non-standard?</p></div>
	</htmltext>
<tokentext>Even the highest-performance RAID controllers today can not support the IOPS of just three of the fastest SSDs .
In the old days , raid controllers were faster than doing it in software .
Nowadays , are n't software controllers faster than hardware ?
So , just do software raid ?
In my very unscientific tests of SSDs I have not been able to max out the server CPU when running bonnie++ so I guess software can handle it better ?
Even worse , it seems difficult to purchase " real hardware raid " cards since marketing departments have flooded the market with essentially multiport win-SATA cards that require weird drivers because they 're non-standard ?</tokentext>
<sentencetext>Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs.
In the old days, raid controllers were faster than doing it in software.
Nowadays, aren't software controllers faster than hardware?
So, just do software raid?
In my very unscientific tests of SSDs I have not been able to max out the server CPU when running bonnie++ so I guess software can handle it better?
Even worse, it seems difficult to purchase "real hardware raid" cards since marketing departments have flooded the market with essentially multiport win-SATA cards that require weird drivers because they're non-standard?
	</sentencetext>
</comment>
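A minimal sketch of the bookkeeping a software RAID-0 (striping) layer performs in place of a hardware controller: routing each logical block to a member drive. The stripe size and drive count are assumed values, and this is not the code of any real md/RAID implementation; it only illustrates why this work is cheap for a host CPU.

```python
STRIPE_BLOCKS = 128          # assumed stripe size, in blocks
DRIVES = 4                   # assumed number of member SSDs

def map_block(logical_block: int) -> tuple[int, int]:
    """Return (drive index, physical block on that drive) for a logical block."""
    stripe, offset = divmod(logical_block, STRIPE_BLOCKS)
    drive = stripe % DRIVES                            # stripes rotate across drives
    physical = (stripe // DRIVES) * STRIPE_BLOCKS + offset
    return drive, physical

print(map_block(0))      # (0, 0)
print(map_block(128))    # (1, 0): second stripe lands on the second drive
print(map_block(640))    # (1, 128): sixth stripe wraps back to drive 1
```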
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381740</id>
	<title>Only A Matter of Time</title>
	<author>WrongSizeGlass</author>
	<datestamp>1267901160000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext>Scaling works both ways. Often technology that benefits larger installations or enterprise environments gets scaled down to the desktop after being fine tuned. It's not uncommon for technology that benefits desktop or smaller implementations to scale up to eventually benefit the 'big boys'. This is simply a case of the laptop getting the technology first as it was the most logical place for it to get traction. Give SSD's a little time and they'll work their way into RAID as well as other server solutions.</htmltext>
<tokentext>Scaling works both ways .
Often technology that benefits larger installations or enterprise environments gets scaled down to the desktop after being fine tuned .
It 's not uncommon for technology that benefits desktop or smaller implementations to scale up to eventually benefit the 'big boys' .
This is simply a case of the laptop getting the technology first as it was the most logical place for it to get traction .
Give SSD 's a little time and they 'll work their way into RAID as well as other server solutions .</tokentext>
<sentencetext>Scaling works both ways.
Often technology that benefits larger installations or enterprise environments gets scaled down to the desktop after being fine tuned.
It's not uncommon for technology that benefits desktop or smaller implementations to scale up to eventually benefit the 'big boys'.
This is simply a case of the laptop getting the technology first as it was the most logical place for it to get traction.
Give SSD's a little time and they'll work their way into RAID as well as other server solutions.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381752</id>
	<title>This is a rhetorical question, right?</title>
	<author>Anonymous</author>
	<datestamp>1267901280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>:)</p></htmltext>
<tokentext>: )</tokentext>
<sentencetext>:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383194</id>
	<title>Re:Correction:</title>
	<author>BikeHelmet</author>
	<datestamp>1267868100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I agree. You shouldn't be using consumer grade SSDs for servers - unless it's a game server or something. (Ex: TF2)</p><p>Do you know why RE (RAID Edition)  HDDs exist? They strip out all the write recovery and stuff, which could mess up speeds, IOPS, and seek times, and instead streamline the drives for performance predictability. That makes it far easier for RAID controllers to manage dozens of them.</p><p>SSDs have a similar thing going. You're an enterprise and need massive IOPS? Buy enterprise-level SSDs - like the ioDrive, with built-in RAID capabilities, piped right through the PCIe bus. Magnitudes faster than a consumer grade SSD, and magnitudes more efficient. The IOPS you get vs CPU usage is amazing. Toss a couple together, and you can literally get hundreds of thousands of IOPS with gigabytes per second of read/write bandwidth. It'll hammer your CPU, but CPUs are cheap compared to these RAID cards.</p><p>You're an enterprise. Buy enterprise level stuff. Don't just go with "Intel" because you heard Intel SSDs are the fastest. They aren't. They're just the best affordable ones for us little guys.</p></htmltext>
<tokentext>I agree .
You should n't be using consumer grade SSDs for servers - unless it 's a game server or something .
( Ex : TF2 )
Do you know why RE ( RAID Edition ) HDDs exist ?
They strip out all the write recovery and stuff , which could mess up speeds , IOPS , and seek times , and instead streamline the drives for performance predictability .
That makes it far easier for RAID controllers to manage dozens of them .
SSDs have a similar thing going .
You 're an enterprise and need massive IOPS ?
Buy enterprise-level SSDs - like the ioDrive , with built-in RAID capabilities , piped right through the PCIe bus .
Magnitudes faster than a consumer grade SSD , and magnitudes more efficient .
The IOPS you get vs CPU usage is amazing .
Toss a couple together , and you can literally get hundreds of thousands of IOPS with gigabytes per second of read/write bandwidth .
It 'll hammer your CPU , but CPUs are cheap compared to these RAID cards .
You 're an enterprise .
Buy enterprise level stuff .
Do n't just go with " Intel " because you heard Intel SSDs are the fastest .
They are n't .
They 're just the best affordable ones for us little guys .</tokentext>
<sentencetext>I agree.
You shouldn't be using consumer grade SSDs for servers - unless it's a game server or something.
(Ex: TF2)
Do you know why RE (RAID Edition) HDDs exist?
They strip out all the write recovery and stuff, which could mess up speeds, IOPS, and seek times, and instead streamline the drives for performance predictability.
That makes it far easier for RAID controllers to manage dozens of them.
SSDs have a similar thing going.
You're an enterprise and need massive IOPS?
Buy enterprise-level SSDs - like the ioDrive, with built-in RAID capabilities, piped right through the PCIe bus.
Magnitudes faster than a consumer grade SSD, and magnitudes more efficient.
The IOPS you get vs CPU usage is amazing.
Toss a couple together, and you can literally get hundreds of thousands of IOPS with gigabytes per second of read/write bandwidth.
It'll hammer your CPU, but CPUs are cheap compared to these RAID cards.
You're an enterprise.
Buy enterprise level stuff.
Don't just go with "Intel" because you heard Intel SSDs are the fastest.
They aren't.
They're just the best affordable ones for us little guys.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381694</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383260</id>
	<title>Re:Raid controllers obsolete?</title>
	<author>rrohbeck</author>
	<datestamp>1267868580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Just my thought. Hardware RAID adds latency and limits throughput if you use SSDs. On the other hand, server CPUs often have cycles to spare and are much faster than the CPU on the RAID controller. I've yet to see the dual quad cores with hyperthreading going over 40% in our servers.<br>Now all we need is a VFS layer that smartly decides where to store files and/or uses a fast disk as a cache to a slower disk. Like a unionfs with automatic migration?</p></htmltext>
<tokentext>Just my thought .
Hardware RAID adds latency and limits throughput if you use SSDs .
On the other hand , server CPUs often have cycles to spare and are much faster than the CPU on the RAID controller .
I 've yet to see the dual quad cores with hyperthreading going over 40 % in our servers .
Now all we need is a VFS layer that smartly decides where to store files and/or uses a fast disk as a cache to a slower disk .
Like a unionfs with automatic migration ?</tokentext>
<sentencetext>Just my thought.
Hardware RAID adds latency and limits throughput if you use SSDs.
On the other hand, server CPUs often have cycles to spare and are much faster than the CPU on the RAID controller.
I've yet to see the dual quad cores with hyperthreading going over 40% in our servers.
Now all we need is a VFS layer that smartly decides where to store files and/or uses a fast disk as a cache to a slower disk.
Like a unionfs with automatic migration?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381972</parent>
</comment>
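A toy sketch of the placement policy this comment wishes for: a layer that keeps small, hot data on SSD and lets cold bulk data live on spinning disk. The thresholds are invented for illustration; real tiering (or an SSD read cache such as ZFS's L2ARC) is far more involved.

```python
SSD_MAX_FILE_BYTES = 64 * 1024 * 1024   # assumed cutoff: big files go to HDD
HOT_ACCESSES_PER_DAY = 10               # assumed cutoff for "hot" data

def choose_tier(size_bytes: int, accesses_per_day: float) -> str:
    """Decide where a file should live under this toy hot/cold policy."""
    if accesses_per_day >= HOT_ACCESSES_PER_DAY and size_bytes <= SSD_MAX_FILE_BYTES:
        return "ssd"
    return "hdd"

print(choose_tier(4096, 500.0))         # 'ssd': small and hot
print(choose_tier(10 * 1024**3, 0.1))   # 'hdd': large and cold
```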
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381800</id>
	<title>This study seems deeply confused in a specific way</title>
	<author>fuzzyfuzzyfungus</author>
	<datestamp>1267901700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>This study seems to have a very bad case of "unconsciously idealizing the status quo and working from there". For instance: <br> <br>

"Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives. In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray. Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost."<br> <br>

That sounds pretty dire. And, it does in fact mean that SSDs won't be neat drop-in replacements for some legacy infrastructures. However, step back for a minute: Why did traditional systems have 50k or 100k RAID controllers connected to large numbers of HDDs? Mostly because the IOPs on an HDD, even a 15K RPM monster, sucked horribly. If 3 SSDs can swamp a RAID controller that could handle 60 drives, that is an overwhelmingly good thing. In fact, you might be able to ditch the pricey raid controller entirely, or move to a much smaller one, if 3 SSDs can do the work of 60 HDDs.<br> <br>

Now, for systems where bulk storage capacity is the point of the exercise, the ability to hang tray after tray full of disks off the RAID controller is necessary. However, that isn't the place where you would be buying expensive SSDs. Even the SSD vendors aren't even pretending that SSDs can cut it as capacity kings. For systems that are judged by their IOPS, though, the fact that the tradition involved hanging huge numbers of (often mostly empty, reading and writing only to the parts of the platter with the best access times) HDDs off extremely expensive RAID controllers shows that the past sucked, not that SSDs are bad.<br> <br>

For the obligatory car analogy: shortly after the d&#233;but of the automobile, manufacturers of horse-drawn carriages noted the fatal flaw of the new technology: "With a horse drawn carriage, a single buggy whip will serve to keep you moving for months, even years with the right horses. If you try to power your car with buggy whips, though, you could end up burning several buggy whips <i>per mile</i>, at huge expense, just to keep the engine running..."</htmltext>
<tokentext>This study seems to have a very bad case of " unconsciously idealizing the status quo and working from there " .
For instance : " Even the highest-performance RAID controllers today can not support the IOPS of just three of the fastest SSDs .
I am not talking about a disk tray ; I am talking about the whole RAID controller .
If you want full performance of expensive SSDs , you need to take your $ 50,000 or $ 100,000 RAID controller and not overpopulate it with too many drives .
In fact , most vendors today have between 16 and 60 drives in a disk tray and you can not even populate a whole tray .
Add to this that some RAID vendor 's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost .
" That sounds pretty dire .
And , it does in fact mean that SSDs wo n't be neat drop-in replacements for some legacy infrastructures .
However , step back for a minute : Why did traditional systems have 50k or 100k RAID controllers connected to large numbers of HDDs ?
Mostly because the IOPs on an HDD , even a 15K RPM monster , sucked horribly .
If 3 SSDs can swamp a RAID controller that could handle 60 drives , that is an overwhelmingly good thing .
In fact , you might be able to ditch the pricey raid controller entirely , or move to a much smaller one , if 3 SSDs can do the work of 60 HDDs .
Now , for systems where bulk storage capacity is the point of the exercise , the ability to hang tray after tray full of disks off the RAID controller is necessary .
However , that is n't the place where you would be buying expensive SSDs .
Even the SSD vendors are n't even pretending that SSDs can cut it as capacity kings .
For systems that are judged by their IOPS , though , the fact that the tradition involved hanging huge numbers of ( often mostly empty , reading and writing only to the parts of the platter with the best access times ) HDDs off extremely expensive RAID controllers shows that the past sucked , not that SSDs are bad .
For the obligatory car analogy : shortly after the début of the automobile , manufacturers of horse-drawn carriages noted the fatal flaw of the new technology : " With a horse drawn carriage , a single buggy whip will serve to keep you moving for months , even years with the right horses .
If you try to power your car with buggy whips , though , you could end up burning several buggy whips per mile , at huge expense , just to keep the engine running... "</tokentext>
<sentencetext>This study seems to have a very bad case of "unconsciously idealizing the status quo and working from there".
For instance:  

"Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs.
I am not talking about a disk tray; I am talking about the whole RAID controller.
If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives.
In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray.
Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost.
" 

That sounds pretty dire.
And, it does in fact mean that SSDs won't be neat drop-in  replacements for some legacy infrastructures.
However, step back for a minute: Why did traditional systems have 50k or 100k RAID controllers connected to large numbers of HDDs?
Mostly because the IOPs on an HDD, even a 15K RPM monster, sucked horribly.
If 3 SSDs can swamp a RAID controller that could handle 60 drives, that is an overwhelmingly good thing.
In fact, you might be able to ditch the pricey raid controller entirely, or move to a much smaller one, if 3 SSDs can do the work of 60 HDDs.
Now, for systems where bulk storage capacity is the point of the exercise, the ability to hang tray after tray full of disks off the RAID controller is necessary.
However, that isn't the place where you would be buying expensive SSDs.
Even the SSD vendors aren't even pretending that SSDs can cut it as capacity kings.
For systems that are judged by their IOPS, though, the fact that the tradition involved hanging huge numbers of (often mostly empty, reading and writing only to the parts of the platter with the best access times) HDDs off extremely expensive RAID controllers shows that the past sucked, not that SSDs are bad.
For the obligatory car analogy: shortly after the début of the automobile, manufacturers of horse-drawn carriages noted the fatal flaw of the new technology: "With a horse drawn carriage, a single buggy whip will serve to keep you moving for months, even years with the right horses.
If you try to power your car with buggy whips, though, you could end up burning several buggy whips per mile, at huge expense, just to keep the engine running..."</sentencetext>
</comment>
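A toy model of the "mostly empty" trick mentioned in the parenthetical above, usually called short-stroking: confining data to a fraction of each platter shortens the average seek and raises per-drive IOPS, at the cost of wasted capacity. The linear seek model and both constants are simplifying assumptions for illustration.

```python
FULL_STROKE_SEEK_MS = 3.5      # assumed average seek across a full platter, 15K drive
ROTATIONAL_LATENCY_MS = 2.0    # half a revolution at 15,000 RPM (60s/15000/2)

def iops(stroke_fraction: float) -> float:
    """Random IOPS under a crude linear seek-distance model."""
    seek = FULL_STROKE_SEEK_MS * stroke_fraction
    return 1000.0 / (seek + ROTATIONAL_LATENCY_MS)

print(f"full platter : {iops(1.0):.0f} IOPS")   # ~182
print(f"outer 20%    : {iops(0.2):.0f} IOPS")   # ~370: same drive, most capacity wasted
```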
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31403802</id>
	<title>Re:ZFS sidesteps the whole RAID controller problem</title>
	<author>jotaeleemeese</author>
	<datestamp>1268079240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Look at Sun, er... Oracle's storage solutions.</p><p>They designed some of their "open storage" offerings specifically to speed file system metadata with SSDs and the rest of the data with regular disks.</p><p>The interesting thing to note is that you could do all this yourself (the open storage moniker is not gratuitous) but they have done all the heavy lifting already, so if you have the money it is a good option.</p></htmltext>
<tokentext>Look at Sun , er... Oracle 's storage solutions .
They designed some of their " open storage " offerings specifically to speed file system metadata with SSDs and the rest of the data with regular disks .
The interesting thing to note is that you could do all this yourself ( the open storage moniker is not gratuitous ) but they have done all the heavy lifting already , so if you have the money it is a good option .</tokentext>
<sentencetext>Look at Sun, er... Oracle's storage solutions.
They designed some of their "open storage" offerings specifically to speed file system metadata with SSDs and the rest of the data with regular disks.
The interesting thing to note is that you could do all this yourself (the open storage moniker is not gratuitous) but they have done all the heavy lifting already, so if you have the money it is a good option.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381912</id>
	<title>why not skip wear leveling</title>
	<author>Anonymous</author>
	<datestamp>1267902960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>and use something along the lines of "http://en.wikipedia.org/wiki/UBIFS"</p><p>Because SSDs aren't spinning platter drives, what if we skip the part where we make the SSDs try to impersonate them?</p><p>Thoughts?</p></htmltext>
<tokentext>and use something along the lines of " http://en.wikipedia.org/wiki/UBIFS " .
Because SSDs are n't spinning platter drives , what if we skip the part where we make the SSDs try to impersonate them ?
Thoughts ?</tokentext>
<sentencetext>and use something along the lines of "http://en.wikipedia.org/wiki/UBIFS".
Because SSDs aren't spinning platter drives, what if we skip the part where we make the SSDs try to impersonate them?
Thoughts?</sentencetext>
</comment>
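A rough sketch of the bookkeeping this comment proposes to relocate: the flash translation layer inside an SSD steers each write toward the least-worn erase block, while a flash-aware filesystem like UBIFS does equivalent accounting itself on raw flash instead of hiding it behind a disk interface. Purely illustrative; no real FTL works this simply, and garbage collection is omitted.

```python
class ToyFTL:
    """Remaps logical writes onto the least-worn erase block (the essence of wear leveling)."""

    def __init__(self, n_blocks: int):
        self.erase_counts = [0] * n_blocks   # wear per physical erase block
        self.mapping = {}                    # logical block -> physical block

    def write(self, logical: int) -> int:
        # Pick the physical block with the lowest wear so far.
        physical = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
        self.erase_counts[physical] += 1
        self.mapping[logical] = physical
        return physical

ftl = ToyFTL(4)
for _ in range(8):
    ftl.write(0)                 # hammer a single logical block...
print(ftl.erase_counts)          # [2, 2, 2, 2]: ...yet wear spreads evenly
```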
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384324</id>
	<title>Re:RAID = Speed?</title>
	<author>Tsiangkun</author>
	<datestamp>1267876920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>No, data redundancy is why I have to go through my users' data and collapse identical files to a single copy.

RAID provides a backup against a hard drive failing.

RAID won't protect you from a controller that goes bad and starts writing bogus bits.</htmltext>
<tokentext>No , data redundancy is why I have to go through my users ' data and collapse identical files to a single copy .
RAID provides a backup against a hard drive failing .
RAID wo n't protect you from a controller that goes bad and starts writing bogus bits .</tokentext>
<sentencetext>No, data redundancy is why I have to go through my users' data and collapse identical files to a single copy.
RAID provides a backup against a hard drive failing.
RAID won't protect you from a controller that goes bad and starts writing bogus bits.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382102</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962</id>
	<title>ZFS sidesteps the whole RAID controller problem</title>
	<author>haemish</author>
	<datestamp>1267903500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>If you use ZFS with SSDs, it scales very nicely.  There isn't a bottleneck at a raid controller.  You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high performance array.</p></htmltext>
<tokentext>If you use ZFS with SSDs , it scales very nicely .
There is n't a bottleneck at a raid controller .
You can slam a pile of controllers into a chassis if you have bandwidth problems because you 've bought 100 SSDs - by having the RAID management outside the controller , ZFS can unify the whole lot in one giant high performance array .</tokentext>
<sentencetext>If you use ZFS with SSDs, it scales very nicely.
There isn't a bottleneck at a raid controller.
You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high performance array.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31386506</id>
	<title>Get rid of the raid controller, it's too slow</title>
	<author>hlge</author>
	<datestamp>1267897020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Why keep the RAID controller at all? It's likely the slowest CPU you have in your system anyway. SSDs and smart software-based, RAID-aware filesystems allow building a new type of storage hardware, with no need for a dedicated RAID controller. Already today, with OpenSolaris and a combination of SATA drives and a few SSDs, you can build a storage solution with very good performance. And if you need pure SSD IOPS and low access times, just replace your spinning drives with SSDs for even better performance. Your host CPU(s) will have a far better chance of keeping up with your SSD-based RAID.</htmltext>
<tokentext>Why keep the RAID controller at all ?
It 's likely the slowest CPU you have in your system anyway .
SSDs and smart software-based , RAID-aware filesystems allow building a new type of storage hardware , with no need for a dedicated RAID controller .
Already today , with OpenSolaris and a combination of SATA drives and a few SSDs , you can build a storage solution with very good performance .
And if you need pure SSD IOPS and low access times , just replace your spinning drives with SSDs for even better performance .
Your host CPU(s) will have a far better chance of keeping up with your SSD-based RAID .</tokentext>
<sentencetext>Why keep the RAID controller at all?
It's likely the slowest CPU you have in your system anyway.
SSDs and smart software-based, RAID-aware filesystems allow building a new type of storage hardware, with no need for a dedicated RAID controller.
Already today, with OpenSolaris and a combination of SATA drives and a few SSDs, you can build a storage solution with very good performance.
And if you need pure SSD IOPS and low access times, just replace your spinning drives with SSDs for even better performance.
Your host CPU(s) will have a far better chance of keeping up with your SSD-based RAID.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381762</id>
	<title>OT: "fast performance" is redundant</title>
	<author>noidentity</author>
	<datestamp>1267901340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>wear leveling can eat up most of a drive's bandwidth and make <b>write performance no faster than a hard drive</b></p></div>
</blockquote><p>It's not the performance that's no faster, it's the writing. So he should either say "...and make writes no faster than a hard drive's" or "...and make write performance no better than a hard drive's". Whenever I read this kind of redundancy, I can't help but imagine the author having trouble with indirection in a programming language, writing things like foo_ptr &gt; *bar_ptr.</p>
	</htmltext>
<tokentext>wear leveling can eat up most of a drive 's bandwidth and make write performance no faster than a hard drive .
It 's not the performance that 's no faster , it 's the writing .
So he should either say " ...and make writes no faster than a hard drive 's " or " ...and make write performance no better than a hard drive 's " .
Whenever I read this kind of redundancy , I ca n't help but imagine the author having trouble with indirection in a programming language , writing things like foo_ptr &gt; * bar_ptr .</tokentext>
<sentencetext>wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive
It's not the performance that's no faster, it's the writing.
So he should either say "...and make writes no faster than a hard drive's" or "...and make write performance no better than a hard drive's".
Whenever I read this kind of redundancy, I can't help but imagine the author having trouble with indirection in a programming language, writing things like foo_ptr &gt; *bar_ptr.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382102</id>
	<title>RAID = Speed?</title>
	<author>TangoMargarine</author>
	<datestamp>1267904580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I suppose it would be more important for enterprises, but personally, I wouldn't see speed as the primary purpose of having a RAID setup. Obviously it wouldn't be cool if it was really slow, but isn't data redundancy the primary purpose?</htmltext>
<tokentext>I suppose it would be more important for enterprises , but personally , I would n't see speed as the primary purpose of having a RAID setup .
Obviously it would n't be cool if it was really slow , but is n't data redundancy the primary purpose ?</tokentext>
<sentencetext>I suppose it would be more important for enterprises, but personally, I wouldn't see speed as the primary purpose of having a RAID setup.
Obviously it wouldn't be cool if it was really slow, but isn't data redundancy the primary purpose?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382050</id>
	<title>Re:Oops. I forgot to plan the array</title>
	<author>rubycodez</author>
	<datestamp>1267904280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>MSAs are low-performance crap anyway. Here's a quarter, kid; get yourself an EVA.</p></htmltext>
<tokentext>MSAs are low-performance crap anyway .
Here 's a quarter , kid ; get yourself an EVA .</tokentext>
<sentencetext>MSAs are low-performance crap anyway.
Here's a quarter, kid; get yourself an EVA.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382294</id>
	<title>Fusion IO = better than SSD + RAID</title>
	<author>Anonymous</author>
	<datestamp>1267905840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>http://www.fusionio.com/</p><p>We used these to solve a problem with a horrendously mismanaged (but exceedingly crucial) MySQL DB. We compared solutions on a dollar-per-IOPS basis and these came out ahead by far. For about $17k we got 320GB of space but at well over 100,000 IOPS. The fastest arrays we could cram into a server would only reach into the low tens of thousands.</p></htmltext>
<tokentext>http://www.fusionio.com/
We used these to solve a problem with a horrendously mismanaged ( but exceedingly crucial ) MySQL DB .
We compared solutions on a dollar-per-IOPS basis and these came out ahead by far .
For about $ 17k we got 320GB of space but at well over 100,000 IOPS .
The fastest arrays we could cram into a server would only reach into the low tens of thousands .</tokentext>
<sentencetext>http://www.fusionio.com/
We used these to solve a problem with a horrendously mismanaged (but exceedingly crucial) MySQL DB.
We compared solutions on a dollar-per-IOPS basis and these came out ahead by far.
For about $17k we got 320GB of space but at well over 100,000 IOPS.
The fastest arrays we could cram into a server would only reach into the low tens of thousands.</sentencetext>
</comment>
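The dollar-per-IOPS comparison this comment describes, made concrete with its own numbers; the figure for a conventional array is an assumption based only on the "low tens of thousands" remark.

```python
# From the comment: ~$17k for 320GB at well over 100,000 IOPS.
FUSION_IO_COST, FUSION_IO_IOPS = 17_000, 100_000
# Assumed for comparison: a pricier array topping out in the low tens of thousands of IOPS.
ARRAY_COST, ARRAY_IOPS = 50_000, 20_000

print(f"ioDrive setup: ${FUSION_IO_COST / FUSION_IO_IOPS:.2f} per IOPS")  # $0.17
print(f"disk array   : ${ARRAY_COST / ARRAY_IOPS:.2f} per IOPS")          # $2.50
```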
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381812</id>
	<title>In other news...</title>
	<author>Anonymous</author>
	<datestamp>1267901940000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>... researchers have found that putting a Formula One engine into a Mack truck wipes out the advantages of the 19,000 rpm.</p></htmltext>
<tokentext>... researchers have found that putting a Formula One engine into a Mack truck wipes out the advantages of the 19,000 rpm .</tokentext>
<sentencetext>... researchers have found that putting a Formula One engine into a Mack truck wipes out the advantages of the 19,000 rpm.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381980</id>
	<title>Oye</title>
	<author>Anonymous</author>
	<datestamp>1267903680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Firstly, "$50,000 or $100,000 RAID controller"? I think the author means Storage Array. Regular RAID controllers cost nowhere near that number. In fact, most enterprise Storage Arrays cost far more than "$50,000 or $100,000".</p><p>Secondly, they are also typically only certified for vendor provided disks (at $ludicrous), which seldom include SSDs as offering.</p><p>Thirdly, no one in their right mind is going to be using very expensive SSDs for sequential load applications, which regular disks are perfectly capable of for a fraction of the price. The only load that makes sense at that price point for the enterprise are database applications and others that utilize heavy random i/o workloads. Once you have that type of load, the performance of each SSD is going to be a fraction of the top sequential speed, but still far faster than a regular disk.</p><p>The article is FUD.</p></htmltext>
<tokenext>Firstly , " $ 50,000 or $ 100,000 RAID controller " ?
I think the author means Storage Array .
Regular RAID controllers cost nowhere near that number .
In fact , most enterprise Storage Arrays cost far more than " $ 50,000 or $ 100,000 " .
Secondly , they are also typically only certified for vendor-provided disks ( at $ ludicrous ) , which seldom include SSDs as an offering .
Thirdly , no one in their right mind is going to be using very expensive SSDs for sequential load applications , which regular disks are perfectly capable of for a fraction of the price .
The only load that makes sense at that price point for the enterprise is database applications and others that utilize heavy random i/o workloads .
Once you have that type of load , the performance of each SSD is going to be a fraction of the top sequential speed , but still far faster than a regular disk .
The article is FUD .</tokentext>
<sentencetext>Firstly, "$50,000 or $100,000 RAID controller"?
I think the author means Storage Array.
Regular RAID controllers cost nowhere near that number.
In fact, most enterprise Storage Arrays cost far more than "$50,000 or $100,000".
Secondly, they are also typically only certified for vendor-provided disks (at $ludicrous), which seldom include SSDs as an offering.
Thirdly, no one in their right mind is going to be using very expensive SSDs for sequential load applications, which regular disks are perfectly capable of for a fraction of the price.
The only load that makes sense at that price point for the enterprise is database applications and others that utilize heavy random i/o workloads.
Once you have that type of load, the performance of each SSD is going to be a fraction of the top sequential speed, but still far faster than a regular disk.
The article is FUD.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31386968</id>
	<title>Sucky raid controllers.</title>
	<author>bored</author>
	<datestamp>1267902660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Well, most of the raid controllers out there can't even keep up with the throughput of a half dozen modern magnetic drives either. Our application eats bandwidth, and it's been a real struggle to find controllers that can sustain higher transfer rates. Getting much more than 1GB/sec out of a RAID box is pretty much impossible no matter the cost. In the end we use fairly low end RAID disks ganged together via high end FC switches and let our software do the striping across them. AKA we gang a bunch of 600MB/sec RAID arrays together to get multiple GB/sec.</p></htmltext>
<tokentext>Well , most of the raid controllers out there ca n't even keep up with the throughput of a half dozen modern magnetic drives either .
Our application eats bandwidth , and it 's been a real struggle to find controllers that can sustain higher transfer rates .
Getting much more than 1GB/sec out of a RAID box is pretty much impossible no matter the cost .
In the end we use fairly low end RAID disks ganged together via high end FC switches and let our software do the striping across them .
AKA we gang a bunch of 600MB/sec RAID arrays together to get multiple GB/sec .</tokentext>
<sentencetext>Well, most of the raid controllers out there can't even keep up with the throughput of a half dozen modern magnetic drives either.
Our application eats bandwidth, and it's been a real struggle to find controllers that can sustain higher transfer rates.
Getting much more than 1GB/sec out of a RAID box is pretty much impossible no matter the cost.
In the end we use fairly low end RAID disks ganged together via high end FC switches and let our software do the striping across them.
AKA we gang a bunch of 600MB/sec RAID arrays together to get multiple GB/sec.</sentencetext>
</comment>
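The ganging strategy above, as arithmetic: stripe in software across N modest arrays to reach a bandwidth target no single controller sustains. The 600MB/sec per-array figure comes from the comment; the target is an assumed example.

```python
import math

PER_ARRAY_MBPS = 600     # each low-end RAID array, per the comment
TARGET_MBPS = 3_000      # assumed application requirement ("multiple GB/sec")

arrays_needed = math.ceil(TARGET_MBPS / PER_ARRAY_MBPS)
print(f"{arrays_needed} x {PER_ARRAY_MBPS} MB/s arrays ~= "
      f"{arrays_needed * PER_ARRAY_MBPS} MB/s aggregate")   # 5 arrays ~= 3000 MB/s
```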
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381784</id>
	<title>Wear?</title>
	<author>l3ert</author>
	<datestamp>1267901580000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>The usually employed term is 'gear'. And what the hell is SSD? I hope the article doesn't mean SSC, that place is trivial now, even at level 70. No reasons to wipe a raid there.</htmltext>
<tokentext>The usually employed term is 'gear' .
And what the hell is SSD ?
I hope the article does n't mean SSC , that place is trivial now , even at level 70 .
No reasons to wipe a raid there .</tokentext>
<sentencetext>The usually employed term is 'gear'.
And what the hell is SSD?
I hope the article doesn't mean SSC, that place is trivial now, even at level 70.
No reasons to wipe a raid there.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381882</id>
	<title>Hold on now...</title>
	<author>chronosan</author>
	<datestamp>1267902600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That guy from Samsung (?) who had a billion SSDs RAIDed up for a demo didn't seem to be doing too bad... right?</htmltext>
<tokentext>That guy from Samsung ( ? ) who had a billion SSDs RAIDed up for a demo did n't seem to be doing too bad... right ?</tokentext>
<sentencetext>That guy from Samsung (?) who had a billion SSDs RAIDed up for a demo didn't seem to be doing too bad... right?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381760</id>
	<title>Seek time</title>
	<author>Anonymous</author>
	<datestamp>1267901280000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>The real advantage of solid state storage is seek time, not read/write times. They don't beat conventional drives by much at sustained IO. Maybe this will change in the future. RAID just isn't meant for SSD devices. RAID is a fix for the unreliable nature of magnetic disks.</p></htmltext>
<tokentext>The real advantage of solid state storage is seek time , not read/write times .
They do n't beat conventional drives by much at sustained IO .
Maybe this will change in the future .
RAID just is n't meant for SSD devices .
RAID is a fix for the unreliable nature of magnetic disks .</tokentext>
<sentencetext>The real advantage of solid state storage is seek time, not read/write times.
They don't beat conventional drives by much at sustained IO.
Maybe this will change in the future.
RAID just isn't meant for SSD devices.
RAID is a fix for the unreliable nature of magnetic disks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382168</id>
	<title>Re:Seek time</title>
	<author>Rockoon</author>
	<datestamp>1267905000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>They don't beat conventional drives by much at sustained IO.</p> </div><p>
umm, err?<br>
<br>
Which platter drive did you have in mind that performs similarly to a high-performance SSD? Even Seagate's 15K Cheetah only pushes 100 to 150MB/sec sustained read and write. The latest performance SSDs (such as the SATA2 Colossus) have sustained writes at "only" 220MB/sec and with better performance (260MB/sec) literally everywhere else.</p>
	</htmltext>
<tokentext>They do n't beat conventional drives by much at sustained IO .
umm , err ?
Which platter drive did you have in mind that performs similarly to a high-performance SSD ?
Even Seagate 's 15K Cheetah only pushes 100 to 150MB/sec sustained read and write .
The latest performance SSDs ( such as the SATA2 Colossus ) have sustained writes at " only " 220MB/sec and with better performance ( 260MB/sec ) literally everywhere else .</tokentext>
<sentencetext>They don't beat conventional drives by much at sustained IO.
umm, err?
Which platter drive did you have in mind that performs similarly to a high-performance SSD?
Even Seagate's 15K Cheetah only pushes 100 to 150MB/sec sustained read and write.
The latest performance SSDs (such as the SATA2 Colossus) have sustained writes at "only" 220MB/sec and with better performance (260MB/sec) literally everywhere else.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381760</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382336</id>
	<title>RAID for what?</title>
	<author>Anonymous</author>
	<datestamp>1267906020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>If using RAID for mirroring drives, well, you must also consider the failure rate of drives, as it is all about fault tolerance, no? It is reported that SSDs are far more durable, so the question should be: what does it take to match the fault tolerance of an HDD RAID with an SSD RAID? Only after that can we truly compare the pros and cons of their performance sacrifices.</p><p>On a side note, you can now get a Sony laptop that comes equipped with a RAID 0 quad SSD drive.<br><a href="http://www.sonystyle.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=10551&amp;storeId=10151&amp;langId=-1&amp;categoryId=8198552921644570897" title="sonystyle.com">http://www.sonystyle.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=10551&amp;storeId=10151&amp;langId=-1&amp;categoryId=8198552921644570897</a> [sonystyle.com]</p><p>I assume you would only do this with SSDs, given that they have a much lower failure rate than HDDs.</p></htmltext>
<tokentext>If using RAID for mirroring drives , well , you must also consider the failure rate of drives , as it is all about fault tolerance , no ?
It is reported that SSDs are far more durable , so the question should be : what does it take to match the fault tolerance of an HDD RAID with an SSD RAID ?
Only after that can we truly compare the pros and cons of their performance sacrifices .
On a side note , you can now get a Sony laptop that comes equipped with a RAID 0 quad SSD drive .
http://www.sonystyle.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=10551&amp;storeId=10151&amp;langId=-1&amp;categoryId=8198552921644570897 [ sonystyle.com ]
I assume you would only do this with SSDs , given that they have a much lower failure rate than HDDs .</tokentext>
<sentencetext>If using RAID for mirroring drives, well, you must also consider the failure rate of drives, as it is all about fault tolerance, no?
It is reported that SSDs are far more durable, so the question should be: what does it take to match the fault tolerance of an HDD RAID with an SSD RAID?
Only after that can we truly compare the pros and cons of their performance sacrifices.
On a side note, you can now get a Sony laptop that comes equipped with a RAID 0 quad SSD drive.
http://www.sonystyle.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=10551&amp;storeId=10151&amp;langId=-1&amp;categoryId=8198552921644570897 [sonystyle.com]
I assume you would only do this with SSDs, given that they have a much lower failure rate than HDDs.</sentencetext>
</comment>
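A first-order version of the comparison this comment asks for: annual probability of data loss for a mirrored HDD pair versus a single SSD, treating failures as independent and ignoring rebuild windows. Both failure rates are assumptions for illustration.

```python
HDD_AFR = 0.05    # assumed annual failure rate of one hard drive
SSD_AFR = 0.015   # assumed (lower) annual failure rate of one SSD

single_hdd_loss = HDD_AFR
hdd_mirror_loss = HDD_AFR ** 2          # both members of the pair fail in the year
single_ssd_loss = SSD_AFR

print(f"single HDD : {single_hdd_loss:.4f}")   # 0.0500
print(f"HDD mirror : {hdd_mirror_loss:.4f}")   # 0.0025
print(f"single SSD : {single_ssd_loss:.4f}")   # 0.0150
# Even with a lower per-drive failure rate, one SSD is likelier to fail in a
# year than a mirrored HDD pair is to lose both members, so redundancy still matters.
```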
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382158</id>
	<title>Bandwidth limit doesn't "wipe out" SSD advantage</title>
	<author>Anonymous</author>
	<datestamp>1267904940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Bandwidth is the limiting factor for some SSD RAIDs today, but it doesn't "wipe out the advantage" of SSDs.  8 mirrored pairs of 15K RPM hard drives would have about 150*8=1200 random writes a second.  A *single* second-generation Intel X-25M has 6,000 write IOPS, and a single 6Gbps SATA RAID connection can handle at least 60K IOPS assuming 4k blocks and every block gets sent out twice (software RAID).</p><p>The way to deal with wear leveling, and the other SSD controller problems the linked article raises, is to get an SSD with a good controller and large write cache; Intel has the best, then Indilinx.  (You can see for yourself by looking at the SSD performance charts at Tom's Hardware or any number of comparisons out there.  Note that controller maker != brand on the SSD box; you have to Google a bit.)  The good SSDs aren't much more expensive per gig than the JMicron ones, so there isn't much excuse.</p><p>And sure, it would be great if RAID cards understood SSDs' nonstandard SMART statistics and used them to autoprovision spares for the drives most likely to fail next, but if you really need thousands more IOPS -- i.e., your database is crashing under crazy load -- and the cost doesn't stop you, then a little thing like hardware autoprovisioning of spares won't stop you.</p><p>Why on earth does the article even mention RAID-5 or 6 with SSDs?  If you want SSDs or even 15K disks, you certainly don't want RAID-5 or 6, because your RAID performance will be limited by the speed of the parity disks.  End of story.</p><p>Finally, as other commenters mentioned, enterprise disk interfaces are certainly gonna catch up as disks get faster.</p><p>The article's tone sounds like your basic kneejerk contrarianism -- "everyone says SSDs are great; here's why they're wrong" -- but it's mostly just incomplete (and, as always, posted to Slashdot with an even more fragmentary/contrarian/exaggerated summary) rather than outright wrong; you should certainly think about your SSD controller maker and RAID card before your company goes and shells out 200 Benjamins for big fast SSD arrays for your main and backup DB servers.  But reports of the death of enterprise SSDs have been greatly exaggerated.</p><p>In other news, if you *really* want a reason to consider holding off on SSDs, weigh their cost-effectiveness against the other ways to keep your app running nicely under load: getting more RAM or paying employees to add caching and tune their DB accesses, or maybe even doing scale-out with tons of DB servers (which has plenty of expense of its own in development time).  What's right for you mostly depends on the size of your data and working set, your workload, and how expensive scale-out and optimizations would be on the software side.</p></htmltext>
<tokenext>Bandwidth is the limiting factor for some SSD RAIDs today , but it does n't " wipe out the advantage " of SSDs .
8 mirrored pairs of 15K RPM hard drives would have about 150 * 8 = 1200 random writes a second .
A * single * second-generation Intel X-25M has 6,000 write IOPS a second , and a single 6Gbps SATA RAID connection can handle at least 60K IOPS assuming 4k blocks and every block gets sent out twice ( software RAID ) .The way to deal with wear leveling , and the otherSSD controller problems the linked article raises , is to get an SSD with a good controller and large write cache ; Intel has the best , then Indilinx .
( You can see for yourself by looking at the SSD performance charts at Tom 's Hardware or any number of comparisons out there .
Note that controller maker ! = brand on the SSD box ; you have to Google a bit .
) The good SSDs are n't much more expensive per gig than the JMicron ones , so there is n't much excuse.And sure , it would be great if RAID cards understood SSDs ' nonstandard SMART statistics and used them to autoprovision spares for the drives most likely to fail next , but if you really need thousands more IOPS -- i.e. , your database is crashing under crazy load -- and the cost does n't stop you , then a little thing like hardware autoprovisioning of spares wo n't stop you.Why on earth does the article even mention RAID-5 or 6 with SSDs ?
If you want SSDs or even 15K disks , you certainly do n't want RAID-5 or 6 , because your RAID performance will be limited by the speed of the parity disks .
End of story.Finally , as other commenters mentioned , enterprise disk interfaces are certainly gon na catch up as disks get faster.The article 's tone sounds like your basic kneejerk contrarianism -- " everyone says SSDs are great ; here 's why they 're wrong " -- but it 's mostly just incomplete ( and , as always , posted to Slashdot with an even more fragmentary/contrarian/exaggerated summary ) rather than outright wrong ; you should certainly think about your SSD controller maker and RAID card before your company goes and shells out 200 Benjamins for big fast SSD arrays for your main and backup DB servers .
But reports of the death of enterprise SSDs have been greatly exaggerated.In other news , if you * really * want a reason to consider holding off on SSDs , weigh their cost-effectiveness against the other ways to keep your app running nicely under load : getting more RAM or paying employees to add caching and tune their DB accesses , or maybe even doing scale-out with tons of DB servers ( which has plenty of expense of its own in development time ) .
What 's right for you mostly depends on the size of your data and working set , your workload , and how expensive scale-out and optimizations would be on the software side .</tokentext>
<sentencetext>Bandwidth is the limiting factor for some SSD RAIDs today, but it doesn't "wipe out the advantage" of SSDs.
8 mirrored pairs of 15K RPM hard drives would have about 150*8=1200 random writes a second.
A *single* second-generation Intel X-25M has 6,000 write IOPS a second, and a single 6Gbps SATA RAID connection can handle at least 60K IOPS assuming 4k blocks and every block gets sent out twice (software RAID).The way to deal with wear leveling, and the otherSSD controller problems the linked article raises, is to get an SSD with a good controller and large write cache; Intel has the best, then Indilinx.
(You can see for yourself by looking at the SSD performance charts at Tom's Hardware or any number of comparisons out there.
Note that controller maker != brand on the SSD box; you have to Google a bit.
)  The good SSDs aren't much more expensive per gig than the JMicron ones, so there isn't much excuse.And sure, it would be great if RAID cards understood SSDs' nonstandard SMART statistics and used them to autoprovision spares for the drives most likely to fail next, but if you really need thousands more IOPS -- i.e., your database is crashing under crazy load -- and the cost doesn't stop you, then a little thing like hardware autoprovisioning of spares won't stop you.Why on earth does the article even mention RAID-5 or 6 with SSDs?
If you want SSDs or even 15K disks, you certainly don't want RAID-5 or 6, because your RAID performance will be limited by the speed of the parity disks.
End of story.Finally, as other commenters mentioned, enterprise disk interfaces are certainly gonna catch up as disks get faster.The article's tone sounds like your basic kneejerk contrarianism -- "everyone says SSDs are great; here's why they're wrong" -- but it's mostly just incomplete (and, as always, posted to Slashdot with an even more fragmentary/contrarian/exaggerated summary) rather than outright wrong; you should certainly think about your SSD controller maker and RAID card before your company goes and shells out 200 Benjamins for big fast SSD arrays for your main and backup DB servers.
But reports of the death of enterprise SSDs have been greatly exaggerated.In other news, if you *really* want a reason to consider holding off on SSDs, weigh their cost-effectiveness against the other ways to keep your app running nicely under load: getting more RAM or paying employees to add caching and tune their DB accesses, or maybe even doing scale-out with tons of DB servers (which has plenty of expense of its own in development time).
What's right for you mostly depends on the size of your data and working set, your workload, and how expensive scale-out and optimizations would be on the software side.</sentencetext>
</comment>
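The arithmetic in the comment above holds up under its stated assumptions; here is the same math spelled out, with the 8b/10b encoding overhead of 6Gbps SATA made explicit (the 600 MB/s payload figure is an assumption of this sketch):

```python
# Sanity-check of the numbers in the comment above.  Per-device IOPS
# figures are the commenter's; the link math assumes 4 KiB blocks and a
# software-RAID mirror that sends every block twice.

hdd_iops = 150            # one 15K RPM drive, random writes
mirrored_pairs = 8
print("8 mirrored HDD pairs:", hdd_iops * mirrored_pairs, "write IOPS")  # 1200

x25m_write_iops = 6000    # second-gen Intel X-25M, per the comment
print("one X-25M:", x25m_write_iops, "write IOPS")

# 6 Gbps SATA uses 8b/10b encoding, so roughly 600 MB/s of payload bandwidth.
link_bytes_per_s = 600e6
block = 4096              # 4 KiB
writes_per_block = 2      # mirror: each block sent twice
print("link ceiling:", int(link_bytes_per_s / (block * writes_per_block)), "IOPS")
# ~73K IOPS -- comfortably above the "at least 60K" claimed above.
```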
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382256</id>
	<title>Re:ZFS sidesteps the whole RAID controller problem</title>
	<author>Anonymous</author>
	<datestamp>1267905600000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>If you use ZFS with SSDs, it scales very nicely.  There isn't a bottleneck at a raid controller.  You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high performance array.</p></div><p>If performance is that critical, you'd be foolish to use ZFS.  Get a real high-performance file system.  One that's also mature and can actually be recovered if it ever does fail catastrophically.  (Yes, ZFS can fail catastrophically.  Just Google "ZFS data loss"...)</p><p>If you want to stay with Sun, use QFS.  You can even use the same filesystems as an HSM, because SAMFS is really just QFS with tapes (don't use disk archives unless you've got more money than sense...).</p><p>Or you can use IBM's GPFS.</p><p>If you really want to see a fast and HUGE file system, use QFS or GPFS and put the metadata on SSDs and the contents on lots of big SATA drives.  Yes, SATA.  Because when you start getting into trays and trays full of disks attached to RAID controllers, arrays that consist of FC or SAS drives aren't much if any faster than arrays that consist of SATA drives.  But the FC/SAS arrays ARE much smaller AND more expensive.</p><p>Both QFS and GPFS beat the living snot out of ZFS on performance.  And no, NOTHING free comes close.  And nothing proprietary, either, although an uncrippled XFS on Irix might do it, if you could get real Irix running on up-to-date hardware.  (Yes, the XFS in Linux is crippleware...)</p>
	</htmltext>
<tokenext>If you use ZFS with SSDs , it scales very nicely .
There is n't a bottleneck at a raid controller .
You can slam a pile of controllers into a chassis if you have bandwidth problems because you 've bought 100 SSDs - by having the RAID management outside the controller , ZFS can unify the whole lot in one giant high performance array.If performance is that critical , you 'd be foolish to use ZFS .
Get a real high-performance file system .
One that 's also mature and can actually be recovered if it ever does fail catastrophically .
( Yes , ZFS can fail catastrophically .
Just Google " ZFS data loss " ... ) If you want to stay with Sun , use QFS .
You can even use the same filesystems as an HSM , because SAMFS is really just QFS with tapes ( do n't use disk archives unless you 've got more money than sense... ) .Or you can use IBM 's GPFS.If you really want to see a fast and HUGE file system , use QFS or GPFS and put the metadata on SSDs and the contents on lots of big SATA drives .
Yes , SATA .
Because when you start getting into trays and trays full of disks attached to RAID controllers , arrays that consist of FC or SAS drives are n't much if any faster than arrays that consist of SATA drives .
But the FC/SAS arrays ARE much smaller AND more expensive.Both QFS and GPFS beat the living snot out of ZFS on performance .
And no , NOTHING free comes close .
And nothing proprietary , either , although an uncrippled XFS on Irix might do it , if you could get real Irix running on up-to-date hardware .
( Yes , the XFS in Linux is crippleware... )</tokentext>
<sentencetext>If you use ZFS with SSDs, it scales very nicely.
There isn't a bottleneck at a raid controller.
You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high performance array.If performance is that critical, you'd be foolish to use ZFS.
Get a real high-performance file system.
One that's also mature and can actually be recovered if it ever does fail catastrophically.
(Yes, ZFS can fail catastrophically.
Just Google "ZFS data loss"...)If you want to stay with Sun, use QFS.
You can even use the same filesystems as an HSM, because SAMFS is really just QFS with tapes (don't use disk archives unless you've got more money than sense...).Or you can use IBM's GPFS.If you really want to see a fast and HUGE file system, use QFS or GPFS and put the metadata on SSDs and the contents on lots of big SATA drives.
Yes, SATA.
Because when you start getting into trays and trays full of disks attached to RAID controllers, arrays that consist of FC or SAS drives aren't much if any faster than arrays that consist of SATA drives.
But the FC/SAS arrays ARE much smaller AND more expensive.Both QFS and GPFS beat the living snot out of ZFS on performance.
And no, NOTHING free comes close.
And nothing proprietary, either, although an uncrippled XFS on Irix might do it, if you could get real Irix running on up-to-date hardware.
(Yes, the XFS in Linux is crippleware...)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381932</id>
	<title>Software RAID?</title>
	<author>MikeUW</author>
	<datestamp>1267903140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So does anyone know if this applies to software RAID configurations?</p><p>Just curious...</p></htmltext>
<tokenext>So does anyone know if this applies to software RAID configurations ? Just curious.. .</tokentext>
<sentencetext>So does anyone know if this applies to software RAID configurations?Just curious...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382248</id>
	<title>Re:Seek time</title>
	<author>rcamans</author>
	<datestamp>1267905540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>RAID is meant to increase throughput and reliability. A single drive has nowhere near the throughput of an array of drives on a good RAID controller. But RAID controllers were designed expecting millisecond seek times, and SSDs have microsecond seek times. So RAID controllers need a redesign for the faster seek times.</p></htmltext>
<tokenext>RAID is meant to increase throughput and reliability .
Single drives did not have anywhere as much throughput as an array of drives on a good RAID controller .
. But RAID controllers were designed expecting msec seek times , and SSDs have usec seek times .
So RAID controllers need redesign for the faster seek times .</tokentext>
<sentencetext>RAID is meant to increase throughput and reliability.
Single drives did not have anywhere as much throughput as an array of drives on a good RAID controller.
. But RAID controllers were designed expecting msec seek times, and SSDs have usec seek times.
So RAID  controllers need redesign for the faster seek times.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381760</parent>
</comment>
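A toy serial-latency model makes that point concrete: fixed controller overhead that is noise next to a disk seek dominates an SSD access. All three latencies below are assumed round numbers, and queuing is ignored:

```python
# Rough illustration of why a controller designed around millisecond seeks
# throttles a microsecond-class SSD.  All latencies are assumptions.

controller_overhead = 200e-6   # assume 200 us spent per IO inside the controller
hdd_seek = 5e-3                # ~5 ms average seek, 15K RPM class
ssd_access = 100e-6            # ~100 us flash access

for name, dev in [("HDD", hdd_seek), ("SSD", ssd_access)]:
    alone = 1 / dev                              # serial IOPS, device alone
    behind_ctrl = 1 / (dev + controller_overhead)  # serial IOPS behind controller
    print(f"{name}: {alone:,.0f} IOPS alone, {behind_ctrl:,.0f} behind controller")
# HDD: 200 -> 192 IOPS (a 4% loss).  SSD: 10,000 -> 3,333 IOPS (a 67% loss).
```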
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381922</id>
	<title>Re:Oops. I forgot to plan the array</title>
	<author>jd2112</author>
	<datestamp>1267903020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>OTOH: Who pays 100K for one of those? That has to be including the Enterprise 120GB SSD's at $4k each, right?</p></div><p>
That $100K gets you more than bare drives: you get the flexibility to carve out partitions however you like, configuring them for maximum performance or whatever level of redundancy you need. You get snapshot backups, offsite replication, etc. (at additional cost, of course...)<br> <br>
And, of course, you also get the letters 'E', 'M', and 'C'.</p>
	</htmltext>
<tokenext>OTOH : Who pays 100K for one of those ?
That has to be including the Enterprise 120GB SSD 's at $ 4k each , right ?
That $ 100K gets you more than bare drives , You get the flexibility to carve out partitions however you like , configuring them for maximum performance or whatever level of redundancy you need .
You get snapshot backups , offsite replication , etc .
( At additional cost of course... ) And , of course you also get the letters 'E ' , 'M ' , and 'C' .</tokentext>
<sentencetext>OTOH: Who pays 100K for one of those?
That has to be including the Enterprise 120GB SSD's at $4k each, right?
That $100K gets you more than bare drives, You get the flexibility to carve out partitions however you like, configuring them for maximum performance or whatever level of redundancy you need.
You get snapshot backups, offsite replication, etc.
(At additional cost of course...) 
And, of course you also get the letters 'E', 'M', and 'C'.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381694</id>
	<title>Correction:</title>
	<author>raving griff</author>
	<datestamp>1267900800000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Wear Leveling, RAID Can Wipe Out SSD Advantage <b>for enterprise.</b> </p><p>While it may not be efficient to slap together a platter of 16 SSDs, it <b>is</b> worthwhile to upgrade personal computers to use an SSD.</p></htmltext>
<tokenext>Wear Leveling , RAID Can Wipe Out SSD Advantage for enterprise .
While it may not be efficient to slap together a platter of 16 SSDs , it is worthwhile to upgrade personal computers to use an SSD .</tokentext>
<sentencetext>Wear Leveling, RAID Can Wipe Out SSD Advantage for enterprise.
While it may not be efficient to slap together a platter of 16 SSDs, it is worthwhile to upgrade personal computers to use an SSD.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382358</id>
	<title>Re:Little Flawed study.</title>
	<author>Z00L00K</author>
	<datestamp>1267906260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Even if the bottleneck moves from disk to controller, the overall performance will improve. So it's not that SSDs are bad, it's just that the controllers need to keep up with them.</p><p>On the other hand, RAID controllers are used for reliability and not just for performance. And in many cases it's a tradeoff: large reliable storage is one thing, while high performance is another. Sometimes you want both, and then it gets expensive, but if you can live with just one of the alternatives you will get off relatively easily.</p><p>And if you really want performance enhancement you may want to look into a mix of SSDs and ordinary disks. How you can tune it for best performance depends on the actual solution.</p></htmltext>
<tokenext>Even if the bottleneck moves from disk to controller the overall performance will improve .
So it 's not that SSD : s are bad , it 's just that the controllers needs to keep up with them.On the other hand - raid controllers are used for reliability and not just for performance .
And in many cases it 's a tradeoff - large reliable storage is one thing while high performance is another .
Sometimes you want both and then it gets expensive , but if you can live with just one of the alternatives you will get off relatively easy.And if you really want performance enhancement you may want to look into a mix of SSD : s and ordinary disks .
It depends on the actual solution how you can tune it for best performance .</tokentext>
<sentencetext>Even if the bottleneck moves from disk to controller the overall performance will improve.
So it's not that SSD:s are bad, it's just that the controllers needs to keep up with them.On the other hand - raid controllers are used for reliability and not just for performance.
And in many cases it's a tradeoff - large reliable storage is one thing while high performance is another.
Sometimes you want both and then it gets expensive, but if you can live with just one of the alternatives you will get off relatively easy.And if you really want performance enhancement you may want to look into a mix of SSD:s and ordinary disks.
It depends on the actual solution how you can tune it for best performance.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688</parent>
</comment>
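The value of the SSD-plus-disk mix this comment suggests comes down to how often the fast tier is hit. A small sketch with assumed latencies:

```python
# Sketch of the "mix SSDs and ordinary disks" idea: average access time of a
# tiered setup as a function of how often the hot tier is hit.  Hit rates and
# latencies are assumptions for illustration.

ssd_latency = 100e-6    # assume 100 us per SSD access
hdd_latency = 8e-3      # assume 8 ms average for a SATA disk

for hit_rate in (0.5, 0.8, 0.9, 0.99):
    avg = hit_rate * ssd_latency + (1 - hit_rate) * hdd_latency
    print(f"{hit_rate:.0%} SSD hit rate -> {avg * 1e3:.2f} ms average access")
# A 90% hit rate already brings the average under 1 ms; the tuning the
# comment mentions is largely about getting the hot data onto the SSD tier.
```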
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383476</id>
	<title>Well you know it is confused</title>
	<author>Sycraft-fu</author>
	<datestamp>1267870200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just based on the fact that it says "$50,000 or $100,000 RAID controller." Ummm, what? Where the hell do you spend that kind of money on a RAID controller? A RAID controller for a few disks is a couple hundred bucks at most. For high-end controllers you are talking a few thousand. Like Adaptec's 5805Z, which has a dual-core 1.2GHz chip on it for all the RAID calculations and supports up to 256 disks. Cost? About $1000 from Adaptec. Or how about the 3Ware 9690SA-8E, with 8 external SAS connectors for shelves and 128-disk support, going for about $700 online.</p><p>So anyone who's trying to pretend that RAID controllers cost 5-6 figures is just making shit up. Yes, you can pay that much for a NAS, but you aren't paying for a RAID controller. You are paying for a computer with a custom OS, controllers, shelves, disks, monitoring and so on. A complete solution, in other words. Also, if you are spending that kind of money, it is a really serious NAS. We bought a NetApp 2020 and it didn't cost $50,000.</p><p>Then, as you say, it is not bad to hit the performance limits. While on a small scale you may be mostly buying RAID for performance reasons, that isn't the reason on a large scale. The reason is space. We got our NetApp because we need a lot of reliable central storage for our department. Yes, it needs to have reasonable performance as well, but really the network is the limit there, not the NAS. The point of it is that it holds a ton of disks. So, if we filled it full of SSDs and those were higher performance than it could handle, we'd not care. Performance with magnetic disks is already as good as we need it to be.</p></htmltext>
<tokenext>Just based on the face that it says " $ 50,000 or $ 100,000 RAID controller .
" Ummm what ?
Where the hell do you spend that kind of money on a RAID controller ?
A RAID controller for a few disks is a couple hundred bucks at most .
For high end controllers you are talking a few thousands .
Like Adaptec 's 5805Z which has a dual core 1.2GHz chip on it for all the RAID calculations and supports up to 256 disks .
Cost ? About $ 1000 from Adaptec .
Or how about the 3Ware 9690SA-8E , 8 external SAS connectors for shelves with 128 disk support .
Going for about $ 700 online.So anyone who 's trying to pretend like RAID controllers cost 5-6 figures is just making shit up .
Yes , you can pay that much for a NAS , but you are n't paying for a RAID controller .
You are paying for a computer with custom OS , controllers , shelves , disks , monitoring and so on .
A complete solution , in other words .
Also , if you are spending that kind of money , it is a really serious NAS .
We bought a NetApp 2020 and it did n't cost $ 50,000.Then , as you say , it is not bad to hit the performance limits .
While on a small scale you may be mostly buying RAID for performance reasons , that is n't the reason on a large scale .
The reason is space .
We got our NetApp because we need a lot of reliable central storage for our department .
Yes , it needs to have reasonable performance as well , but really the network is the limit there , not the NAS .
The point of it is that it holds a ton of disks .
So , if we filled it full of SSDs and those were higher performance than it could handle , we 'd not care .
Performance with magnetic disks is already as good as we need it to be .</tokentext>
<sentencetext>Just based on the face that it says "$50,000 or $100,000 RAID controller.
" Ummm what?
Where the hell do you spend that kind of money on a RAID controller?
A RAID controller for a few disks is a couple hundred bucks at most.
For high end controllers you are talking a few thousands.
Like Adaptec's 5805Z which has a dual core 1.2GHz chip on it for all the RAID calculations and supports up to 256 disks.
Cost? About $1000 from Adaptec.
Or how about the 3Ware 9690SA-8E, 8 external SAS connectors for shelves with 128 disk support.
Going for about $700 online.So anyone who's trying to pretend like RAID controllers cost 5-6 figures is just making shit up.
Yes, you can pay that much for a NAS, but you aren't paying for a RAID controller.
You are paying for a computer with custom OS, controllers, shelves, disks, monitoring and so on.
A complete solution, in other words.
Also, if you are spending that kind of money, it is a really serious NAS.
We bought a NetApp 2020 and it didn't cost $50,000.Then, as you say, it is not bad to hit the performance limits.
While on a small scale you may be mostly buying RAID for performance reasons, that isn't the reason on a large scale.
The reason is space.
We got our NetApp because we need a lot of reliable central storage for our department.
Yes, it needs to have reasonable performance as well, but really the network is the limit there, not the NAS.
The point of it is that it holds a ton of disks.
So, if we filled it full of SSDs and those were higher performance than it could handle, we'd not care.
Performance with magnetic disks is already as good as we need it to be.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381800</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688</id>
	<title>Little Flawed study.</title>
	<author>OS24Ever</author>
	<datestamp>1267900740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>This assumes that RAID controller manufacturers won't be making any changes, though.</p><p>RAID for years has relied on millisecond access times. So why spend a lot of money on an ASIC &amp; subsystem that can go faster? Take a RAID card designed for (relatively) slow spinning disks and attach it to SSDs, and of course the RAID card is going to be a bottleneck.</p><p>However, subsystems are going to be designed to work with SSDs' much faster access times. When that happens, this so-called 'bottleneck' is gone. You know every major disk subsystem vendor is working on these. Sounds like a disk vendor is sponsoring 'studies' to convince people not to invest in SSD technologies now, knowing that a lot of companies are looking at big purchases this year because of the age of equipment after the downturn.</p></htmltext>
<tokenext>This assumes that RAID controller manufacturers wo n't be making any changes though.RAID for years has relied on millisecond access times .
So why spend a lot of money on an ASIC &amp; Subsystem that can go faster ?
So taking a RAID card designed for slow ( relatively ) spinning disks and attaching them to SSD of course the RAID card is going to be a bottleneck.However subsystems are going to be designed to work with SSD that has much higher access times .
When that happens , this so called 'bottleneck ' is gone .
You know every major disk subsystem vendor is working on these .
Sounds like a disk vendor is sponsoring 'studies ' to convince people not to invest in SSD technologies now knowing that a lot of companies are looking at big purchases this year because of the age of equipment after the downturn .</tokentext>
<sentencetext>This assumes that RAID controller manufacturers won't be making any changes though.RAID for years has relied on millisecond access times.
So why spend a lot of money on an ASIC &amp; Subsystem that can go faster?
So taking a RAID card designed for slow (relatively) spinning disks and attaching them to SSD of course the RAID card is going to be a bottleneck.However subsystems are going to be designed to work with SSD that has much higher access times.
When that happens, this so called 'bottleneck' is gone.
You know every major disk subsystem vendor is working on these.
Sounds like a disk vendor is sponsoring 'studies' to convince people not to invest in SSD technologies now knowing that a lot of companies are looking at big purchases this year because of the age of equipment after the downturn.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31385794</id>
	<title>Re:ZFS sidesteps the whole RAID controller problem</title>
	<author>blackraven14250</author>
	<datestamp>1267889400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>OMFG, you used a lot of acronyms. WTF, man?</htmltext>
<tokenext>OMFG you used alot of acronyms WTF man ?</tokentext>
<sentencetext>OMFG you used alot of acronyms WTF man?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382256</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31403758</id>
	<title>Anonymous Coward.</title>
	<author>jotaeleemeese</author>
	<datestamp>1268079060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Trolling since 1998?</p></htmltext>
<tokenext>Trolling since 1998 ?</tokentext>
<sentencetext>Trolling since 1998?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382256</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382010</id>
	<title>Obvious</title>
	<author>anza</author>
	<datestamp>1267904040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Any idiot newb knows this. Whenever you raid and are not the right level, you invariably get wiped out.

Duh.</htmltext>
<tokenext>Any idiot newb knows this .
Whenever you raid and are not the right level , you invariably get wiped out .
Duh .</tokentext>
<sentencetext>Any idiot newb knows this.
Whenever you raid and are not the right level, you invariably get wiped out.
Duh.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31386234</id>
	<title>More than a little flawed</title>
	<author>Ropati</author>
	<datestamp>1267893720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Henry Newman may know SSD drives, but he doesn't know enterprise storage.  Henry, enterprise shops don't talk about MB/s unless they are streaming video or working on their laptop.</p><p>All IO in a storage-networked enterprise is random.  The most important IOs are usually small-block (databases).   There is no concept of MB/s of bandwidth except to gauge channel capacity.  Anyone who does enterprise storage works in IOPS.  SSD drives smoke for random IOPS, to the tune of 50x for writes and 200x for reads (MLC vs. same-size 15K RPM drives).  These are significant numbers.   Even if we lost 1/2 the write IOPS to wear leveling, that would be 25x faster.  Want your database to scream?</p><p>RAID controllers will only be able to do RAID 10.  Most RAID controllers can do RAID 10 in their sleep.  The bottleneck will now be the channels in and out of the controllers.  The first rollout of SSD storage in the enterprise will be direct-attached SSD trays on bus-attached controllers with the most external channels (bandwidth).</p><p>SSD drives are going to choke SAN channels.  In a couple of years, when administrators want to network their SSD drives, there will be a really big push to get better pipes in the SAN.  I wonder if InfiniBand will get back in the mix?</p><p>This kind of disruptive technology keeps us employed.</p></htmltext>
<tokenext>Henry Newman may know SSD drives but he does n't know enterprise storage .
Henry , enterprise shops do n't talk about MB/s unless they are streaming video or working on their laptop.All IO in the a storage networked enterprise are random .
Most important IOs are usually small block ( databases ) .
There is no concept of MB/s of bandwidth except to gauge channel capacity .
Any one who does enterprise storage works in IOPS .
SSD drives smoke for random IOPS to the tune of 50x for writes and 200x for reads ( MLC vs same size 15k RPM drives ) .
These are significant numbers .
Even if we lost 1/2 the write IOPS to wear leveling , that would be 25x faster .
Want your database to scream.RAID controllers will only be able to do RAID 10 .
Most RAID controllers can do RAID 10 in their sleep .
The bottle neck will now be the channels in and out of the controllers .
The first roll out of SSD storage in the enterprise will be direct attached SSD trays to bus attached controllers with the most external channels ( bandwidth ) .SSD drives are going to choke SAN channels .
In a couple of years when administrators want to network their SSD drives there will be a really big push to get better pipes in the SAN .
I wonder if inifiniband will get back in the mix ? This kind of disruptive technology keeps us employed .</tokentext>
<sentencetext>Henry Newman may know SSD drives but he doesn't know enterprise storage.
Henry, enterprise shops don't talk about MB/s unless they are streaming video or working on their laptop.All IO in the a storage networked enterprise are random.
Most important IOs are usually small block (databases).
There is no concept of MB/s of bandwidth except to gauge channel capacity.
Any one who does enterprise storage works in IOPS.
SSD drives smoke for random IOPS to the tune of 50x for writes and 200x for reads (MLC vs same size 15k RPM drives).
These are significant numbers.
Even if we lost 1/2 the write IOPS to wear leveling, that would be 25x faster.
Want your database to scream.RAID controllers will only be able to do RAID 10.
Most RAID controllers can do RAID 10 in their sleep.
The bottle neck will now be the channels in and out of the controllers.
The first roll out of SSD storage in the enterprise will be direct attached SSD trays to bus attached controllers with the most external channels (bandwidth).SSD drives are going to choke SAN channels.
In a couple of years when administrators want to network their SSD drives there will be a really big push to get better pipes in the SAN.
I wonder if inifiniband will get back in the mix?This kind of disruptive technology keeps us employed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688</parent>
</comment>
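Spelling out that comment's own multipliers (the 180 IOPS baseline is an assumed round number; the 50x/200x factors are the commenter's claims):

```python
# The comment's arithmetic, made explicit.  Baseline IOPS for a 15K RPM
# drive is an assumed round number; the multipliers are the commenter's.

hdd_iops = 180                 # assumed random IOPS for a 15K RPM drive
ssd_write = 50 * hdd_iops      # "50x for writes"  -> 9,000
ssd_read = 200 * hdd_iops      # "200x for reads"  -> 36,000

after_wear_leveling = ssd_write / 2   # "even if we lost 1/2 the write IOPS"
print(ssd_write, ssd_read, after_wear_leveling,
      f"-> still {after_wear_leveling / hdd_iops:.0f}x a 15K drive")
# Halved write IOPS (4,500) are still 25x the spinning disk.
```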
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381650</id>
	<title>fp</title>
	<author>Anonymous</author>
	<datestamp>1267900440000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>CmdrTaco can raid and wear out a 12-year old boy's ass in under 30 minutes.</htmltext>
<tokenext>CmdrTaco can raid and wear out a 12-year old boy 's ass in under 30 minutes .</tokentext>
<sentencetext>CmdrTaco can raid and wear out a 12-year old boy's ass in under 30 minutes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383396</id>
	<title>That Assumes SSD's will only be Flash</title>
	<author>Anonymous</author>
	<datestamp>1267869600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>How about PCM (Phase Change Memory)?<br>No wear leveling, much longer predicted life, possibly even higher density.<br>Most of the big players see it as their future.</p></htmltext>
<tokenext>How about PCM ( Phase Change Memory ) ? No wear leveling , much longer life predicted , possible even higher density.Most of the big players see as their future .</tokentext>
<sentencetext>How about PCM (Phase Change Memory)?No wear leveling, much longer life predicted, possible even higher density.Most of the big players see as their future.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382290</id>
	<title>ditch the controller</title>
	<author>bl8n8r</author>
	<datestamp>1267905780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Kernel-based software RAID or ZFS gives much better RAID performance, IMHO. The only reason I use hardware RAID is to make administration simpler.  I think there is much more benefit to be had letting the OS govern partition boundaries, chunk size and stripe alignment. Not to mention the dismal firmware upgrades supplied by closed-source offerings.</htmltext>
<tokenext>kernel based software raid or zfs gives much better raid performance IMHO .
The only reason I use hw raid is to make administration simpler .
I think there is much more benefit to be had letting the os govern partition boundaries , chunk size and stripe alignment .
Not to mention the dismal firmware upgrades supplied by closed source offerings .</tokentext>
<sentencetext>kernel based software raid or zfs gives much better raid performance IMHO.
The only reason I use hw raid is to make administration simpler.
I think there is much more benefit to be had letting the os govern partition boundaries, chunk size and stripe alignment.
Not to mention the dismal firmware upgrades supplied by closed source offerings.</sentencetext>
</comment>
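The partition-boundary and stripe-alignment bookkeeping this comment wants the OS to own reduces to two small formulas (the usual mdadm/ext4 convention: stride = chunk / filesystem block, stripe width = stride × data-bearing disks); the chunk size and disk counts below are example values:

```python
# Small calculator for the alignment parameters mentioned above.

def ext4_alignment(chunk_kib, fs_block_kib, total_disks, parity_disks):
    """Return (stride, stripe_width) in filesystem blocks."""
    data_disks = total_disks - parity_disks
    stride = chunk_kib // fs_block_kib          # blocks per chunk
    stripe_width = stride * data_disks          # blocks per full stripe
    return stride, stripe_width

# Example: 6-disk RAID-5 (one disk's worth of parity), 64 KiB chunk, 4 KiB blocks.
stride, width = ext4_alignment(chunk_kib=64, fs_block_kib=4,
                               total_disks=6, parity_disks=1)
print(f"mkfs.ext4 -E stride={stride},stripe-width={width} ...")
# -> stride=16, stripe-width=80.  Misalign this and writes straddle chunks,
# which hurts SSDs (extra erase blocks) as much as spinning disks.
```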
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381968</id>
	<title>Warcraft</title>
	<author>Anonymous</author>
	<datestamp>1267903560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Was I the only one who saw the words leveling, raid, and wipe, and spent several seconds thinking the story was somehow related to WoW?</p></htmltext>
<tokenext>Was I the only one who saw the words leveling , raid , and wipe , and spent several seconds thinking the story was somehow related to WoW ?</tokentext>
<sentencetext>Was I the only one who saw the words leveling, raid, and wipe, and spent several seconds thinking the story was somehow related to WoW?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384776</id>
	<title>Re:Correction:</title>
	<author>bertok</author>
	<datestamp>1267880700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Wear Leveling, RAID Can Wipe Out SSD Advantage <b>for enterprise.</b> </p><p>While it may not be efficient to slap together a platter of 16 SSDs, it <b>is</b> worthwhile to upgrade personal computers to use an SSD.</p></div><p>If there's a benefit, why wouldn't you upgrade your enterprise servers too?</p><p>We just built a "lab server" running ESXi 4, and instead of a SAN, we used 2x SSDs in a (stripe) RAID. The controller was some low-end LSI-chip-based one.</p><p>That thing was blazing fast -- faster than any SAN I have ever seen, and we were hitting it <i>hard</i>. Think six users simultaneously building VMs, installing operating systems, running backups AND restores, and even running database defrags.</p><p>It's possible that we weren't quite getting 'peak' performance from the SSDs, but nobody cared because we were still getting ridiculously good performance for 1/10th the cost of even a very low-end SAN.</p><p>Why wouldn't "enterprise" users want that kind of price/performance improvement?</p>
	</htmltext>
<tokenext>Wear Leveling , RAID Can Wipe Out SSD Advantage for enterprise .
While it may not be efficient to slap together a platter of 16 SSDs , it is worthwhile to upgrade personal computers to use an SSD.If there 's a benefit , why would n't you upgrade your enterprise servers too ? We just built a " lab server " running ESXi 4 , and instead of a SAN , we used 2x SSDs in a ( stripe ) RAID .
The controller was some low-end LSI chip based one.That thing was blazing fast -- faster than any SAN I have ever seen , and we were hitting it hard .
Think six users simultaneously building VMs , installing operating systems , running backups AND restores , and even running database defrags.It 's possible that we were n't quite getting 'peak ' performance from the SSDs , but nobody cared because were were still getting ridiculously good performance for 1/10th the cost of even a very low-end SAN.Why would n't " enterprise " users want that kind of price/performance improvement ?</tokentext>
<sentencetext>Wear Leveling, RAID Can Wipe Out SSD Advantage for enterprise.
While it may not be efficient to slap together a platter of 16 SSDs, it is worthwhile to upgrade personal computers to use an SSD.If there's a benefit, why wouldn't you upgrade your enterprise servers too?We just built a "lab server" running ESXi 4, and instead of a SAN, we used 2x SSDs in a (stripe) RAID.
The controller was some low-end LSI chip based one.That thing was blazing fast -- faster than any SAN I have ever seen, and we were hitting it hard.
Think six users simultaneously building VMs, installing operating systems, running backups AND restores, and even running database defrags.It's possible that we weren't quite getting 'peak' performance from the SSDs, but nobody cared because were were still getting ridiculously good performance for 1/10th the cost of even a very low-end SAN.Why wouldn't "enterprise" users want that kind of price/performance improvement?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381694</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31385562</id>
	<title>Re:RAID = Speed?</title>
	<author>Thundersnatch</author>
	<datestamp>1267886820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Obviously it wouldn't be cool if it was really slow, but isn't data redundancy the primary purpose?</p></div><p>Not really; performance has been the driver for the growth of companies like EMC. 99.999% availability is an absolute requirement as well. Say I have built a banking system. It absolutely, positively must be able to handle 30,000 transactions per second, and those transactions require data from over 20 TB of account data. The only way to do that with magnetic disks is by lashing a fuck-ton of them together in parallel. Until very recently, only high-end arrays from EMC, Hitachi, etc. could do this sort of thing.</p>
	</htmltext>
<tokenext>Obviously it would n't be cool if it was really slow , but is n't data redundancy the primary purpose ? Not really , performance has been the driver for the growth of companies like EMC .
99.999 \ % availability is an absolute requirement as well .
Say I have built a banking system .
It absolutely , positively must be able to handle 30000 transactions per second , and those transactions require data from over 20 TB of account data .
The only way to do that with magnetic disks us by lashing a fuck-ton of them together in parallel .
Until very recently , only high-end arrays from EMC , hitachi , etc .
could do this sort if thing .</tokentext>
<sentencetext>Obviously it wouldn't be cool if it was really slow, but isn't data redundancy the primary purpose?Not really, performance has been the driver for the growth of companies like EMC.
99.999\% availability is an absolute requirement as well.
Say I have built a banking system.
It absolutely, positively must be able to handle 30000 transactions per second, and those transactions require data from over 20 TB of account data.
The only way to do that with magnetic disks us by lashing a fuck-ton of them together in parallel.
Until very recently, only high-end arrays from EMC, hitachi, etc.
could do this sort if thing.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382102</parent>
</comment>
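A rough sizing sketch behind that banking example, with assumed per-transaction IO counts and per-disk IOPS:

```python
# How many spindles does 30,000 transactions/s take?  IOs per transaction,
# read/write split, and per-disk IOPS are assumptions for illustration.

tps = 30_000
ios_per_txn = 2            # assume one read + one write per transaction
disk_iops = 175            # assumed random IOPS for a 15K RPM drive
write_fraction = 0.5
mirror_penalty = 2         # RAID 10 turns each logical write into two physical ones

logical_iops = tps * ios_per_txn
physical_iops = (logical_iops * (1 - write_fraction)
                 + logical_iops * write_fraction * mirror_penalty)
print(f"{physical_iops:,.0f} physical IOPS -> ~{physical_iops / disk_iops:.0f} disks")
# ~514 disks of raw IOPS before hot spots -- hence the "fuck-ton of them
# in parallel" above, and the appeal of SSDs for exactly this workload.
```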
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384506</id>
	<title>Re:Raid controllers obsolete?</title>
	<author>Anonymous</author>
	<datestamp>1267878420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Try doing software RAID on a shared file system.  And I don't mean NFS "shared", I mean multiple-host shared access to the actual SCSI target LUNs.</p></htmltext>
<tokenext>Try doing software RAID on a shared file system .
And I do n't mean NFS " shared " , I mean multiple-host shared access to the actual SCSI target LUNs .</tokentext>
<sentencetext>Try doing software RAID on a shared file system.
And I don't mean NFS "shared", I mean multiple-host shared access to the actual SCSI target LUNs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381972</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382168
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384506
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381972
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382214
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381800
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31403802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31403758
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382256
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31385562
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382656
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381690
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31385794
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382256
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384324
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382102
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382248
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383260
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381972
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381922
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381718
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384776
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381694
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382050
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381718
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31386234
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382030
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381694
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383476
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381800
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_06_1650232_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382358
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381882
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381962
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31403802
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382256
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31403758
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31385794
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382102
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31385562
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384324
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381932
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381800
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383476
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382214
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381784
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381812
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382010
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381972
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384506
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383260
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381760
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382248
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382168
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381688
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382030
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31386234
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382358
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381694
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31384776
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31383194
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381718
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382050
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381922
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31381690
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382656
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382336
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_06_1650232.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_06_1650232.31382294
</commentlist>
</conversation>
