<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_05_27_2210226</id>
	<title>SATA 3.0 Release Paves the Way To 6Gb/sec Devices</title>
	<author>samzenpus</author>
	<datestamp>1243420740000</datestamp>
	<htmltext>An anonymous reader writes <i>"The Serial ATA International Organization (SATA-IO) has just <a href="http://techfragments.com/news/823/Hardware/SATA_3-0_Released_Paving_The_Way_To_6Gbsec_SATA_Devices.html">released the new Serial ATA Revision 3.0 specification</a>. With the new 3.0 specification, the path has been paved to enable future devices to transfer up to 6Gb/sec as well as provide enhancements to support multimedia applications. Like other SATA specifications, the 3.0 specification is backward compatible with earlier SATA products and devices. This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers' legacy SATA devices. This should make adoption of the new specification fast, like previous adoptions of SATA 2.0 (or 3Gb/sec) technology."</i></htmltext>
<tokentext>An anonymous reader writes " The Serial ATA International Organization ( SATA-IO ) has just released the new Serial ATA Revision 3.0 specification .
With the new 3.0 specification , the path has been paved to enable future devices to transfer up to 6Gb/sec as well as provide enhancements to support multimedia applications .
Like other SATA specifications , the 3.0 specification is backward compatible with earlier SATA products and devices .
This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers ' legacy SATA devices .
This should make adoption of the new specification fast , like previous adoptions of SATA 2.0 ( or 3Gb/sec ) technology .
"</tokentext>
<sentencetext>An anonymous reader writes "The Serial ATA International Organization (SATA-IO) has just released the new Serial ATA Revision 3.0 specification.
With the new 3.0 specification, the path has been paved to enable future devices to transfer up to 6Gb/sec as well as provide enhancements to support multimedia applications.
Like other SATA specifications, the 3.0 specification is backward compatible with earlier SATA products and devices.
This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers' legacy SATA devices.
This should make adoption of the new specification fast, like previous adoptions of SATA 2.0 (or 3Gb/sec) technology.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116679</id>
	<title>Re:What is the point?</title>
	<author>BeardedChimp</author>
	<datestamp>1243425180000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>Exactly, it's not like technology advances or <a href="http://www.tomshardware.com/reviews/15-years-of-hard-drive-history,1368-7.html" title="tomshardware.com" rel="nofollow">anything.</a> [tomshardware.com]</htmltext>
<tokentext>Exactly , it 's not like technology advances or anything .
[ tomshardware.com ]</tokentext>
<sentencetext>Exactly, it's not like technology advances or anything.
[tomshardware.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117845</id>
	<title>Question for wrath0fb0b</title>
	<author>Anonymous</author>
	<datestamp>1243432740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p><b>" The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees (the complete lack thereof) rather than the inability of the storage device to do 2 things at once (which, for a physical disk, is impossible anyway, meaning that it would have to back-convert into a serial process anyway)."</b> - by Wrath0fb0b (302444) on Wednesday May 27, @07:27PM (#28116993)</p></div><p>Great post by the by, found it informative (&amp; IF I could "mod you up" as an A/C, I would) - still, I have a question:</p><p><b>Wouldn't the use of Solid-State disks help "offset" the latency portion of this restriction you note, since they tremendously reduce latency (seek/access mainly)?</b></p><p>(Thanks for the answer - you actually seem QUITE knowledgeable on this subject material, &amp; that's a rarity in my book (even on THIS site, which is one of the better ones, as far as the "technical talent" around this place, vs. other forums online, imo @ least))</p><p>Sincerely,</p><p>APK</p><p>P.S.=&gt; Some background as to WHY I ask this:  I am an "avid user" for many years, of a "True SSD" here (not FLASH ram based, which is slower on writes &amp; yes, perhaps write-back caching CAN offset that, but I don't like the idea of wear levelling being needed to offset their short lifespan, vs. the type I use), since late 2002!</p><p>It's called a CENATEK "RocketDrive" (PCI 2.2 bus, PC-133 SDRAM, 2gb (can be spanned into 16gb between 4 of these units))... Thus, why I ask the question above, since I like these units quite a lot &amp; they can be used for a plethora of things!</p><p>See - @ home at least, I use for things like:</p><p>1.) Pagefile.sys placement<br>2.) 
ALL logging from the OS &amp; applications (where possible on both &amp; it is largely, e.g.-&gt; Windows Event Logs &amp; far more which I went into here -&gt; <a href="http://hardware.slashdot.org/comments.pl?sid=1100343&amp;cid=26573497" title="slashdot.org" rel="nofollow">http://hardware.slashdot.org/comments.pl?sid=1100343&amp;cid=26573497</a> [slashdot.org] in far more detail)<br>3.) %temp% &amp; %tmp% system-wide environment variable alterations to its 2nd partition<br>4.) Webbrowser cache location (IE, FireFox, &amp; Opera)</p><p>These not only access/seek faster, but also "offload" my main OS + Programs bearing C: drive here, making it, in essence, faster (since it is NOT burdened by those duties &amp; far more... AND, it helps stop fragmentation imposed by the clutter of those operations files as well!)</p><p>&amp; more...</p><p>(It is widely known that SSD's are tremendous for databased work. &amp; in various work environs, I have had such luck in using them for this type of application, mainly websites that are database driven, OR, for SQLServer or Oracle DB device placement (for temp ops/scratch tables only sometimes, since SQLServer since v. 6.5 or 7.0 began using System RAM for that afaik).</p><p>Also, it's been shown as effective for DB work, such as seen here from this review -&gt; <a href="http://techreport.com/articles.x/9312/7" title="techreport.com" rel="nofollow">http://techreport.com/articles.x/9312/7</a> [techreport.com] &amp; my own work back in 2001-2002 for SuperSpeed.com @ Ms Tech-Ed (SQLServer Performance Enhancement finalist 2 yrs. in a row, albeit, using a software-based mirroring back to HDD ware they produce called SuperDisk)...</p><p>AND, it works!</p><p>Anyhow/anyways, back onto my question above?</p><p>Well - I ask this, because I am looking to purchase a Gigabyte IRAM soon (faster bus in SATA 2.0, DDR Ram also vs. the unit I have now for my 2nd machine I am putting together lately)... 
but, if this new bus spec is coming?</p><p>Well, I would like to see a PCI-e implementation of such a unit, &amp; I wonder if it will also offset that which you state is a "stumbling block" in regards to multithreaded application design (which I have been doing since roughly 1996 in both shareware/freeware &amp; commercial apps + work apps, no stranger to it here, but I usually do what is known as "coarse multithreaded design" (where the data being worked on by separate threads is discrete &amp; separate datasets worked on by EACH independent thread of execution (rather than what I call 'fine grained multithreading' which is far harder, &amp; where you have 2-N threads working on the SAME dataset (tried it, ran into a LOT of locking conditions/race conditions))).</p><p>(I would like a PCI-e based unit, especially one based on this SATA 3.0 spec, as most likely it will have a larger bus bandwidth &amp; what-not, than the CENATEK RocketDrive I use, OR, the Gigabyte IRAM... &amp; the DDRDriveX1 was SUPPOSED to be such a unit, but it never made it into commercial production, too bad, see here -&gt; <a href="http://www.ddrdrive.com/" title="ddrdrive.com" rel="nofollow">http://www.ddrdrive.com/</a> [ddrdrive.com] but, with the advent of this new SATA 3.0 bus spec, maybe one day soon, we WILL see such an SSD!) apk</p></div>
	</htmltext>
<tokentext>" The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees ( the complete lack thereof ) rather than the inability of the storage device to do 2 things at once ( which , for a physical disk , is impossible anyway , meaning that it would have to back-convert into a serial process anyway ) .
" - by Wrath0fb0b ( 302444 ) on Wednesday May 27 , @ 07 : 27PM ( # 28116993 ) Great post by the by , found it informative ( &amp; IF I could " mod you up " as an A/C , I would ) - still , I have a question : Would n't the use of Solid-State disks help " offset " the latency portion of this restrictions you note , since they tremendously reduce latency ( seek/access mainly ) ?
( Thanks for the answer - you actually seem QUITE knowledgeable on this subject material , &amp; that 's a rarity in my book ( even on THIS site , which is one of the better ones , as far as the " technical talent " around this place , vs. other forums online , imo @ least ) ) Sincerely,APKP.S. = &gt; Some background as to WHY I ask this : I am an " avid user " for many years , of a " True SSD " here ( not FLASH ram based , which is slower on writes &amp; yes , perhaps write-back caching CAN offset that , but I do n't like the idea of wear levelling being needed to offset their short lifespan , vs. the type I use ) , since late 2002 ! It 's called a CENATEK " RocketDrive " ( PCI 2.2 bus , PC-133 SDRAM , 2gb ( can be spanned into 16gb between 4 of these units ) ) ... Thus , why I ask the question above , since I like these units quite a lot &amp; they can be used for a plethora of things ! See - @ home at least , I use for things like : 1 .
) Pagefile.sys placement2 .
) ALL logging from the OS &amp; applications ( where possible on both &amp; it is largely , e.g.- &gt; Windows Event Logs &amp; far more which I went into here - &gt; http : //hardware.slashdot.org/comments.pl ? sid = 1100343&amp;cid = 26573497 [ slashdot.org ] in far more detail ) 3 .
) % temp % &amp; % tmp % system-wide environment variable alterations to its 2nd partition4 .
) Webbrowser cache location ( IE , FireFox , &amp; Opera ) These not only access/seek faster , but also " offload " my main OS + Programs bearing C : drive here , making it , in essence , faster ( since it is NOT burdened by those duties &amp; far more... AND , it helps stop fragmentation imposed by the clutter of those operations files as well !
) &amp; more... ( It is widely known that SSD 's are tremendous for databased work .
&amp; in various work environs , I have had such luck in using them for this type of application , mainly websites that are database driven , OR , for SQLServer or Oracle DB device placement ( for temp ops/scratch tables only sometimes , since SQLServer since v. 6.5 or 7.0 began using System RAM for that afaik ) .Also , it 's been shown as effective for DB work , such as seen here from this review - &gt; http : //techreport.com/articles.x/9312/7 [ techreport.com ] &amp; my own work back in 2001-2002 for SuperSpeed.com @ Ms Tech-Ed ( SQLServer Performance Enhancement finalist 2 yrs .
in a row , albeit , using a software-based mirroring back to HDD ware they produce called SuperDisk ) ...AND , it works ! Anyhow/anyways , back onto my question above ? Well - I ask this , because I am looking to purchase a Gigabyte IRAM soon ( faster bus in SATA 2.0 , DDR Ram also vs. the unit I have now for my 2nd machine I am putting together lately ) ... but , if this new bus spec is coming ? Well , I would like to see a PCI-e implementation of such a unit , &amp; I wonder if it will also offset that which you state is a " stumbling block " in regards to multithreaded application design ( which I have been doing since roughly 1996 in both shareware/freeware &amp; commercial apps + work apps , no stranger to it here , but I usually do what is known as " coarse multithreaded design " ( where the data being worked on by separate threads is discrete &amp; separate datasets worked on by EACH independent thread of execution ( rather than what I call 'fine grained multithreading ' which is far harder , &amp; where you have 2-N threads working on the SAME dataset ( tried it , ran into a LOT of locking conditions/race conditions ) ) ) .
( I would like a PCI-e based unit , especially one based on this SATA 3.0 spec , as most likely it will have a larger bus bandwidth &amp; what-not , than the CENATEK RocketDrive I use , OR , the Gigabyte IRAM... &amp; the DDRDriveX1 was SUPPOSED to be such a unit , but it never made it into commercial production , too bad , see here - &gt; http : //www.ddrdrive.com/ [ ddrdrive.com ] but , with the advent of this new SATA 3.0 bus spec , maybe one day soon , we WILL see such an SSD !
) apk</tokentext>
<sentencetext>" The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees (the complete lack thereof) rather than the inability of the storage device to do 2 things at once (which, for a physical disk, is impossible anyway, meaning that it would have to back-convert into a serial process anyway).
" - by Wrath0fb0b (302444) on Wednesday May 27, @07:27PM (#28116993)Great post by the by, found it informative (&amp; IF I could "mod you up" as an A/C, I would) - still, I have a question:Wouldn't the use of Solid-State disks help "offset" the latency portion of this restrictions you note, since they tremendously reduce latency (seek/access mainly)?
(Thanks for the answer - you actually seem QUITE knowledgeable on this subject material, &amp; that's a rarity in my book (even on THIS site, which is one of the better ones, as far as the "technical talent" around this place, vs. other forums online, imo @ least))Sincerely,APKP.S.=&gt; Some background as to WHY I ask this:  I am an "avid user" for many years, of a "True SSD" here (not FLASH ram based, which is slower on writes &amp; yes, perhaps write-back caching CAN offset that, but I don't like the idea of wear levelling being needed to offset their short lifespan, vs. the type I use), since late 2002!It's called a CENATEK "RocketDrive" (PCI 2.2 bus, PC-133 SDRAM, 2gb (can be spanned into 16gb between 4 of these units))... Thus, why I ask the question above, since I like these units quite a lot &amp; they can be used for a plethora of things!See - @ home at least, I use for things like:1.
) Pagefile.sys placement2.
) ALL logging from the OS &amp; applications (where possible on both &amp; it is largely, e.g.-&gt; Windows Event Logs &amp; far more which I went into here -&gt; http://hardware.slashdot.org/comments.pl?sid=1100343&amp;cid=26573497 [slashdot.org] in far more detail)3.
) %temp% &amp; %tmp% system-wide environment variable alterations to its 2nd partition4.
) Webbrowser cache location (IE, FireFox, &amp; Opera)These not only access/seek faster, but also "offload" my main OS + Programs bearing C: drive here, making it, in essence, faster (since it is NOT burdened by those duties &amp; far more... AND, it helps stop fragmentation imposed by the clutter of those operations files as well!
)&amp; more...(It is widely known that SSD's are tremendous for databased work.
&amp; in various work environs, I have had such luck in using them for this type of application, mainly websites that are database driven, OR, for SQLServer or Oracle DB device placement (for temp ops/scratch tables only sometimes, since SQLServer since v. 6.5 or 7.0 began using System RAM for that afaik).Also, it's been shown as effective for DB work, such as seen here from this review -&gt; http://techreport.com/articles.x/9312/7 [techreport.com] &amp; my own work back in 2001-2002 for SuperSpeed.com @ Ms Tech-Ed (SQLServer Performance Enhancement finalist 2 yrs.
in a row, albeit, using a software-based mirroring back to HDD ware they produce called SuperDisk)...AND, it works!Anyhow/anyways, back onto my question above?Well - I ask this, because I am looking to purchase a Gigabyte IRAM soon (faster bus in SATA 2.0, DDR Ram also vs. the unit I have now for my 2nd machine I am putting together lately)... but, if this new bus spec is coming?Well, I would like to see a PCI-e implementation of such a unit, &amp; I wonder if it will also offset that which you state is a "stumbling block" in regards to multithreaded application design (which I have been doing since roughly 1996 in both shareware/freeware &amp; commercial apps + work apps, no stranger to it here, but I usually do what is known as "coarse multithreaded design" (where the data being worked on by separate threads is discrete &amp; separate datasets worked on by EACH independent thread of execution (rather than what I call 'fine grained multithreading' which is far harder, &amp; where you have 2-N threads working on the SAME dataset (tried it, ran into a LOT of locking conditions/race conditions))).
(I would like a PCI-e based unit, especially one based on this SATA 3.0 spec, as most likely it will have a larger bus bandwidth &amp; what-not, than the CENATEK RocketDrive I use, OR, the Gigabyte IRAM... &amp; the DDRDriveX1 was SUPPOSED to be such a unit, but it never made it into commercial production, too bad, see here -&gt; http://www.ddrdrive.com/ [ddrdrive.com] but, with the advent of this new SATA 3.0 bus spec, maybe one day soon, we WILL see such an SSD!
) apk
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993</parent>
</comment>
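The "coarse" vs. "fine-grained" multithreading distinction APK draws in the comment above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the thread: coarse-grained threads each own a disjoint slice of the data and never need a lock, while fine-grained threads all update one shared accumulator and must serialize every single update, which is where the locking and race conditions he mentions come from.

```python
import threading

# Coarse-grained: each thread owns a disjoint slice of the data and
# writes only to its own result slot, so no locking is required.
def coarse_sum(data, n_threads=4):
    data = list(data)
    results = [0] * n_threads
    def worker(i):
        results[i] = sum(data[i::n_threads])  # disjoint slice per thread
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results)

# Fine-grained: every thread updates the SAME accumulator, so each
# update must be guarded by a lock -- the contention described above.
def fine_sum(data, n_threads=4):
    data = list(data)
    total = 0
    lock = threading.Lock()
    def worker(i):
        nonlocal total
        for x in data[i::n_threads]:
            with lock:            # serializes every single update
                total += x
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total

print(coarse_sum(range(1000)))  # 499500
print(fine_sum(range(1000)))    # 499500
```

Both versions compute the same answer; the fine-grained one simply pays for a lock acquisition per element, which is why partitioning the data up front is usually the easier design.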
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119681</id>
	<title>Re:Why just double?</title>
	<author>Cochonou</author>
	<datestamp>1243450140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>SAS also jumped from 3 Gb/s to 6 Gb/s last year. So, it seems to be pretty much the norm for hard drive interfaces.</htmltext>
<tokentext>SAS also jumped from 3 Gb/s to 6 Gb/s last year .
So , it seems to be pretty much the norm for hard drive interfaces .</tokentext>
<sentencetext>SAS also jumped from 3 Gb/s to 6 Gb/s last year.
So, it seems to be pretty much the norm for hard drive interfaces.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118067</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</id>
	<title>I hope they make the plug stronger</title>
	<author>Anonymous</author>
	<datestamp>1243425780000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>I've lost 3 drives due to plugs breaking off into the SATA ports on the 3.5" drives</p></htmltext>
<tokentext>I 've lost 3 drives due to plugs breaking off into the SATA ports on the 3.5 " drives</tokentext>
<sentencetext>I've lost 3 drives due to plugs breaking off into the SATA ports on the 3.5" drives</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117195</id>
	<title>SATA 3.0?</title>
	<author>Anonymous</author>
	<datestamp>1243428420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm still using MFM, you insensitive clod!</p></htmltext>
<tokentext>I 'm still using MFM , you insensitive clod !</tokentext>
<sentencetext>I'm still using MFM, you insensitive clod!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116909</id>
	<title>Re:Only one problem with this:</title>
	<author>Anonymous</author>
	<datestamp>1243426500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>how about they put some equivalent effort into speeding up the actual output of devices that use the interface?</i></p><p>SSDs use the interface, and they're getting close to hitting the 300MBps throughput mark (maximum after sata overhead).</p><p>There are also several external raid enclosures that use eSATA and appear as a single high-throughput drive to the onboard sata controller.</p></htmltext>
<tokentext>how about they put some equivalent effort into speeding up the actual output of devices that use the interface ? SSDs use the interface , and they 're getting close to hitting the 300MBps throughput mark ( maximum after sata overhead ) .There are also several external raid enclosures that use eSATA and appear as a single high-throughput drive to the onboard sata controller .</tokentext>
<sentencetext>how about they put some equivalent effort into speeding up the actual output of devices that use the interface?SSDs use the interface, and they're getting close to hitting the 300MBps throughput mark (maximum after sata overhead).There are also several external raid enclosures that use eSATA and appear as a single high-throughput drive to the onboard sata controller.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116895</id>
	<title>Re:Only one problem with this:</title>
	<author>vadim_t</author>
	<datestamp>1243426440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For hard disks, making them much faster isn't really possible. The disk needs to spin faster, or the information needs to be packed more tightly. Currently advances are mostly in the packing, but aren't reaching yet even SATA II levels.</p><p>Hard disks will get a slight benefit though because they have a cache and they can transfer data from or to it faster than the platter can handle.</p><p>For SSDs, even exceeding SATA 3 is perfectly possible by simply internally parallelizing requests. Also, for SSDs, the interface's latency is probably a fairly significant part of the time it takes to service a request.</p></htmltext>
<tokentext>For hard disks , making them much faster is n't really possible .
The disk needs to spin faster , or the information needs to be packed more tightly .
Currently advances are mostly in the packing , but are n't reaching yet even SATA II levels.Hard disks will get a slight benefit though because they have a cache and they can transfer data from or to it faster than the platter can handle.For SSDs , even exceeding SATA 3 is perfectly possible by simply internally parallelizing requests .
Also , for SSDs , the interface 's latency is probably a fairly significant part of the time it takes to service a request .</tokentext>
<sentencetext>For hard disks, making them much faster isn't really possible.
The disk needs to spin faster, or the information needs to be packed more tightly.
Currently advances are mostly in the packing, but aren't reaching yet even SATA II levels.Hard disks will get a slight benefit though because they have a cache and they can transfer data from or to it faster than the platter can handle.For SSDs, even exceeding SATA 3 is perfectly possible by simply internally parallelizing requests.
Also, for SSDs, the interface's latency is probably a fairly significant part of the time it takes to service a request.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543</id>
	<title>Theoretical != Real World speeds</title>
	<author>Anonymous</author>
	<datestamp>1243424640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>0</modscore>
	<htmltext><p>It's a pity that while SATA 2.0 has a theoretical speed of 3Gb/sec the real world speeds are around 20-25MB/sec.</p></htmltext>
<tokentext>It 's a pity that while SATA 2.0 has a theoretical speed of 3Gb/sec the real world speeds are around 20-25MB/sec .</tokentext>
<sentencetext>It's a pity that while SATA 2.0 has a theoretical speed of 3Gb/sec the real world speeds are around 20-25MB/sec.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118559</id>
	<title>Re:isn't it time for</title>
	<author>Guspaz</author>
	<datestamp>1243438740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Yes. I do. My single drive has an average sustained transfer rate of 230MB/s. A SATA1 bus would severely constrain the performance of my drive (an Intel x25-m).</p><p>There are numerous other SSDs on the market whose manufacturers focused on higher sustained performance rather than random access performance that already hit the 300MB/s wall of SATA2. And I expect that Intel's next series of drives will do the same. SATA2 is woefully unprepared for the very near future, let alone the present; it's slow enough to already be constraining high-end performance.</p></htmltext>
<tokentext>Yes .
I do .
My single drive has an average sustained transfer rate of 230MB/s .
A SATA1 bus would severely constrain the performance of my drive ( an Intel x25-m ) .There are numerous other SSDs on the market whose manufacturers focused on higher sustained performance rather than random access performance that already hit the 300MB/s wall of SATA2 .
And I expect that Intel 's next series of drives will do the same .
SATA2 is woefully unprepared for the very near future , let alone the present ; it 's slow enough to already be constraining high-end performance .</tokentext>
<sentencetext>Yes.
I do.
My single drive has an average sustained transfer rate of 230MB/s.
A SATA1 bus would severely constrain the performance of my drive (an Intel x25-m).There are numerous other SSDs on the market whose manufacturers focused on higher sustained performance rather than random access performance that already hit the 300MB/s wall of SATA2.
And I expect that Intel's next series of drives will do the same.
SATA2 is woefully unprepared for the very near future, let alone the present; it's slow enough to already be constraining high-end performance.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118329</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243436940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My SSD saturates my SATA1/150 interface.</p></htmltext>
<tokentext>My SSD saturates my SATA1/150 interface .</tokentext>
<sentencetext>My SSD saturates my SATA1/150 interface.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120669</id>
	<title>Re:isn't it time for</title>
	<author>nausicaa</author>
	<datestamp>1243504800000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Well, no, not a SINGLE disk.. But, hey, I'm using a backplane/port-multiplier combo that allows me to connect 5 drives to a single SATA-connector..<br>(I think someone actually mentioned something like this, far above in the earlier comments)</p><p>Besides, having interfaces be ahead of the drives, performance-wise, is not a bad thing, it's actually a very good idea, so that drives can advance without hitting the roof..</p></htmltext>
<tokentext>Well , no , not a SINGLE disk.. But , hey , I 'm using a backplane/port-multiplier combo that allows me to connect 5 drives to a single SATA-connector.. ( I think someone actually mentioned something like this , far above in the earlier comments ) Besides , having interfaces be ahead of the drives , performance-wise , is not a bad thing , it 's actually a very good idea , so that drives can advance without hitting the roof. .</tokentext>
<sentencetext>Well, no, not a SINGLE disk.. But, hey, I'm using a backplane/port-multiplier combo that allows me to connect 5 drives to a single SATA-connector..(I think someone actually mentioned something like this, far above in the earlier comments)Besides, having interfaces be ahead of the drives, performance-wise, is not a bad thing, it's actually a very good idea, so that drives can advance without hitting the roof..</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
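For a sense of why the interface headroom nausicaa defends matters behind a port multiplier, consider the per-drive share of the host link when all attached drives stream at once. A rough sketch (evenly split payload bandwidth; real multipliers add their own protocol overhead):

```python
# One host link's payload bandwidth, split evenly across drives that
# are all transferring at the same time behind a port multiplier.
def per_drive_share_mb(link_payload_mb_per_s, n_drives):
    return link_payload_mb_per_s / n_drives

# Five drives behind one SATA 2.0 link (~300 MB/s payload after 8b/10b)
print(per_drive_share_mb(300, 5))  # 60.0 MB/s each
# The same five drives behind a SATA 3.0 link (~600 MB/s payload)
print(per_drive_share_mb(600, 5))  # 120.0 MB/s each
```

At 60MB/s per drive even 2009-era platter drives could saturate the shared 3Gb/sec link, so doubling the link rate directly benefits multi-drive setups like this even before any single drive can use 6Gb/sec on its own.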
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118281</id>
	<title>Re:isn't it time for</title>
	<author>gmuslera</author>
	<datestamp>1243436520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What about one of <a href="http://techreport.com/articles.x/16255" title="techreport.com">these</a> [techreport.com]? They could probably take good advantage of a 6 Gb/sec speed</htmltext>
<tokentext>What about one of these [ techreport.com ] ?
They could probably take good advantage of a 6 Gb/sec speed</tokentext>
<sentencetext>What about one of these [techreport.com]?
They could probably take good advantage of a 6 Gb/sec speed</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117281</id>
	<title>Re:isn't it time for</title>
	<author>Ilgaz</author>
	<datestamp>1243428900000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Yea, while swearing at Apple 24/7 for giving SATA1 with Quad G5 Workstation (most expensive G5), I purchased a very nice performing Western Digital Caviar 1TB drive having 32MB cache. It took a while to figure that I can't really saturate SATA1 bus, even with "fill with zeros" (format) of OS X, it went up to 140MB/sec. Of course, Apple expects me to buy an ATTO-like high-end card if I need more bandwidth.</p><p>What matters is SSD, that is why they release the spec right now. If you have enough money to setup a very high end (not toy-like) SSD right now, you will see SATA2 is the bottleneck. People were already talking about a different standard or even getting rid of SATA altogether for them.</p></htmltext>
<tokentext>Yea , while swearing at Apple 24/7 for giving SATA1 with Quad G5 Workstation ( most expensive G5 ) , I purchased a very nice performing Western Digital Caviar 1TB drive having 32MB cache .
It took a while to figure that I ca n't really saturate SATA1 bus , even with " fill with zeros " ( format ) of OS X , it went up to 140MB/sec .
Of course , Apple expects me to buy a ATTO like high end card if I need more bandwidth.What matters is SSD , that is why they release the spec right now .
If you have enough money to setup a very high end ( not toy-like ) SSD right now , you will see SATA2 is the bottleneck .
People were already talking about a different standard or even getting rid of SATA alltogether for them .</tokentext>
<sentencetext>Yea, while swearing at Apple 24/7 for giving SATA1 with Quad G5 Workstation (most expensive G5), I purchased a very nice performing Western Digital Caviar 1TB drive having 32MB cache.
It took a while to figure that I can't really saturate SATA1 bus, even with "fill with zeros" (format) of OS X, it went up to 140MB/sec.
Of course, Apple expects me to buy an ATTO-like high end card if I need more bandwidth.
What matters is SSD, that is why they release the spec right now.
If you have enough money to setup a very high end (not toy-like) SSD right now, you will see SATA2 is the bottleneck.
People were already talking about a different standard or even getting rid of SATA alltogether for them.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785</id>
	<title>Only one problem with this:</title>
	<author>macraig</author>
	<datestamp>1243425780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>We don't even have any actual 3.0Gbps disk drives yet.  They're upgrading the interface yet again when we have barely even got to the point of saturating the one from TWO generations ago (with magnetic media, anyway).</p><p>The industry has largely been selling SATA II devices to unwitting consumers based on the perceived promise of 3Gbps performance, which of course no one has been getting.</p><p>Instead of obsessing over the interface like this, how about they put some equivalent effort into speeding up the actual output of the devices that use the interface?</p></htmltext>
<tokenext>we do n't even have any actual 3.0Gbps disk drives yet .
They 're upgrading the interface yet again when we have barely even got to the point of saturating the one from TWO generations ago ( with magnetic media anyway ) .The industry has largely been selling SATA II devices to unwitting consumers based on the perceived promise of 3GBps performance , which of course no one has been getting.Instead of obsessing over the interface like this , how about they put some equivalent effort into speeding up the actual output of devices that use the interface ?</tokentext>
<sentencetext>we don't even have any actual 3.0Gbps disk drives yet.
They're upgrading the interface yet again when we have barely even got to the point of saturating the one from TWO generations ago (with magnetic media anyway).
The industry has largely been selling SATA II devices to unwitting consumers based on the perceived promise of 3Gbps performance, which of course no one has been getting.
Instead of obsessing over the interface like this, how about they put some equivalent effort into speeding up the actual output of devices that use the interface?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117049</id>
	<title>3.0</title>
	<author>Anonymous</author>
	<datestamp>1243427520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Damn marketing junk. WTF is 3.0? Why not 3?</p></htmltext>
<tokenext>Damn marketing junk .
WTF is 3.0 ?
Why not 3 ?</tokentext>
<sentencetext>Damn marketing junk.
WTF is 3.0?
Why not 3?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117311</id>
	<title>Re:Ah!</title>
	<author>Ilgaz</author>
	<datestamp>1243429140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You don't need to buy a new system. Well, even if you need SATA3 bandwidth, regular companies will release interface cards which will perform better than the "comes on the mainboard" ones. I'd prefer a dedicated, configurable SATA card with its own cache over that dumb chip on the mainboard anytime.</p><p>Of course, if you're talking about a laptop, it's a different matter.</p></htmltext>
<tokenext>You do n't need to buy new system .
Well , even if you need SATA3 bandwidth , regular companies will release interface cards which will be better performing than " coming in mainboard " ones .
I 'd prefer a cache having , dedicated and configurable SATA card instead of that dumb chip on mainboard anytime.Of course if you talk about laptop , it is a different matter .</tokentext>
<sentencetext>You don't need to buy new system.
Well, even if you need SATA3 bandwidth, regular companies will release interface cards which will be better performing than "coming in mainboard" ones.
I'd prefer a cache having, dedicated and configurable SATA card instead of that dumb chip on mainboard anytime.
Of course if you talk about laptop, it is a different matter.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116615</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116775</id>
	<title>This is NOT SATA 3.0, children! Smarten up.</title>
	<author>Anonymous</author>
	<datestamp>1243425720000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>http://www.serialata.org/developers/naming_guidelines.asp</p><p>Here's a clue: If you have to post a web page explaining the proper way to refer to your products, your products are poorly named.</p><p>Here's another clue: If there's a shorter/easier/faster way to refer to your product, people are going to go with that. Insisting that they do otherwise indicates delusions of grandeur.</p><p>Get the hell over it already.</p></htmltext>
<tokenext>http : //www.serialata.org/developers/naming \ _guidelines.aspHere 's a clue : If you have to post a web page explaining the proper way to refer to your products , your products are poorly named.Here 's another clue : If there 's a shorter/easier/faster way to refer to your product , people are going to go with that .
Insisting that they do otherwise indicates delusions of grandeur.Get the hell over it already .</tokentext>
<sentencetext>http://www.serialata.org/developers/naming_guidelines.asp
Here's a clue: If you have to post a web page explaining the proper way to refer to your products, your products are poorly named.
Here's another clue: If there's a shorter/easier/faster way to refer to your product, people are going to go with that.
Insisting that they do otherwise indicates delusions of grandeur.
Get the hell over it already.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28127743</id>
	<title>SATA 3.0 better have a better connector</title>
	<author>Anonymous</author>
	<datestamp>1243540380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My SATA drive power connector keeps breaking off - who the hell came up with these crappy connectors?</p></htmltext>
<tokenext>My SATA drive connector ( power ) keeps broken off - who the hell came up with these crappy connector ?</tokentext>
<sentencetext>My SATA drive power connector keeps breaking off - who the hell came up with these crappy connectors?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116589</id>
	<title>Re:Theoretical != Real World speeds</title>
	<author>0100010001010011</author>
	<datestamp>1243424820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Gb != GB. Divide by 8.</p><p>And you should check your drive settings. My old IDE drives beat 20MB/s. I just checked my newest SATA drive and got 113MB/sec in hdparm.</p></htmltext>
<tokenext>Gb ! = GB .
Divide by 8.And you should beck your drive settings .
My old IDE drives beat 20MB/s .
I just checked my newest SATA drive and I got 113MB/sec in hdparm .</tokentext>
<sentencetext>Gb!=GB.
Divide by 8.
And you should check your drive settings.
My old IDE drives beat 20MB/s.
I just checked my newest SATA drive and I got 113MB/sec in hdparm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543</parent>
</comment>
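The Gb-versus-GB arithmetic in the comment above can be sketched out. One caveat worth adding: the naive divide-by-8 overstates what SATA actually delivers, because the link uses 8b/10b line coding (10 line bits per payload byte). The function names below are illustrative, not from any real API.

```python
# Sketch of the Gb-vs-GB arithmetic discussed above.
# Assumption: SATA uses 8b/10b line coding, so 10 line bits carry one
# 8-bit payload byte; the naive bits-to-bytes conversion divides by 8.

def naive_mb_per_s(gbit_per_s):
    """Bits to bytes the naive way: divide by 8."""
    return gbit_per_s * 1000 / 8

def sata_payload_mb_per_s(gbit_per_s):
    """Line rate to payload bandwidth under 8b/10b coding: divide by 10."""
    return gbit_per_s * 1000 / 10

print(naive_mb_per_s(3.0))          # 375.0
print(sata_payload_mb_per_s(3.0))   # 300.0 -- SATA 2.0 payload ceiling
print(sata_payload_mb_per_s(6.0))   # 600.0 -- SATA 3.0 payload ceiling
```

By either conversion, the 113MB/sec hdparm figure quoted above sits well under the 3Gb/s link's ceiling, which is the point being made.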
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28123803</id>
	<title>Re:Theoretical != Real World speeds</title>
	<author>Anonymous</author>
	<datestamp>1243526220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I had been starting to doubt lately whether there really was a difference between B and b; it seems nobody knows that anymore.  Thanks for the reassurance!</p></htmltext>
<tokenext>I had been starting to doubt myself lately that there was an actual difference between B and b ; seems nobody knows that any more .
Thanks for the reassurance !</tokentext>
<sentencetext>I had been starting to doubt myself lately that there was an actual difference between B and b; seems nobody knows that any more.
Thanks for the reassurance!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117001</id>
	<title>Re:What is the point?</title>
	<author>Jah-Wren Ryel</author>
	<datestamp>1243427280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>No current hard disk or even SSD can do 3Gb/sec so what is the point?</p></div><p> <a href="http://gizmodo.com/5168424/fusion+io-iodrive-duo-is-the-worlds-fastest-ssd" title="gizmodo.com">Oh yeah?</a> [gizmodo.com]</p>
	</htmltext>
<tokenext>No current hard disk or even SSD can do 3Gb/sec so what is the point ?
Oh yeah ?
[ gizmodo.com ]</tokentext>
<sentencetext>No current hard disk or even SSD can do 3Gb/sec so what is the point?
Oh yeah?
[gizmodo.com]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569</id>
	<title>SSD</title>
	<author>Anonymous</author>
	<datestamp>1243424760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If my understanding of the technology is correct, the seek time on most hard drives already limits access speed to well below 3Gb/sec.  Would this rely on a transition to Solid State Drives for any noticeable difference in performance?</p></htmltext>
<tokenext>If my understanding of the technology is correct , the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec .
Would this rely on a transition to Solid State Drives for any noticeable difference in performance ?</tokentext>
<sentencetext>If my understanding of the technology is correct, the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec.
Would this rely on a transition to Solid State Drives for any noticeable difference in performance?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117465</id>
	<title>Not impressed</title>
	<author>SpitfireSMS</author>
	<datestamp>1243430220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is only marginally faster than the new USB 3.0 spec, at 4.9Gbit/s...</p><p>I see more headway being made in the flash storage area.<br>I really doubt hard drives as we know them will last another couple of years.<br>With SSDs and flash being faster, it only makes sense.</p></htmltext>
<tokenext>This is only a margin faster than the new USB 3.0 spec , at 4.9Gbits...I see more headway being made in the flash storage area.I really doubt hard drives as we know it will last another couple years.With SSDs and flash being faster , it only makes sense</tokentext>
<sentencetext>This is only a margin faster than the new USB 3.0 spec, at 4.9Gbits...
I see more headway being made in the flash storage area.
I really doubt hard drives as we know it will last another couple years.
With SSDs and flash being faster, it only makes sense</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118163</id>
	<title>6 Gbit/s, not Gb(yte)/s...</title>
	<author>Anonymous</author>
	<datestamp>1243435440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Jackasses.</p></htmltext>
<tokenext>Jackasses .</tokentext>
<sentencetext>Jackasses.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28129453</id>
	<title>Not fast enough I suspect</title>
	<author>BloodyIron</author>
	<datestamp>1243502280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>6Gb/s is pretty good, but I suspect that as SSDs accelerate in development, the 6Gb/s limit will be reached before the next generation of SATA. How long has SATA 3Gb/s been around? Quite a while...</p></htmltext>
<tokenext>6Gb/s is pretty good , but I suspect as SSD 's accelerate in development that the 6Gb/s limit will be reached before the next generation of SATA .
How long has SATA 3Gb/s been around ?
Quite a while.. .</tokentext>
<sentencetext>6Gb/s is pretty good, but I suspect as SSD's accelerate in development that the 6Gb/s limit will be reached before the next generation of SATA.
How long has SATA 3Gb/s been around?
Quite a while...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28125003</id>
	<title>Re:Ah!</title>
	<author>default luser</author>
	<datestamp>1243531200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yup, SATA 3.0 and USB 3 are both going to be released in bulk soon.  I'd definitely buy a system with these blazing new I/O options.</p></htmltext>
<tokenext>Yup , SATA 3.0 plus USB 3 are both going to be released in-bulk soon .
I 'd definitely buy a system with these blazing new I/O options .</tokentext>
<sentencetext>Yup, SATA 3.0 plus USB 3 are both going to be released in-bulk soon.
I'd definitely buy a system with these blazing new I/O options.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116615</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116695</id>
	<title>Worth noting</title>
	<author>earnest murderer</author>
	<datestamp>1243425240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>The spec, as we have seen with most other transfer specs, has little to do with real-world device designs. Hardware interfaces (much less devices) languish in "has to cost less than x per part" hell... But you bet your ass they'll put a "SATA 3.0, up to 6GB per second" label on it even though the actual device isn't designed to transfer more than a fifth (peak) of the spec. data rate.</p></htmltext>
<tokenext>The spec as we have seen with most other transfer specs have little to do with real world device designs .
Hardware interfaces ( much less devices ) languish in the " has to cost less than x per part " hell... But you bet your ass they 'll put a SATA 3.0 up to 6GB per second label even though the actual device is n't designed to transfer more than a fifth ( peak ) of the spec .
data rate .</tokentext>
<sentencetext>The spec, as we have seen with most other transfer specs, has little to do with real-world device designs.
Hardware interfaces (much less devices) languish in the "has to cost less than x per part" hell... But you bet your ass they'll put a SATA 3.0 up to 6GB per second label even though the actual device isn't designed to transfer more than a fifth (peak) of the spec. data rate.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243427220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Why? Do you have a hard drive that can even saturate a SATA I bus?</p></htmltext>
<tokenext>Why ?
Do you have a hard drive that can even saturate a SATA I bus ?</tokentext>
<sentencetext>Why?
Do you have a hard drive that can even saturate a SATA I bus?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521</id>
	<title>isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243424580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext>isn't it about time for us to switch to SAS? (Serial Attached SCSI)</htmltext>
<tokenext>is n't it about time for us to switch to SAS ?
( Serial Attached SCSI )</tokentext>
<sentencetext>isn't it about time for us to switch to SAS?
(Serial Attached SCSI)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118747</id>
	<title>Re:Theoretical != Real World speeds</title>
	<author>symbolset</author>
	<datestamp>1243440360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What you're looking for is "SATA Expander". It allows you to connect several drives to one 3Gb/s SATA port.  The drives, the expander and the controller must all be compatible and support this mode.  But yes, you can saturate a single SATA port using only spinning drives if you want to.</p></htmltext>
<tokenext>What you 're looking for is " SATA Expander " .
It allows you to connect several drives to one 3Gb/s SATA port .
The drives , the expander and the controller must all be compatible and support this mode .
But yes , you can saturate a single SATA port using only spinning drives if you want to .</tokentext>
<sentencetext>What you're looking for is "SATA Expander".
It allows you to connect several drives to one 3Gb/s SATA port.
The drives, the expander and the controller must all be compatible and support this mode.
But yes, you can saturate a single SATA port using only spinning drives if you want to.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653</parent>
</comment>
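The aggregation claim above can be put in numbers. The per-drive throughput here is an assumption borrowed from the 113MB/s hdparm figure quoted elsewhere in the thread, and the payload ceilings assume SATA's 8b/10b coding (300MB/s and 600MB/s for the 3Gb/s and 6Gb/s links).

```python
import math

# How many spinning drives behind an expander/port multiplier would it
# take to saturate one SATA link?  Assumed per-drive sequential rate:
# ~113 MB/s, the hdparm figure quoted elsewhere in this thread.

def drives_to_saturate(link_payload_mb_s, per_drive_mb_s=113):
    """Smallest whole number of drives whose combined rate fills the link."""
    return math.ceil(link_payload_mb_s / per_drive_mb_s)

print(drives_to_saturate(300))  # 3 drives fill a 3Gb/s link's 300MB/s payload
print(drives_to_saturate(600))  # 6 drives fill a 6Gb/s link's 600MB/s payload
```

So even with magnetic media, a handful of drives on one port is enough to make the faster link matter.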
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118433</id>
	<title>Re:isn't it time for</title>
	<author>AHuxley</author>
	<datestamp>1243437840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Who dares windows.<br>
from Qui audet adipiscitur</htmltext>
<tokenext>Who dares windows .
from Qui audet adipiscitur</tokentext>
<sentencetext>Who dares windows.
from Qui audet adipiscitur</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117379</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>grommit</author>
	<datestamp>1243429620000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>Maybe you should stop using a hammer when plugging in a new hard drive?</p></htmltext>
<tokenext>Maybe you should stop using a hammer when plugging in a new hard drive ?</tokentext>
<sentencetext>Maybe you should stop using a hammer when plugging in a new hard drive?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120205</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243543020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yes I have. They're called SSDs. I'm currently at 220MB/s, soon 250MB/s and maybe more.</p><p>I personally don't care at all about the SATA 3.0 spec, because by the time it is implemented in motherboards, SSDs will already be ready to hit that spec as a limit.</p><p>Enhancing by a factor of 2 every what? 2-3 years? That's pathetic.</p><p>Why not a spec for 12Gb/sec now? Why not 60Gb/sec? Imagine we had gone from 10Mb/s LAN to 20Mb/s, then 40Mb/s. Happily, it was powers of 10, so we have 1000Mb/s now.</p><p>This spec will be outdated when it's finally implemented. SSDs will move more and more to PCIe connections directly, and 1GB/s SSDs will be affordable high end by the end of the year (they exist even now).</p><p>A spec should be something future proof. This spec isn't. SSDs and HDD RAIDs can already surpass that spec easily.</p></htmltext>
<tokenext>Yes I have .
They 're called SSD .
I 'm currently at 220MB/s , soon 250MB/s and maybe bigger.I personally do n't care at all about the SATA 3.0 spec , because the time it is implemented in motherboards , SSDs will allready be ready to have that spec as a limit.enhancing by a factor 2 all what ?
2-3 years ?
that 's pathetic.why not a spec for 12Gb/sec yet , now ?
why not 60Gb/sec ?
imagine we would have 10Mb/s LAN , then 20Mb/s , then 40Mb/s .
happily , it was powers of 10 , so we have 1000Mb/s now.this spec will be outdated when it 's finally implemented .
ssd 's will move more and more to pcie connections directly , and 1GB/s ssd 's will be affordable high end by the end of the year ( they exist even now ) .a spec should be something future proof .
this spec is n't .
ssds and hdd-raids can allready surpass that spec easily .</tokentext>
<sentencetext>Yes I have.
They're called SSD.
I'm currently at 220MB/s, soon 250MB/s and maybe more.
I personally don't care at all about the SATA 3.0 spec, because by the time it is implemented in motherboards, SSDs will already be ready to hit that spec as a limit.
Enhancing by a factor of 2 every what?
2-3 years?
That's pathetic.
Why not a spec for 12Gb/sec now?
Why not 60Gb/sec?
Imagine we had gone from 10Mb/s LAN to 20Mb/s, then 40Mb/s.
Happily, it was powers of 10, so we have 1000Mb/s now.
This spec will be outdated when it's finally implemented.
SSDs will move more and more to PCIe connections directly, and 1GB/s SSDs will be affordable high end by the end of the year (they exist even now).
A spec should be something future proof.
This spec isn't.
SSDs and HDD RAIDs can already surpass that spec easily.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116741</id>
	<title>Re:What is the point?</title>
	<author>Wesley Felter</author>
	<datestamp>1243425480000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Current SSDs are very close to the SATA 2.0 limit and the performance of flash is about to double thanks to ONFI 2.0, so we can expect SSDs to quickly adopt SATA 3.0.</p></htmltext>
<tokenext>Current SSDs are very close to the SATA 2.0 limit and the performance of flash is about to double thanks to ONFI 2.0 , so we can expect SSDs to quickly adopt SATA 3.0 .</tokentext>
<sentencetext>Current SSDs are very close to the SATA 2.0 limit and the performance of flash is about to double thanks to ONFI 2.0, so we can expect SSDs to quickly adopt SATA 3.0.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117805</id>
	<title>I love hard drive technology.....</title>
	<author>ZosX</author>
	<datestamp>1243432500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Today at work a brand new 1TB Seagate came in. I went over to my machine to breathe life back into it, only to find out that it was a 32 megabyte drive according to Windows. Immediately the cache sprang to mind: the drive is actually reporting its cache as the drive capacity. Well... hell. At first I thought it was just DOA with corrupt firmware, but after some googling it turns out you can actually reset the size that the drive reports with LBA. Hopefully I won't have too many other problems. Not a big fan of the newer Seagates, but my boss seems to be going for whatever is cheapest these days...<nobr> <wbr></nobr>:/</p><p>I would love to get away from complex mechanical drives as a storage medium. Can't someone just make a solid-state cube that will hold a petabyte (no "petabyte" in Mozilla's spell checker?? For shame!) and can withstand being written to millions of times?</p></htmltext>
<tokenext>Today at work a brand new 1TB seagate came in .
I went over to my machine to breathe life back into it to find out that it was instead a 32 megabyte drive according to Windows .
Immediately the cache sprang to mind .
The drive actually is reporting the cache as the actual drive .
Well...hell. At first I thought it was just DOA with corrupt firmware , but after some googling you can actually reset the size that the drive reports with LBA .
Hopefully I wo n't have too many other problems .
Not a big fan of the newer seagates , but my boss seems to be going for whatever is cheapest these days.... : /I would love to get away from complex mechanical drives as a storage medium .
Ca n't someone just make some solid state cube that will hold a petabyte ( no petabyte in mozilla 's spell checker ? ?
for shame !
) and can withstand being written to millions of times ?</tokentext>
<sentencetext>Today at work a brand new 1TB seagate came in.
I went over to my machine to breathe life back into it to find out that it was instead a 32 megabyte drive according to Windows.
Immediately the cache sprang to mind.
The drive actually is reporting the cache as the actual drive.
Well...hell. At first I thought it was just DOA with corrupt firmware, but after some googling you can actually reset the size that the drive reports with LBA.
Hopefully I won't have too many other problems.
Not a big fan of the newer seagates, but my boss seems to be going for whatever is cheapest these days.... :/
I would love to get away from complex mechanical drives as a storage medium.
Can't someone just make some solid state cube that will hold a petabyte (no petabyte in mozilla's spell checker??
for shame!
) and can withstand being written to millions of times?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993</id>
	<title>Re:isn't it time for</title>
	<author>Wrath0fb0b</author>
	<datestamp>1243427220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>The problem with parallel is that you can't crank up the clock speed, because you have to make sure that the signal on each line is recombined with the ones from the other lines that were sent at the same time. This limits how fast you can send the bits (if the time between bits is comparable to the skew time, the receiver will not be able to reliably reassemble the data) and how long the interconnect can be (skew being linearly amplified by length). It's not for nothing that PCI has been replaced with PCI-E, PATA with SATA, SCSI with SAS. USB and IEEE 1394 would be impossible with parallel. Serial communications are more reliable and more scalable (one big exception -- wireless RF, but that's not what we are discussing here).</p><p>Multiprocessing, incidentally, has nothing to do with it -- the software interface to a storage device hides all the implementation details (PATA/SATA, for instance) anyway. The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees (the complete lack thereof) than with the inability of the storage device to do 2 things at once (which, for a physical disk, is impossible anyway, meaning it would have to be serialized again regardless).</p></htmltext>
<tokenext>The problem with parallel is that you ca n't crank up the clock speed because you have to make sure that the signal on each line is combined with the ones from the other lines that were sent at the same time .
This limits how fast you can send the send the bits ( if the time being bits is comparable to the skew time , the receiver will not be able to reliably reassemble the data ) and how long the interconnect can be ( skew being linearly amplified by length ) .
It 's not for nothing that PCI has been replaced with PCI-E , PATA with SATA , SCSI with SAS .
USB and IEE1394 would be impossible with parallel .
Serial communications are more reliable and more scalable ( one big exception -- wireless RF , but that 's not what we are discussing here ) .Multiprocessing , incidentally , has nothing to do with it -- the software interface to a storage device hides all the implementation details ( PATA/SATA , for instance ) anyway .
The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees ( the complete lack thereof ) rather than the inability of the storage device to do 2 things at once ( which , for a physical disk , is impossible anyway , meaning that it would have to back-convert into a serial process anyway ) .</tokentext>
<sentencetext>The problem with parallel is that you can't crank up the clock speed because you have to make sure that the signal on each line is combined with the ones from the other lines that were sent at the same time.
This limits how fast you can send the bits (if the time between bits is comparable to the skew time, the receiver will not be able to reliably reassemble the data) and how long the interconnect can be (skew being linearly amplified by length).
It's not for nothing that PCI has been replaced with PCI-E, PATA with SATA, SCSI with SAS.
USB and IEE1394 would be impossible with parallel.
Serial communications are more reliable and more scalable (one big exception -- wireless RF, but that's not what we are discussing here).
Multiprocessing, incidentally, has nothing to do with it -- the software interface to a storage device hides all the implementation details (PATA/SATA, for instance) anyway.
The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees (the complete lack thereof) rather than the inability of the storage device to do 2 things at once (which, for a physical disk, is impossible anyway, meaning that it would have to back-convert into a serial process anyway).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887</parent>
</comment>
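The skew argument above can be made concrete with a toy model. The figures below are illustrative assumptions, not real PATA electrical numbers; the point is only the shape of the relationship the comment describes.

```python
# Toy model of parallel-bus clock skew, as described above.
# Assumption: worst-case skew between lines grows linearly with cable
# length, and the bit period must stay well above that skew for the
# receiver to reassemble the parallel word reliably.

def max_reliable_clock_hz(skew_s_per_m, length_m, margin=0.5):
    """Highest clock at which the bit period still exceeds worst-case
    skew by the chosen safety margin (bit period >= skew / margin)."""
    worst_case_skew = skew_s_per_m * length_m
    return margin / worst_case_skew

# Doubling the cable length halves the tolerable clock:
half_metre = max_reliable_clock_hz(1e-9, 0.5)
one_metre = max_reliable_clock_hz(1e-9, 1.0)
assert half_metre == 2 * one_metre
```

A serial link sidesteps this entirely: with a single data pair there is no inter-line skew to cancel, which is why the serial interfaces the comment lists could keep raising the clock.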
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117523</id>
	<title>Re:Sata Smata</title>
	<author>morgan\_greywolf</author>
	<datestamp>1243430520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We'll get off your lawn now.</p></htmltext>
<tokenext>We 'll get off your lawn now .</tokentext>
<sentencetext>We'll get off your lawn now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116615</id>
	<title>Ah!</title>
	<author>vancondo</author>
	<datestamp>1243424880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>My bank account will be delighted that there's a reason for me to hold off buying a new system.<br> <br>...that is, until it sees me buying overpriced bleeding-edge buggy gear again.<br> <br>
	-<br>-- <b> <a href="http://vancouvercondo.info/" title="vancouvercondo.info" rel="nofollow">VCI</a> [vancouvercondo.info] </b></htmltext>
<tokenext>My bank account will be delighted that there 's a reason for me to hold off buying a new system .
..that is until it see 's me buying overpriced bleeding edge buggy gear again .
--- VCI [ vancouvercondo.info ]</tokentext>
<sentencetext>My bank account will be delighted that there's a reason for me to hold off buying a new system.
...that is, until it sees me buying overpriced bleeding-edge buggy gear again.
---  VCI [vancouvercondo.info] </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116937</id>
	<title>Stupid</title>
	<author>TheParadox2</author>
	<datestamp>1243426740000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext>I think in a year's time frame, we could see the 6 Gb/s mark passed with the way SSDs are going.  To make this standard is dumb.

If we're looking for speed, SATA 6Gb/s is not it, and this ancient CHS scheme has to go to accommodate a better way to map, access and control data.  Ultimately, we need to have these devices understand &amp; control the file system.  (TRIM does this for SSDs.)

For example:

The OCZ Vertex nearly saturates the 3Gb/s mark already.  The only way the drives 'fail' to sustain this speed is with random writes, which typically occur when writing data to a spot marked as available: when the NAND isn't zeroed, it either has to re-zero or move on.

If the drive knows that the OS is deleting a file (not just marking the site as available) then the drive can zero automatically without you noticing.

It's only in certain conditions that these drives don't consistently perform at peak performance: free space not consolidated, free space not zeroed, swap file creating random writes (slows performance), indexing now useless with .1 ms seek times. Using write filters, or something that converts random writes to sequential writes (through buffers, caches or drivers), greatly enhances speed, such as the MFT software or even Windows SteadyState for the devices.

I like the idea of the 'RAM socket' interface as someone stated above.  These devices, I think, work better in a parallel manner.  Most work like this internally anyway.</htmltext>
<tokenext>I think in a years time frame , we could see the 6 Gb/s passed with the way SSDs are going .
To make this standard is dumb .
If we 're looking for speed , SATA 6Gb/s is not it and this ancient CHS scheme has to go to accommodate a better way to map , access and control data .
Ultimately , we need to have these devices understand &amp; control the file system .
( Trim does this for SSDs ) For example : The OCZ vertex nearly saturates the 3Gb/s mark already .
They only way the drives 'fail ' to accomplish this sustaining speed is with random writes , typically which occur when writing data to a spot marked as available when the NAND is n't zeroed , it either has to re-zero or move on .
If the drive knows that the OS is deleting a file ( not marking the site , as available ) then the drive can zero automatically without you noticing .
Its only in certain conditions , these drive do n't Consistently perform at peak performance : Free space not consolidated , Free space not zeroed , Swap file creates random writing ( slows performance ) , Indexing is now useless with .1 ms seek times .
Using write filters , or something that converts random writes to sequential writes ( through buffers , caches or drivers ) greatly enhances speed , such as the MFT Software or even windows SteadyState for the devices .
I like the idea of the 'RAM socket ' interface as someone stated above .
These devices i think work better in a parallel manner .
Most work like this internally anyway .</tokentext>
<sentencetext>I think in a years time frame, we could see the 6 Gb/s passed with the way SSDs are going.
To make this standard is dumb.
If we're looking for speed, SATA 6Gb/s is not it and this ancient CHS scheme has to go to accommodate a better way to map, access and control data.
Ultimately, we need to have these devices understand &amp; control the file system.
(Trim does this for SSDs)

For example:

The OCZ vertex nearly saturates the 3Gb/s mark already.
The only way the drives 'fail' to sustain this speed is with random writes, which typically occur when writing data to a spot marked as available: when the NAND isn't zeroed, it either has to re-zero or move on.
If the drive knows that the OS is deleting a file (not marking the site, as available) then the drive can zero automatically without you noticing.
It's only in certain conditions that these drives don't consistently perform at peak performance: free space not consolidated, free space not zeroed, swap file creating random writes (slows performance), indexing now useless with .1 ms seek times.
Using write filters, or something that converts random writes to sequential writes (through buffers, caches or drivers) greatly enhances speed, such as the MFT Software or even windows SteadyState for the devices.
I like the idea of the 'RAM socket' interface as someone stated above.
These devices, I think, work better in a parallel manner.
Most work like this internally anyway.</sentencetext>
</comment>
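The TRIM behaviour argued for in the comment above can be shown with a toy flash model (entirely hypothetical block states, no real drive firmware): without TRIM, the drive only discovers a block is stale when it is overwritten, so the write path pays an erase; with TRIM, the block is erased at deletion time and the later write is clean.

```python
# Toy model of TRIM: count erase operations paid on the write path.
ERASED, LIVE, STALE = "erased", "live", "stale"

def write_block(blocks, index):
    """Write one block; return erase ops paid before the write can proceed."""
    erases = 1 if blocks[index] == STALE else 0   # dirty block: erase first
    blocks[index] = LIVE
    return erases

def delete_block(blocks, index, trim_enabled):
    # With TRIM the OS notifies the drive, which can erase in the background;
    # without it the drive only sees a stale block at the next overwrite.
    blocks[index] = ERASED if trim_enabled else STALE

for trim in (False, True):
    blocks = [ERASED] * 8
    stalls = 0
    for i in range(8):
        stalls += write_block(blocks, i)      # initial write: block is clean
        delete_block(blocks, i, trim)         # OS deletes the file
        stalls += write_block(blocks, i)      # rewrite the same block
    print(f"TRIM={trim}: write-path erase stalls = {stalls}")
```

With TRIM every rewrite lands on a pre-erased block, which is exactly why the comment's "zero automatically without you noticing" matters for sustained random-write speed.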
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116913</id>
	<title>Sata Smata</title>
	<author>nurb432</author>
	<datestamp>1243426560000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>What about us using MFM drives with removable platters?</p></htmltext>
<tokenext>What about us using MFM drives with removable platters ?</tokentext>
<sentencetext>What about us using MFM drives with removable platters?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124531</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243529040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>USB and IEE1394 would be impossible with parallel.</p></div><p>You're right: It'd be the Universal Parallel Bus!</p></div>
	</htmltext>
<tokenext>USB and IEEE 1394 would be impossible with parallel . You 're right : It 'd be the Universal Parallel Bus !</tokentext>
<sentencetext>USB and IEEE 1394 would be impossible with parallel. You're right: It'd be the Universal Parallel Bus!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119309</id>
	<title>Re:isn't it time for</title>
	<author>rivaldufus</author>
	<datestamp>1243446540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>maybe the cache could...</htmltext>
<tokenext>maybe the cache could.. .</tokentext>
<sentencetext>maybe the cache could...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118763</id>
	<title>Re:SSD</title>
	<author>symbolset</author>
	<datestamp>1243440600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For random I/O seek time can seriously degrade throughput.  Since these days we're talking about multiple VM's contending for disk, this is becoming an issue we need to be more aware of unless we're using Solid State Drives - because of course SSD's don't seek.</p></htmltext>
<tokenext>For random I/O seek time can seriously degrade throughput .
Since these days we 're talking about multiple VM 's contending for disk , this is becoming an issue we need to be more aware of unless we 're using Solid State Drives - because of course SSD 's do n't seek .</tokentext>
<sentencetext>For random I/O seek time can seriously degrade throughput.
Since these days we're talking about multiple VM's contending for disk, this is becoming an issue we need to be more aware of unless we're using Solid State Drives - because of course SSD's don't seek.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039</parent>
</comment>
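The point in the comment above, that seek time can seriously degrade random I/O throughput, follows from a simple model: every random request pays a seek before it streams its block, so small blocks are dominated by latency. The seek times and streaming rates below are illustrative assumptions, not measurements of any particular drive.

```python
# Effective throughput for random I/O: each request pays one seek, then
# streams its block. Small random reads are dominated by the seek.

def effective_mb_per_s(seek_ms: float, stream_mb_per_s: float,
                       block_kb: float) -> float:
    block_mb = block_kb / 1024
    transfer_s = block_mb / stream_mb_per_s   # time spent actually streaming
    total_s = seek_ms / 1000 + transfer_s     # plus the per-request seek
    return block_mb / total_s

# Hypothetical ballparks: spinning disk ~8 ms seek / 100 MB/s streaming,
# SSD ~0.1 ms "seek" / 250 MB/s streaming.
for name, seek, stream in (("HDD", 8.0, 100.0), ("SSD", 0.1, 250.0)):
    mbps = effective_mb_per_s(seek, stream, 4)
    print(f"{name}: ~{mbps:.2f} MB/s at 4 KiB random reads")
```

This is also why multiple VMs contending for one spindle hurt so much: interleaved streams turn sequential access into random access, and the seek term takes over.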
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119613</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>drizek</author>
	<datestamp>1243449540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Connectors should be made so that if something breaks, it should be the cable, not the device it is connected to!</p></htmltext>
<tokenext>Connectors should be made so that if something breaks , it should be the cable , not the device it is connected to !</tokentext>
<sentencetext>Connectors should be made so that if something breaks, it should be the cable, not the device it is connected to!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119293</id>
	<title>Re:isn't it time for</title>
	<author>jhol13</author>
	<datestamp>1243446420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes  <a href="http://www.sun.com/storage/flash/module.jsp" title="sun.com">http://www.sun.com/storage/flash/module.jsp</a> [sun.com].<br>It does not use SATA, but something JEDEC will(?) standardise.</p><p>I think it is a very interesting idea; whether it will take off, I do not know.</p></htmltext>
<tokenext>Yes http : //www.sun.com/storage/flash/module.jsp [ sun.com ] .It does not use SATA , but something JEDEC will ( ?
) standardise.I think it is very interesting idea , whether it will take off , I do not know .</tokentext>
<sentencetext>Yes  http://www.sun.com/storage/flash/module.jsp [sun.com]. It does not use SATA, but something JEDEC will(?) standardise.
I think it is a very interesting idea; whether it will take off, I do not know.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119239</id>
	<title>Only twice as fast?</title>
	<author>Twinbee</author>
	<datestamp>1243445880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is it just me, or is only a double increase in speed a bit lame (3Gb/s for SATA 2 to 6Gb/s for SATA 3)?</p></htmltext>
<tokenext>Is it just me or is only a double increase in speed a bit lame ( 3Gb for SATA 2 to 6Gb for SATA 3 ) .</tokentext>
<sentencetext>Is it just me, or is only a double increase in speed a bit lame (3Gb/s for SATA 2 to 6Gb/s for SATA 3)?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117429</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>Anonymous</author>
	<datestamp>1243429980000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>Same here. I plugged in a drive a few weeks ago with a regular straight cable and bent the cable up to fit in the case and the connector promptly snapped off.</htmltext>
<tokenext>Same here .
I plugged in a drive a few weeks ago with a regular straight cable and bent the cable up to fit in the case and the connector promptly snapped off .</tokentext>
<sentencetext>Same here.
I plugged in a drive a few weeks ago with a regular straight cable and bent the cable up to fit in the case and the connector promptly snapped off.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119879</id>
	<title>Re:SSD</title>
	<author>Anonymous</author>
	<datestamp>1243452000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>And the lesson here: Don't install hard drives while driving.</p></htmltext>
<tokenext>And the lesson here : Do n't install hard drives while driving .</tokentext>
<sentencetext>And the lesson here: Don't install hard drives while driving.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118067</id>
	<title>Why just double?</title>
	<author>Anonymous</author>
	<datestamp>1243434420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>With networking usually the speed jumps about 10x each generation.  So why with the other common IO interface used in computers are they just doubling it with each generation?</p></htmltext>
<tokenext>With networking usually the speed jumps about 10x each generation .
So why with the other common IO interface used in computers are they just doubling it with each generation ?</tokentext>
<sentencetext>With networking usually the speed jumps about 10x each generation.
So why with the other common IO interface used in computers are they just doubling it with each generation?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117353</id>
	<title>FAQ says $400-500</title>
	<author>c-bo-licious</author>
	<datestamp>1243429440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That website says $400-500</htmltext>
<tokenext>That website says $ 400-500</tokentext>
<sentencetext>That website says $400-500</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116981</id>
	<title>Re:isn't it time for</title>
	<author>kaiser423</author>
	<datestamp>1243427160000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>You do realize that at either end of a parallel link you'd have to re-serialize, right?  That's what PATA does.  So you still need the high clock rate regardless of how much you parallelize it on the wires.  That's extra hardware, and another piece that needs to be really fast.  Then you also have issues with maintaining clocking integrity over parallel lines, which gets tricky at high data rates.
<br> <br>
Right now, our technology is better in going pure serial.  In the past, it was parallel.  It might swing back and forth a couple of times between the two in the future.  But make no mistake:  right now, on commodity hardware for drives connected via cables, serial is pulling ahead in the speed war.</htmltext>
<tokenext>You do realize that at either end of a Parallel link you 'd have to re-serialize right ?
That 's what PATA does .
So you still need the high clock rate regardless of how much you parallelize it on the wires .
That 's extra hardware , and another piece the needs to be be really fast .
Then you also have issues with maintaining clocking integrity over parallel lines , which gets tricky at high data rates .
Right now , our technology is better in going pure serial .
In the past , it was parallel .
It might swing back and forth a couple of times between the two in the future .
But make no mistake : right now , on commodity hardware for drives connected via cables , serial is pulling ahead in the speed war .</tokentext>
<sentencetext>You do realize that at either end of a Parallel link you'd have to re-serialize right?
That's what PATA does.
So you still need the high clock rate regardless of how much you parallelize it on the wires.
That's extra hardware, and another piece that needs to be really fast.
Then you also have issues with maintaining clocking integrity over parallel lines, which gets tricky at high data rates.
Right now, our technology is better in going pure serial.
In the past, it was parallel.
It might swing back and forth a couple of times between the two in the future.
But make no mistake:  right now, on commodity hardware for drives connected via cables, serial is pulling ahead in the speed war.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117043</id>
	<title>Re:isn't it time for</title>
	<author>Nimey</author>
	<datestamp>1243427520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The problem with the parallel approach is the difficulty in ensuring parallel signals get to their destinations at the same time.</p><p>Maybe in the future we'll figure out how to take today's high signaling rates and parallelize them, but the engineering choices made right now are for good reasons.</p></htmltext>
<tokenext>The problem with the parallel approach is the difficulty in ensuring parallel signals get to their destinations at the same time . Maybe in the future we 'll figure out how to take today 's high signaling rates and parallelize them , but the engineering choices made right now are for good reasons .</tokenext>
<sentencetext>The problem with the parallel approach is the difficulty in ensuring parallel signals get to their destinations at the same time. Maybe in the future we'll figure out how to take today's high signaling rates and parallelize them, but the engineering choices made right now are for good reasons.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118841</id>
	<title>Re:Stupid</title>
	<author>symbolset</author>
	<datestamp>1243441320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The attach of the next generation of internal storage media is PCIe.  And the attach of next generation expansion storage media is... external PCIe.  2-3 years out flash storage will be cheaper per TB than spinning disc.  Sometime before that most folks will realize that millisecond latency is not as good as microsecond latency.  So...  Yeah, I agree with you.</p></htmltext>
<tokenext>The attach of the next generation of internal storage media is PCIe .
And the attach of next generation expansion storage media is... external PCIe .
2-3 years out flash storage will be cheaper per TB than spinning disc .
Sometime before that most folks will realize that millisecond latency is not as good as microsecond latency .
So... Yeah , I agree with you .</tokentext>
<sentencetext>The attach of the next generation of internal storage media is PCIe.
And the attach of next generation expansion storage media is... external PCIe.
2-3 years out flash storage will be cheaper per TB than spinning disc.
Sometime before that most folks will realize that millisecond latency is not as good as microsecond latency.
So...  Yeah, I agree with you.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116937</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119881</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>ParanoiaBOTS</author>
	<datestamp>1243452000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Maybe you should stop using a hammer when plugging in a new hard drive?</p></div><p>They should probably also remove the anvil they are hanging off it once it is plugged in...Seriously how can you snap them?  If you have that much tension in the cable you are doing something wrong.</p></div>
	</htmltext>
<tokenext>Maybe you should stop using a hammer when plugging in a new hard drive ? They should probably also remove the anvil they are hanging off it once it is plugged in...Seriously how can you snap them ?
If you have that much tension in the cable you are doing something wrong .</tokentext>
<sentencetext>Maybe you should stop using a hammer when plugging in a new hard drive? They should probably also remove the anvil they are hanging off it once it is plugged in... Seriously, how can you snap them?
If you have that much tension in the cable you are doing something wrong.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117379</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117501</id>
	<title>Nope, system designers want serial comms</title>
	<author>JayBat</author>
	<datestamp>1243430400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I'd say if it's bandwidth we're after, we shouldn't be reducing the number of signal lines.</p></div><p>Nope. Package pins are expensive, cable connectors are expensive, board traces are expensive, cabling is expensive. On the other hand, silicon is cheap. :-)
</p><p>
A 6Gb/s serial link is straightforward to implement, if you know what you're doing, and there are probably a couple dozen design groups around the world that can do it.
</p><p>
Jay</p>
	</htmltext>
<tokenext>I 'd say if it 's bandwidth we 're after , we should n't be reducing the number of signal lines .
Nope , Package pins are expensive , cable connectors are expensive , board traces are expensive , cabling is expensive .
On the other hand , silicon is cheap .
: - ) A 6gbps serial link is straightforward to implement , if you know what you 're doing , and there are probably a couple dozen design groups around the world that can do it .
Jay</tokentext>
<sentencetext>I'd say if it's bandwidth we're after, we shouldn't be reducing the number of signal lines.
Nope, Package pins are expensive, cable connectors are expensive, board traces are expensive, cabling is expensive.
On the other hand, silicon is cheap.
:-)

A 6gbps serial link is straightforward to implement, if you know what you're doing, and there are probably a couple dozen design groups around the world that can do it.
Jay
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</id>
	<title>What is the point?</title>
	<author>supervillain</author>
	<datestamp>1243424820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No current hard disk or even SSD can do 3Gb/sec, so what is the point?</htmltext>
<tokenext>No current hard disk or even SSD can do 3Gb/sec so what is the point ?</tokentext>
<sentencetext>No current hard disk or even SSD can do 3Gb/sec, so what is the point?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116641</id>
	<title>Awesome</title>
	<author>Anonymous</author>
	<datestamp>1243425000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Maybe now I can actually use my 320 GB external HD.</p><p>Transferring 4+ GB files took ages with SATA 2.0.</p></htmltext>
<tokenext>Maybe now I can actually use my 320 GB external HD . Transferring 4 + GB files took ages with SATA 2.0 .</tokenext>
<sentencetext>Maybe now I can actually use my 320 GB external HD. Transferring 4+ GB files took ages with SATA 2.0.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116687</id>
	<title>Re:What is the point?</title>
	<author>wjh31</author>
	<datestamp>1243425180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>3Gb/s = 375MB/s; a typical HDD should be easily capable of a third of that. So high-end SSDs can probably beat 375MB/s, if not now then soon.</htmltext>
<tokenext>3Gb/s = 375MB/s a typical HDD should be easily capable of a third of that .
So high end SSD 's can probably beat 375MB/s , if not now then soon</tokentext>
<sentencetext>3Gb/s = 375MB/s; a typical HDD should be easily capable of a third of that.
So high-end SSDs can probably beat 375MB/s, if not now then soon.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116663</id>
	<title>Re:What is the point?</title>
	<author>Lewis Daggart</author>
	<datestamp>1243425120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What's the point of designing a hard disk or SSD that works faster if SATA is stuck at 3Gb/s?</htmltext>
<tokenext>Whats the point of designing a hard disk or SSD that works faster if SATA is stuck at 3Gb/s ?</tokentext>
<sentencetext>What's the point of designing a hard disk or SSD that works faster if SATA is stuck at 3Gb/s?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118439</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243437900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I have an OCZ Vertex in my Lenovo T60, on my SATA1 internal connector. The OCZ Vertex SSD does 130MB/sec read, 110MB/sec write. When connected to my SATA2 controller on my workstation, I get peak reads of 230MB/sec sustained, with 160MB/sec average and 120MB/sec writes. Quite a difference.</p><p>The really interesting part is economics. I bought the SSD a month ago for €180/60GB. Affordable enough to justify a 300MB/sec SATA2 standard.</p></htmltext>
<tokenext>I have an OCZ Vertex in my Lenovo T60 , on my SATA1 internal connector .
OCZ vertex SSD does 130MB/sec read , 110MB/sec write .
When connected to my SATA2 controller on my workstation , i get peak reads of 230MB/sec sustained , with 160MB/sec average with 120MB/sec writes .
Quite a difference.The really interesting part is economics .
I bought the SSD a month ago for   180/60GB .
Affordable enough to justofy a 300MB/sec SATA2 standard .</tokentext>
<sentencetext>I have an OCZ Vertex in my Lenovo T60, on my SATA1 internal connector.
The OCZ Vertex SSD does 130MB/sec read, 110MB/sec write.
When connected to my SATA2 controller on my workstation, I get peak reads of 230MB/sec sustained, with 160MB/sec average and 120MB/sec writes.
Quite a difference. The really interesting part is economics.
I bought the SSD a month ago for €180/60GB.
Affordable enough to justify a 300MB/sec SATA2 standard.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120625</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>thexile</author>
	<datestamp>1243504440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I concur as well. I have a drive in RAID 0 that has a broken connector (the plastic broken however the metallic connectors are still attached). Thankfully, it's still working.</htmltext>
<tokenext>I concur as well .
I have a drive in RAID 0 that has a broken connector ( the plastic broken however the metallic connectors are still attached ) .
Thankfully , it 's still working .</tokentext>
<sentencetext>I concur as well.
I have a drive in RAID 0 that has a broken connector (the plastic broken however the metallic connectors are still attached).
Thankfully, it's still working.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120329</id>
	<title>Re:SSD</title>
	<author>Anonymous</author>
	<datestamp>1243501380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The Bronx - High latency, low throughput and MASSIVE data loss?  I don't want my data stolen...er borrowed.</p></htmltext>
<tokenext>The Bronx - High latency , low throughput and MASSIVE data loss ?
I do n't want my data stolen...er borrowed .</tokentext>
<sentencetext>The Bronx - High latency, low throughput and MASSIVE data loss?
I don't want my data stolen...er borrowed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653</id>
	<title>Re:Theoretical != Real World speeds</title>
	<author>evanbd</author>
	<datestamp>1243425060000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>Wow, both your numbers are wrong.  SATA 2.0 has a theoretical transfer rate of 3Gb/s, not 3GB/s.  It also uses an <a href="http://en.wikipedia.org/wiki/SATA#Encoding" title="wikipedia.org">8b/10b encoding</a> [wikipedia.org], so 3.0Gb/s translates to 300MB/s.  Data throughput will be less than that, thanks to control protocol overhead, though the overhead is very small.</p><p>Modern drives do seriously better than 25MB/s.  Seriously, go look at benchmarks.  Also, SSDs, which are a very real design influence on things like SATA, are already getting close to the 300MB/s mark.</p></htmltext>
<tokenext>Wow , both your numbers are wrong .
SATA 2.0 has a theoretical transfer rate of 3Gb/s , not 3GB/s .
It also uses an 8b/10b encoding [ wikipedia.org ] , so 3.0Gb/s translates to 300MB/s .
Data throughput will be less than that , thanks to control protocol overhead , though the overhead is very small.Modern drives do seriously better than 25MB/s .
Seriously , go look at benchmarks .
Also , SSDs , which are a very real design influence on things like SATA , are already getting close to the 300MB/s mark .</tokentext>
<sentencetext>Wow, both your numbers are wrong.
SATA 2.0 has a theoretical transfer rate of 3Gb/s, not 3GB/s.
It also uses an 8b/10b encoding [wikipedia.org], so 3.0Gb/s translates to 300MB/s.
Data throughput will be less than that, thanks to control protocol overhead, though the overhead is very small. Modern drives do seriously better than 25MB/s.
Seriously, go look at benchmarks.
Also, SSDs, which are a very real design influence on things like SATA, are already getting close to the 300MB/s mark.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543</parent>
</comment>
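The 8b/10b arithmetic in the comment above is worth making explicit: each data byte is transmitted as a 10-bit symbol on the wire, so dividing the line rate by 10 gives usable bytes per second, before the small protocol overhead it mentions.

```python
# 8b/10b payload arithmetic: 10 line bits carry one data byte, so
# payload bytes/sec = line bits/sec / 10.

def sata_payload_mb_per_s(line_rate_gbps: float) -> float:
    line_bits = line_rate_gbps * 1e9
    return line_bits / 10 / 1e6   # 10 line bits per byte under 8b/10b

print(sata_payload_mb_per_s(1.5))  # SATA 1.0 -> 150.0 MB/s
print(sata_payload_mb_per_s(3.0))  # SATA 2.0 -> 300.0 MB/s
print(sata_payload_mb_per_s(6.0))  # SATA 3.0 -> 600.0 MB/s
```

This is why "3Gb/s" corresponds to 300MB/s rather than the naive 375MB/s quoted in an earlier comment.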
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039</id>
	<title>Re:SSD</title>
	<author>Wrath0fb0b</author>
	<datestamp>1243427460000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>If my understanding of the technology is correct, the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec. Would this rely on a transition to Solid State Drives for any noticeable difference in performance?</p></div><p>The seek time has nothing to do with the throughput. The seek time refers to the latency between when a read command is issued and when it begins to be fulfilled. The throughput refers to the data transferred per unit time during fulfillment.</p><p>Here's a nice car analogy for those of us in New England -- consider the Mass Pike versus I-93. The Mass Pike has a very long seek time from the onramp because of the toll lanes (and the mouth breathers that won't get a transponder even though they are now free and clog the automatic lanes) but once you get on the highway, you can go 80 MPH until your exit. On I-93, by contrast, you can get right on, but you will be going 30 MPH for the duration. Of course, if you drive down to CT and get on I-84, you have a low-latency AND high-throughput highway, but if you drive too far down to, say, the Bronx, it becomes high-latency and low-throughput.</p></htmltext>
<tokenext>If my understanding of the technology is correct , the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec .
Would this rely on a transition to Solid State Drives for any noticeable difference in performance ?
The seek time has nothing to do with the throughput .
The seek time refers to the latency between when a read command is issued and when it begins to be fulfilled .
The throughput refers to the data transferred per unit time during fulfillment .
Here 's a nice car analogy for those of us in New England -- consider the Mass Pike versus I-93 .
The Mass Pike has a very long seek time from the onramp because of the toll lanes ( and the mouth breathers that wo n't get a transponder even though they are now free and clog the automatic lanes ) but once you get on the highway , you can go 80 MPH until your exit .
On I-93 , by contrast , you can get right on , but you will be going 30 MPH for the duration .
Of course , if you drive down to CT and get on I-84 , you have a low-latency AND high throughput highway but if you drive too far down to , say , the Bronx , it becomes high-latency and low throughput .</tokentext>
<sentencetext>If my understanding of the technology is correct, the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec.
Would this rely on a transition to Solid State Drives for any noticeable difference in performance?
The seek time has nothing to do with the throughput.
The seek time refers to the latency between when a read command is issued and when it begins to be fulfilled.
The throughput refers to the data transferred per unit time during fulfillment.
Here's a nice car analogy for those of us in New England -- consider the Mass Pike versus I-93.
The Mass Pike has a very long seek time from the onramp because of the toll lanes (and the mouth breathers that won't get a transponder even though they are now free and clog the automatic lanes) but once you get on the highway, you can go 80 MPH until your exit.
On I-93, by contrast, you can get right on, but you will be going 30 MPH for the duration.
Of course, if you drive down to CT and get on I-84, you have a low-latency AND high throughput highway but if you drive too far down to, say, the Bronx, it becomes high-latency and low throughput.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569</parent>
</comment>
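The seek-vs-throughput distinction drawn above can be made concrete with a toy model (the 9 ms seek and 80 MB/s sustained rate are assumed, typical-looking numbers, not figures from the thread):

```python
def effective_mb_s(file_mb, seek_ms=9.0, sustained_mb_s=80.0):
    """Effective transfer rate for one file: a single seek (latency)
    followed by a sequential transfer at the sustained rate."""
    total_s = seek_ms / 1000.0 + file_mb / sustained_mb_s
    return file_mb / total_s

# A 4 KB read is dominated by the seek; a 1 GB read barely notices it.
print(effective_mb_s(0.004))   # well under 1 MB/s
print(effective_mb_s(1000.0))  # close to the full 80 MB/s
```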
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116579</id>
	<title>6 Gb/sec?  Meh</title>
	<author>Totenglocke</author>
	<datestamp>1243424760000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Let me know when we hit 1.21 GW -- then I'll be excited!</htmltext>
<tokenext>Let me know when we hit 1.21 GW -- then I 'll be excited !</tokentext>
<sentencetext>Let me know when we hit 1.21 GW -- then I'll be excited!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116811</id>
	<title>Re:What is the point?</title>
	<author>geekoid</author>
	<datestamp>1243426020000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>Not true. SSDs are approaching that now.</p><p>HP has an enterprise SSD that is 800MB/s (note the large B as opposed to b). So this drive could saturate SATA 3's 6 Gb/s.</p></htmltext>
<tokenext>Not true .
SSDs are approaching that now .
HP has an enterprise SSD that is 800MB/s ( note the large B as opposed to b ) .
So this drive could saturate SATA 3 's 6 Gb/s</tokentext>
<sentencetext>Not true.
SSDs are approaching that now.
HP has an enterprise SSD that is 800MB/s (note the large B as opposed to b).
So this drive could saturate SATA 3's 6 Gb/s</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28125115</id>
	<title>Bandwidth isn't really the problem</title>
	<author>Leolo</author>
	<datestamp>1243531680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't really care about bandwidth.  What I really care about is parallel requests and out of order requests.  This is why SCSI was so much better than IDE.  Does SATA 3.0 remove the odious limit of 15 NCQ reqs?</htmltext>
<tokenext>I do n't really care about bandwidth .
What I really care about is parallel requests and out of order requests .
This is why SCSI was so much better than IDE .
Does SATA 3.0 remove the odious limit of 15 NCQ reqs ?</tokentext>
<sentencetext>I don't really care about bandwidth.
What I really care about is parallel requests and out of order requests.
This is why SCSI was so much better than IDE.
Does SATA 3.0 remove the odious limit of 15 NCQ reqs?</sentencetext>
</comment>
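The parent's point about out-of-order requests can be illustrated with a toy head-travel model (the LBA positions are made up for illustration; real NCQ scheduling also weighs rotational position):

```python
def head_travel(start, requests):
    """Total seek distance when requests are served in the given order."""
    total, pos = 0, start
    for lba in requests:
        total += abs(lba - pos)
        pos = lba
    return total

queue = [900, 100, 850, 200, 800]          # arrival (FIFO) order
fifo = head_travel(0, queue)               # serve strictly in arrival order
reordered = head_travel(0, sorted(queue))  # drive reorders, elevator-style
print(fifo, reordered)                     # reordering cuts total head travel
```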
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124771</id>
	<title>Spec should include the following for MOBO makers</title>
	<author>motherpusbucket</author>
	<datestamp>1243530120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Thou shalt keep SATA connectors the hell away from PCIe slots."
It amazes me how many MBs still put them in locations that interfere with vid cards.</htmltext>
<tokenext>" Thou shalt keep SATA connectors the hell away from PCI-x slots .
" It amazes me how many MB 's still put them in locations that interfere with vid cards .</tokentext>
<sentencetext>"Thou shalt keep SATA connectors the hell away from PCI-x slots.
"
It amazes me how many MB's still put them in locations that interfere with vid cards.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119105</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>Anonymous</author>
	<datestamp>1243444320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Are you talking about before they used harder plastic with the addition of the metal clip, or after?</p></htmltext>
<tokenext>Are you talking about before they used harder plastic with the addition of the metal clip or after ?</tokenext>
<sentencetext>Are you talking about before they used harder plastic with the addition of the metal clip or after?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243426380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>I'd say if it's bandwidth we're after, we shouldn't be reducing the number of signal lines. Do things <a href="http://en.wikipedia.org/wiki/Parallel\_ATA" title="wikipedia.org" rel="nofollow">in parallel</a> [wikipedia.org] instead of serializing everything and depending on astronomical clock speeds. Obviously PATA is obsolete but especially with the rising importance of multiprocessing we should be focusing on more parallel solutions, perhaps allowing multiple reads at a time on different lines of the connector.</htmltext>
<tokenext>I 'd say if it 's bandwidth we 're after , we should n't be reducing the number of signal lines .
Do things in parallel [ wikipedia.org ] instead of serializing everything and depending on astronomical clock speeds .
Obviously PATA is obsolete but especially with the rising importance of multiprocessing we should be focusing on more parallel solutions , perhaps allowing multiple reads at a time on different lines of the connector .</tokentext>
<sentencetext>I'd say if it's bandwidth we're after, we shouldn't be reducing the number of signal lines.
Do things in parallel [wikipedia.org] instead of serializing everything and depending on astronomical clock speeds.
Obviously PATA is obsolete but especially with the rising importance of multiprocessing we should be focusing on more parallel solutions, perhaps allowing multiple reads at a time on different lines of the connector.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120985</id>
	<title>Re:isn't it time for</title>
	<author>petermgreen</author>
	<datestamp>1243507920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The thing is, signals running in parallel with shared timing don't work very well beyond a certain clock speed*distance product.</p><p>Signals running in parallel with independent timing seem at the moment to be the way to go for really high bandwidth (this is what PCIe uses and afaict SAS has the capability to use it as well) but it gets pretty complex.</p></htmltext>
<tokenext>The thing is , signals running in parallel with shared timing do n't work very well beyond a certain clock speed * distance product .
Signals running in parallel with independent timing seem at the moment to be the way to go for really high bandwidth ( this is what PCIe uses and afaict SAS has the capability to use it as well ) but it gets pretty complex .</tokentext>
<sentencetext>The thing is, signals running in parallel with shared timing don't work very well beyond a certain clock speed*distance product.
Signals running in parallel with independent timing seem at the moment to be the way to go for really high bandwidth (this is what PCIe uses and afaict SAS has the capability to use it as well) but it gets pretty complex.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118307</id>
	<title>Re:Only one problem with this:</title>
	<author>Anonymous</author>
	<datestamp>1243436700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Mark one of the following.</p><p>[ ] You did not get the point.</p><p>There are not just single hard drives. Think of eSATA enclosures, say, with a RAID-0 or RAID-1 array from which you do a linear read. So each device does its, for example, 80MB/s or so, and if you multiply that by, say, 5, the eSATA controller can actually saturate its 3 Gb/s link to the host, in theory.</p></htmltext>
<tokenext>Mark one of the following .
[ ] You did not get the point .
There are not just single hard drives .
Think of eSATA enclosures , say , with a RAID-0 or RAID-1 array from which you do a linear read .
So each device does its , for example , 80MB/s or so , and if you multiply that by , say , 5 , the eSATA controller can actually saturate its 3 Gb/s link to the host in theory .</tokentext>
<sentencetext>Mark one of the following.
[ ] You did not get the point.
There are not just single hard drives.
Think of eSATA enclosures, say, with a RAID-0 or RAID-1 array from which you do a linear read.
So each device does its, for example, 80MB/s or so, and if you multiply that by, say, 5, the eSATA controller can actually saturate its 3 Gb/s link to the host in theory.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785</parent>
</comment>
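The arithmetic in the comment above, sketched (80 MB/s per drive is the comment's own example; the 300 MB/s cap is the SATA 2.0 payload rate after 8b/10b overhead):

```python
def array_stream_mb_s(per_drive_mb_s, drives, link_cap_mb_s):
    """Linear read from a striped array behind one shared link:
    drive bandwidths add up until the link becomes the bottleneck."""
    return min(per_drive_mb_s * drives, link_cap_mb_s)

# Five 80 MB/s drives offer 400 MB/s, but a single SATA 2.0 link
# tops out at ~300 MB/s of payload, so the cable is the limit.
print(array_stream_mb_s(80, 5, 300))
```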
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116503</id>
	<title>bike, nigga stole my bike</title>
	<author>Anonymous</author>
	<datestamp>1243424460000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>adddriaaaaaannnn</p></htmltext>
<tokenext>adddriaaaaaannnn</tokentext>
<sentencetext>adddriaaaaaannnn</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117057</id>
	<title>Re:Only one problem with this:</title>
	<author>ProfMobius</author>
	<datestamp>1243427580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p><i>The industry has largely been selling SATA II devices to unwitting consumers based on the perceived promise of 3GBps performance</i></p></div><p>Well, knowing that the standard is backward compatible (from TFA), what is the point in crying? You will get a faster interface for the same price as the old one, being able to use your current hardware, and when the drives reach this speed, you will be ready (and from previous posting, it looks like SSDs are close to saturating SATA II).</p></htmltext>
<tokenext>The industry has largely been selling SATA II devices to unwitting consumers based on the perceived promise of 3GBps performance .
Well , knowing that the standard is backward compatible ( from TFA ) , what is the point in crying ?
You will get a faster interface for the same price as the old one , being able to use your current hardware , and when the drives reach this speed , you will be ready ( and from previous posting , it looks like SSDs are close to saturating SATA II ) .</tokentext>
<sentencetext>The industry has largely been selling SATA II devices to unwitting consumers based on the perceived promise of 3GBps performance.
Well, knowing that the standard is backward compatible (from TFA), what is the point in crying?
You will get a faster interface for the same price as the old one, being able to use your current hardware, and when the drives reach this speed, you will be ready (and from previous posting, it looks like SSDs are close to saturating SATA II).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119679</id>
	<title>Re:I hope they make the plug stronger</title>
	<author>Anonymous</author>
	<datestamp>1243450140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What the heck are you doing to those poor drives?!</p></htmltext>
<tokenext>What the heck are you doing to those poor drives ? !</tokentext>
<sentencetext>What the heck are you doing to those poor drives?!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117879</id>
	<title>Re:SSD</title>
	<author>Anonymous</author>
	<datestamp>1243433100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>ah, this is great. We have the car analogy. Now all we need is for someone to write a post with a Hitler/Nazi reference and we can mark this one complete.</p></htmltext>
<tokenext>ah , this is great .
We have the car analogy .
Now all we need is for someone to write a post with a Hitler/Nazi reference and we can mark this one complete .
<sentencetext>ah, this is great.
We have the car analogy.
Now all we need is for someone to write a post with a Hitler/Nazi reference and we can mark this one complete.
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117707</id>
	<title>Re:isn't it time for</title>
	<author>Anonymous</author>
	<datestamp>1243431900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What I think he was trying to get at is this: say you have a 4-platter HDD. Instead of having the heads all working as one, you feed each of the platters off a different SATA line. In essence you would have a single drive doing RAID-0 internally. Which might just work, but it would depend on how much the controllers and extra motors would add to the cost.</p><p>What I want to know is what happened to the hybrid idea? I thought that was frankly the way to go. Much in the same vein as the WD "green" drives, you could have the things that are less likely to change (say, the core OS) stored on a nice fast 16GB of SSD, while having pictures, videos and all the other programs and files stored on a much larger but slower HDD. I doubt it would even be hard to set up. Just have the drive come with an installer that after OS install uses junction points to point Documents and Settings, Program Files, and all other non-core OS files to the HDD. For all other operating systems (or if you don't want to use the installer and instead want to DIY) it would simply look like 2 drives: the smaller SSD OS drive, and the larger HDD storage drive. This would allow everyone to take advantage of the speed of SSD while not having to sacrifice storage space as they do currently.</p><p>There also needs to be a reliable optical or tape based storage device marketed to the masses. With DVD and even BD (which is still way too expensive for the masses and has too many DRM problems) there simply isn't enough storage to back up the mounds of HDD space that even the cheap machines are coming with nowadays. External HDDs should only be viewed as a stopgap because you still have the inherent problem of dealing with a mechanical medium. There needs to be a simple way to back up large amounts of data and simply "stuff it away" until needed, such as what servers have with tape. But we really haven't seen any medium targeting the home user since DVD, and with the mounds of data being created by cameras, camcorders, etc. it really is long overdue IMHO. Because all the speed in the world doesn't help if you can lose it all simply by accidentally dropping the external HDD.</p></htmltext>
<tokenext>What i think he was trying to get at is this : say you have a 4 platter HDD .
Instead of having the heads all working as one you feed each of the platters off a different SATA line .
In essence you would have a single RAID drive doing RAID-0 .
Which might just work , but it would depend on how much the controllers and extra motors would add to the cost .
What I want to know is what happened to the hybrid idea ?
I thought that was frankly the way to go .
Much in the same vein as the WD " green " drives you could have the things that are less likely to change ( say the core OS ) stored on a nice fast 16GB of SSD , while having pictures , videos and all the other programs and files stored on a much larger but slower HDD .
I doubt it would even be hard to set up .
Just have the drive come with an installer that after OS install uses Junction points to point Documents and Settings , Program Files , and all other non core OS files to the HDD .
For all other Operating Systems ( or if you do n't want to use the installer and instead want to DIY ) it would simply look like 2 drives-the smaller SSD OS drive , and the larger HDD storage drive .
This would allow everyone to take advantage of the speed of SSD while not having to sacrifice storage space as they do currently .
There also needs to be a reliable optical or tape based storage device marketed to the masses .
With DVD and even BD ( which is still way too expensive for the masses and has too many DRM problems ) there simply is n't enough storage to backup the mounds of HDD space that even the cheap machines are coming with nowadays .
External HDDs should be only viewed as a stopgap because you still have the inherent problem of dealing with a mechanical medium .
There needs to be a simple way to back up large amounts of data and simply " stuff it away " until needed such as what servers have with tape .
But we really have n't seen any medium targeting the home user since DVD , and with the mounds of data being created by cameras , camcorders , etc it really is long overdue IMHO .
Because all the speed in the world does n't help if you can lose it all simply by accidentally dropping the external HDD .</tokentext>
<sentencetext>What i think he was trying to get at is this: say you have a 4 platter HDD.
Instead of having the heads all working as one you feed each of the platters off a different SATA line.
In essence you would have a single RAID drive doing RAID-0.
Which might just work, but it would depend on how much the controllers and extra motors would add to the cost.
What I want to know is what happened to the hybrid idea?
I thought that was frankly the way to go.
Much in the same vein as the WD "green" drives you could have the things that are less likely to change (say the core OS) stored on a nice fast 16GB of SSD, while having pictures, videos and all the other programs and files stored on a much larger but slower HDD.
I doubt it would even be hard to set up.
Just have the drive come with an installer that after OS install uses Junction points to point Documents and Settings, Program Files, and all other non core OS files to the HDD.
For all other Operating Systems(or if you don't want to use the installer and instead want to DIY) it would simply look like 2 drives-the smaller SSD OS drive, and the larger HDD storage drive.
This would allow everyone to take advantage of the speed of SSD while not having to sacrifice storage space as they do currently.
There also needs to be a reliable optical or tape based storage device marketed to the masses.
With DVD and even BD(which is still way too expensive for the masses and has too many DRM problems) there simply isn't enough storage to backup the mounds of HDD space that even the cheap machines are coming with nowadays.
External HDDs should be only viewed as a stopgap because you still have the inherent problem of dealing with a mechanical medium.
There needs to be a simple way to back up large amounts of data and simply "stuff it away" until needed such as what servers have with tape.
But we really haven't seen any medium targeting the home user since DVD, and with the mounds of data being created by cameras, camcorders, etc it really is long overdue IMHO.
Because all the speed in the world doesn't help if you can lose it all simply by accidentally dropping the external HDD.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117233</id>
	<title>Re:Theoretical != Real World speeds</title>
	<author>Firehed</author>
	<datestamp>1243428720000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Sequential reads on large-capacity drives are often in the 70-90MB/s range (yes MB, not Mb), bursting into the 200MB/s range. Hell, I've seen 50MB/s+ for at least the last half a decade.  High-quality (read: expensive) SSDs can roughly double that.</p><p>And of course, the spec is in gigabits per second, not gigabytes, and includes overhead.  Actual sustained transfer is 150MB/s, 300MB/s, and 600MB/s on SATA I-III respectively.</p></htmltext>
<tokenext>Sequential reads on large-capacity drives are often in the 70-90MB/s range ( yes MB , not Mb ) , bursting into the 200MB/s range .
Hell , I 've seen 50MB/s + for at least the last half a decade .
High-quality ( read : expensive ) SSDs can roughly double that .
And of course , the spec is in gigabits per second , not gigabytes , and includes overhead .
Actual sustained transfer is 150MB/s , 300MB/s , and 600MB/s on SATA I-III respectively .</tokentext>
<sentencetext>Sequential reads on large-capacity drives are often in the 70-90MB/s range (yes MB, not Mb), bursting into the 200MB/s range.
Hell, I've seen 50MB/s+ for at least the last half a decade.
High-quality (read: expensive) SSDs can roughly double that.
And of course, the spec is in gigabits per second, not gigabytes, and includes overhead.
Actual sustained transfer is 150MB/s, 300MB/s, and 600MB/s on SATA I-III respectively.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124475</id>
	<title>Re:SSD</title>
	<author>Anonymous</author>
	<datestamp>1243528860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Depending on what you are doing with the drive, seek time can be a huge factor on throughput. Seek time has very little impact on the transfer throughput of one large file. Many small files? Seek time is very important.</p></htmltext>
<tokenext>Depending on what you are doing with the drive , seek time can be a huge factor on throughput .
Seek time has very little impact on the transfer throughput of one large file .
Many small files ?
Seek time is very important .</tokentext>
<sentencetext>Depending on what you are doing with the drive, seek time can be a huge factor on throughput.
Seek time has very little impact on the transfer throughput of one large file.
Many small files?
Seek time is very important.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117557</id>
	<title>Re:Theoretical != Real World speeds</title>
	<author>BikeHelmet</author>
	<datestamp>1243430760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I really wish SATA 3.0 had a bigger jump than this. 600MB/sec is hardly anything for some of the high end SSDs and RAM-drives available.</p><p>If they become affordable, I'm definitely going for PCIe 4x SSDs, since they can hit <a href="http://www.channelregister.co.uk/2009/04/07/hp\_million\_iops/" title="channelregister.co.uk">8GB/sec (80gbit) when RAID'd on server boards with tons of PCIe lanes</a> [channelregister.co.uk].</p><p>I remember when someone stuck six FusionIO IODrives together and got about 2.2GB/sec of bandwidth out of a regular 2-socket server board. (like those Tyan ones, which can be had for well under $1000) It seriously makes me drool... though I suppose all I really need out of an SSD is 200MB/sec with minimal latency.</p></htmltext>
<tokenext>I really wish SATA 3.0 had a bigger jump than this .
600MB/sec is hardly anything for some of the high end SSDs and RAM-drives available .
If they become affordable , I 'm definitely going for PCIe 4x SSDs , since they can hit 8GB/sec ( 80gbit ) when RAID 'd on server boards with tons of PCIe lanes [ channelregister.co.uk ] .
I remember when someone stuck six FusionIO IODrives together and got about 2.2GB/sec of bandwidth out of a regular 2-socket server board .
( like those Tyan ones , which can be had for well under $ 1000 ) It seriously makes me drool... though I suppose all I really need out of an SSD is 200MB/sec with minimal latency .</tokentext>
<sentencetext>I really wish SATA 3.0 had a bigger jump than this.
600MB/sec is hardly anything for some of the high end SSDs and RAM-drives available.
If they become affordable, I'm definitely going for PCIe 4x SSDs, since they can hit 8GB/sec (80gbit) when RAID'd on server boards with tons of PCIe lanes [channelregister.co.uk].
I remember when someone stuck six FusionIO IODrives together and got about 2.2GB/sec of bandwidth out of a regular 2-socket server board.
(like those Tyan ones, which can be had for well under $1000) It seriously makes me drool... though I suppose all I really need out of an SSD is 200MB/sec with minimal latency.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118527</id>
	<title>Re:Why just double?</title>
	<author>Anonymous</author>
	<datestamp>1243438500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you increase 10x each generation you have to wait quite a while between generations. You can also end up in a "Goldilocks" situation where 1 Gbps is not enough but 10 Gbps is overkill and too expensive. 2x or 4x per generation is a lot smoother.</p></htmltext>
<tokenext>If you increase 10x each generation you have to wait quite a while between generations .
You can also end up in a " Goldilocks " situation where 1 Gbps is not enough but 10 Gbps is overkill and too expensive .
2x or 4x per generation is a lot smoother .</tokentext>
<sentencetext>If you increase 10x each generation you have to wait quite a while between generations.
You can also end up in a "Goldilocks" situation where 1 Gbps is not enough but 10 Gbps is overkill and too expensive.
2x or 4x per generation is a lot smoother.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118067</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117201</id>
	<title>Re:isn't it time for</title>
	<author>Ilgaz</author>
	<datestamp>1243428540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you are doing large database work, need redundancy, or do 2K/4K video work, you may need SAS. In fact, you would still boot the OS and apps from a Serial ATA device and use SAS for program (database, movie etc.) data. SATA and SAS have compatible connectors for that reason. They don't really replace each other.</p><p>Of course, SAS is really expensive, but if you are at a professional studio where speed may actually earn you more money, you wouldn't care.</p><p>Interestingly, even the SMART-like features of SCSI don't replace actual SMART in SATA. Both have different strengths. It shows how differently IDE and SCSI evolved.</p></htmltext>
<tokenext>If you are doing large database work , redundancy needed or 2K/4K video work , you may need SAS .
In fact , you would still boot OS and Apps from a serial ATA device and use SAS for program ( database , movie etc ) data .
SATA and SAS have compatible connectors for that reason .
They do n't really replace each other .
Of course , SAS is really expensive but if you are at a professional studio where speed may actually earn you more money , you would n't care .
Interestingly , even the SMART-like features of SCSI do n't replace actual SMART in SATA .
Both have different strengths .
It shows how differently IDE and SCSI evolved</tokentext>
<sentencetext>If you are doing large database work, redundancy needed or 2K/4K video work, you may need SAS.
In fact, you would still boot OS and Apps from a serial ATA device and use SAS for program (database, movie etc) data.
SATA and SAS have compatible connectors for that reason.
They don't really replace each other.
Of course, SAS is really expensive but for example, if you are at a professional studio which speed may actually earn you more money, you wouldn't care.
Interestingly, even SMART like features of SCSI doesn't replace actual SMART in SATA.
Both have different powers.
It can show how different directions IDE and SCSI went</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116853</id>
	<title>Re:What is the point?</title>
	<author>wbattestilli</author>
	<datestamp>1243426200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Actually, the OCZ Vertex <a href="http://www.ocztechnology.com/products/flash\_drives/ocz\_vertex\_series\_sata\_ii\_2\_5-ssd/" title="ocztechnology.com">http://www.ocztechnology.com/products/flash\_drives/ocz\_vertex\_series\_sata\_ii\_2\_5-ssd/</a> [ocztechnology.com] can sustain about 230MB/s.  This is equal to about 2.3 Gb/sec.  Allowing for the rate at which SSD technology seems to be changing, I'd say that this standard is just in the nick of time.  Chances are that the next thing in high-end, consumer SSDs will saturate a SATA link. If this standard doesn't get pushed out soon, drive manufacturers will be doing ugly, proprietary, OS-specific hacks to support multiple SATA links to a single device.  In addition, lots of people are packaging multiple physical drives into a single SSD with an internal RAID-0 controller.  These are definitely being (or soon going to be) held back by the 3Gb/s SATA link.</htmltext>
<tokenext>Actually , the OCZ Vertex http : //www.ocztechnology.com/products/flash \ _drives/ocz \ _vertex \ _series \ _sata \ _ii \ _2 \ _5-ssd/ [ ocztechnology.com ] can sustain about 230MB/s .
This is equal to about 2.3 Gb/sec .
Allowing for the rate at which SSD technology seems to be changing , I 'd say that this standard is just in the nick of time .
Chances are that the next thing in high-end , consumer SSDs will saturate a SATA link .
If this standard does n't get pushed out soon , drive manufacturers will be doing ugly , proprietary , OS specific hacks to support multiple SATA links to a single device .
In addition , lots of people are packaging multiple physical drives into a single SSD with an internal RAID-0 controller .
These are definitely being ( or soon going to be ) held back by the 3Gb/s SATA link .</tokentext>
<sentencetext>Actually, the OCZ Vertex http://www.ocztechnology.com/products/flash\_drives/ocz\_vertex\_series\_sata\_ii\_2\_5-ssd/ [ocztechnology.com] can sustain about 230MB/s.
This is equal to about 2.3 Gb/sec.
Allowing for the rate at which SSD technology seems to be changing, I'd say that this standard is just in the nick of time.
Chances are that the next thing in high-end, consumer SSDs will saturate a SATA link.
If this standard doesn't get pushed out soon, drive manufacturers will be doing ugly, proprietary, OS specific hacks to support multiple SATA links to a single device.
In addition, lots of people are packaging multiple physical drives into a single SSD with an internal RAID-0 controller.
These are definitely being (or soon going to be) held back by the 3Gb/s SATA link.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119631</id>
	<title>Re:Only one problem with this:</title>
	<author>drizek</author>
	<datestamp>1243449720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Maybe not DISK drives, but we do have SSDs that are approaching 3Gb/s. SSDs go over 250MB/s easily, and they are still in their infancy.</p><p>We already have SSDs which go well over 3Gb/s, and they are all PCIe based.</p></htmltext>
<tokenext>Maybe not DISK drives , but we do have SSDs that are approaching 3Gb .
SSDs go over 250megs easy , and they are still in their infancy .
We already have SSDs which go well over 3Gb , and they are all PCIe based .</tokentext>
<sentencetext>Maybe not DISK drives, but we do have SSDs that are approaching 3Gb.
SSDs go over 250megs easy, and they are still in their infancy.
We already have SSDs which go well over 3Gb, and they are all PCIe based.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119105
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117501
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120205
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120329
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116687
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117557
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117001
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117429
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118281
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124531
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116937
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116679
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116811
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119631
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116895
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119681
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118067
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119679
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117879
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119309
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117707
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120985
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124475
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119613
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117845
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28123803
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118763
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118527
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118067
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117057
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119293
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117281
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116981
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117043
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118747
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119881
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117379
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118439
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117233
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118433
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117201
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116853
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120625
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118307
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117523
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116913
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119879
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116663
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118329
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118559
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116589
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28125003
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116615
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_05_27_2210226_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117311
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116615
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116583
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116663
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116811
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116853
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116687
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116679
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117001
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116741
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118163
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116785
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116909
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118307
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119631
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116895
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117057
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116937
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118841
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116913
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117523
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116777
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119613
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119105
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117379
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119881
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117429
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120625
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119679
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116521
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118433
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116887
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116993
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124531
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117707
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117845
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116981
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117043
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120985
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117501
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117201
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116995
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118329
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118439
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118281
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119293
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117281
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120205
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120669
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118559
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119309
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116615
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28125003
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117311
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119239
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116641
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116503
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116579
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118067
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119681
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118527
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117049
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116543
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116653
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28123803
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117557
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118747
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117233
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116589
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_05_27_2210226.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28116569
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117039
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28120329
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28124475
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28117879
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28118763
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_05_27_2210226.28119879
</commentlist>
</conversation>
