<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_02_03_1814248</id>
	<title>A Hybrid Approach For SSD Speed From Your 2TB HDD</title>
	<author>timothy</author>
	<datestamp>1265223300000</datestamp>
	<htmltext>Claave writes <i>"bit-tech.net reports that SilverStone has announced a device that <a href="http://www.bit-tech.net/news/hardware/2010/02/03/silverstone-announces-hybrid-ssd-hard-disk/1">daisy-chains an SSD with a hard disk</a>, with the aim of providing SSD speeds plus loads of storage space. The SilverStone HDDBoost is a hard disk caddy with an integrated storage controller, and is an easy upgrade for your PC. The device copies the 'front-end' of your hard disk to the SSD, and tells your OS to prefer the SSD when possible. SSD speeds for a 2TB storage device? Yep, sounds good to me!"</i></htmltext>
<tokentext>Claave writes " bit-tech.net reports that SilverStone has announced a device that daisy-chains an SSD with a hard disk , with the aim of providing SSD speeds plus loads of storage space .
The SilverStone HDDBoost is a hard disk caddy with an integrated storage controller , and is an easy upgrade for your PC .
The device copies the 'front-end ' of your hard disk to the SSD , and tells your OS to prefer the SSD when possible .
SSD speeds for a 2TB storage device ?
Yep , sounds good to me !
"</tokentext>
<sentencetext>Claave writes "bit-tech.net reports that SilverStone has announced a device that daisy-chains an SSD with a hard disk, with the aim of providing SSD speeds plus loads of storage space.
The SilverStone HDDBoost is a hard disk caddy with an integrated storage controller, and is an easy upgrade for your PC.
The device copies the 'front-end' of your hard disk to the SSD, and tells your OS to prefer the SSD when possible.
SSD speeds for a 2TB storage device?
Yep, sounds good to me!
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014524</id>
	<title>Been There, Done That</title>
	<author>HTH NE1</author>
	<datestamp>1264930860000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Sounds a lot like the <a href="http://www.9thtee.com/tivocachecard.htm" title="9thtee.com">CacheCard from SiliconDust for Series1 TiVos</a> [9thtee.com], except instead of an SDRAM DIMM it uses an SSD. And the CacheCard doesn't sit between the devices but instead connects to the TiVo motherboard's card-edge connector, provides an Ethernet port, and is designed only to cache a particular 0.5 GiB part of the drive.</p><p>But since the SDRAM loses its contents on power off, it does add significant time to test and fill at startup, while the SSD would be ready nearly immediately.</p></htmltext>
<tokentext>Sounds a lot like the CacheCard from SiliconDust for Series1 TiVos [ 9thtee.com ] , except instead of an SDRAM DIMM it uses an SSD .
And the CacheCard does n't sit between the devices but instead connects to the TiVo motherboard 's card-edge connector , provides an Ethernet port , and is designed only to cache a particular 0.5 GiB part of the drive .
But since the SDRAM loses its contents on power off , it does add significant time to test and fill at startup , while the SSD would be ready nearly immediately .</tokentext>
<sentencetext>Sounds a lot like the CacheCard from SiliconDust for Series1 TiVos [9thtee.com], except instead of an SDRAM DIMM it uses an SSD.
And the CacheCard doesn't sit between the devices but instead connects to the TiVo motherboard's card-edge connector, provides an Ethernet port, and is designed only to cache a particular 0.5 GiB part of the drive.
But since the SDRAM loses its contents on power off, it does add significant time to test and fill at startup, while the SSD would be ready nearly immediately.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015982</id>
	<title>Leaping forward into the 1980's</title>
	<author>Nefarious Wheel</author>
	<datestamp>1264937460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Haven't disk manufacturers been doing this forever, using faster memories to cache disk?</p></div><p>Digital's ESE series disks.  RAM backed by disk with (iirc) write-behind caching.  Expensive (memory was, after all) but in production in the 1980's.  Welcome to the future.</p>
	</htmltext>
<tokentext>Have n't disk manufacturers been doing this forever , using faster memories to cache disk ?
Digital 's ESE series disks .
RAM backed by disk with ( iirc ) write-behind caching .
Expensive ( memory was , after all ) but in production in the 1980 's .
Welcome to the future .</tokentext>
<sentencetext>Haven't disk manufacturers been doing this forever, using faster memories to cache disk?
Digital's ESE series disks.
RAM backed by disk with (iirc) write-behind caching.
Expensive (memory was, after all) but in production in the 1980's.
Welcome to the future.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31026592</id>
	<title>won't a sata controller do that...</title>
	<author>revboden</author>
	<datestamp>1265276340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This is kinda like my Vertex SSD boot drive and two 1TB HD raid 1 bulk data setup. Only slower...</htmltext>
<tokentext>This is kinda like my Vertex SSD boot drive and two 1TB HD raid 1 bulk data setup .
Only slower ...</tokentext>
<sentencetext>This is kinda like my Vertex SSD boot drive and two 1TB HD raid 1 bulk data setup.
Only slower...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013490</id>
	<title>Re:Isn't this just a fancy cache?</title>
	<author>argent</author>
	<datestamp>1264968540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>No, it's a simple version of cache that doesn't actually do proper caching. All it does is preloading, and only over part of the device. Most of the volume of the hard disk will have no performance boost at all. You'd almost certainly be better off just having two devices, and using junction points on Windows or soft links on UNIX to move the frequently accessed files to the smaller disk.</p></htmltext>
<tokentext>No , it 's a simple version of cache that does n't actually do proper caching .
All it does is preloading , and only over part of the device .
Most of the volume of the hard disk will have no performance boost at all .
You 'd almost certainly be better off just having two devices , and using junction points on Windows or soft links on UNIX to move the frequently accessed files to the smaller disk .</tokentext>
<sentencetext>No, it's a simple version of cache that doesn't actually do proper caching.
All it does is preloading, and only over part of the device.
Most of the volume of the hard disk will have no performance boost at all.
You'd almost certainly be better off just having two devices, and using junction points on Windows or soft links on UNIX to move the frequently accessed files to the smaller disk.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013418</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013600</id>
	<title>Waste of money and data safety</title>
	<author>KDN</author>
	<datestamp>1264969020000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Would it not be more cost effective to add more main memory to the machine?  Main memory would be a lot faster than SSD ram.  Also I have a concern that frequently updated blocks (like your file system superblocks) would not get written out to disk in a timely fashion.
<p>
Now, maybe you could do it safely if the device had RRD ram to handle the caching, SSD flash ram to handle power outages, a rechargeable battery or ultracap to provide power to write the RRD ram to flash ram after a power outage, and a controller to handle all this.  You would need to implement all the normal OS buffer caching and writebacks as well.</p></htmltext>
<tokentext>Would it not be more cost effective to add more main memory to the machine ?
Main memory would be a lot faster than SSD ram .
Also I have a concern that frequently updated blocks ( like your file system superblocks ) would not get written out to disk in a timely fashion .
Now , maybe you could do it safely if the device had RRD ram to handle the caching , SSD flash ram to handle power outages , a rechargable battery or ultra cap to provide power to write the RRD ram to flash ram after a power outage , and a controller to handle all this .
You would need to implement all the normal os buffer caching and writebacks as well .</tokentext>
<sentencetext>Would it not be more cost effective to add more main memory to the machine?
Main memory would be a lot faster than SSD ram.
Also I have a concern that frequently updated blocks (like your file system superblocks) would not get written out to disk in a timely fashion.
Now, maybe you could do it safely if the device had RRD ram to handle the caching, SSD flash ram to handle power outages, a rechargable battery or ultra cap to provide power to write the RRD ram to flash ram after a power outage, and a controller to handle all this.
You would need to implement all the normal os buffer caching and writebacks as well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354</id>
	<title>Save your money...</title>
	<author>Anonymous</author>
	<datestamp>1264967940000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext>To use this in a desktop, you need<ul> <li>An available 3.5" bay</li><li>A <b>2.5"</b> hard drive</li><li>An SSD of whatever size you can afford</li></ul><p>
This seems like a lot of money to spend for potentially not a lot of speed.  Generally, 2.5" hard drives aren't quite as fast as their 3.5" counterparts anyways, so you're spending a fair bit of money to speed up something that wasn't really made for speed anyways.<br> <br>
Sure, you can "drop it right in" to your existing computer, assuming that your desktop is for some reason already using 2.5" SATA drives.  And if your desktop is currently using 2.5" SATA drives you probably didn't build it to be a speed demon anyways.</p></htmltext>
<tokentext>To use this in a desktop , you need an available 3.5 " bay , a 2.5 " hard drive , and an SSD of whatever size you can afford .
This seems like a lot of money to spend for potentially not a lot of speed .
Generally , 2.5 " hard drives are n't quite as fast as their 3.5 " counterparts anyways , so you 're spending a fair bit of money to speed up something that was n't really made for speed anyways .
Sure , you can " drop it right in " to your existing computer , assuming that your desktop is for some reason already using 2.5 " SATA drives .
And if your desktop is currently using 2.5 " SATA drives you probably did n't build it to be a speed demon anyways .</tokentext>
<sentencetext>To use this in a desktop, you need an available 3.5" bay, a 2.5" hard drive, and an SSD of whatever size you can afford.
This seems like a lot of money to spend for potentially not a lot of speed.
Generally, 2.5" hard drives aren't quite as fast as their 3.5" counterparts anyways, so you're spending a fair bit of money to speed up something that wasn't really made for speed anyways.
Sure, you can "drop it right in" to your existing computer, assuming that your desktop is for some reason already using 2.5" SATA drives.
And if your desktop is currently using 2.5" SATA drives you probably didn't build it to be a speed demon anyways.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013536</id>
	<title>Re:Just a cache?</title>
	<author>Drethon</author>
	<datestamp>1264968720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I have a Lenovo laptop that has something like a 2GB flash drive doing basically that.  So I'd say it's probably pretty much just a cache...</htmltext>
<tokentext>I have a Lenovo laptop that has something like 2G flash drive doing basically that .
So I 'd say its probably pretty much just a cache ...</tokentext>
<sentencetext>I have a Lenovo laptop that has something like 2G flash drive doing basically that.
So I'd say its probably pretty much just a cache...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013682</id>
	<title>Move along, nothing to see here folks.</title>
	<author>Velorium</author>
	<datestamp>1264969500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Come back when there are some benchmarks to look at.</htmltext>
<tokentext>Come back when there are some benchmarks to look at .</tokentext>
<sentencetext>Come back when there are some benchmarks to look at.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31019486</id>
	<title>I'd like HDD speeds for my SSD - seriously!</title>
	<author>RobWalker</author>
	<datestamp>1264967160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But then I was dumb enough to buy DELL:

<a href="http://www.tomshardware.co.uk/forum/254961-14-warning-careful-ordering-dell-machines-ssds" title="tomshardware.co.uk" rel="nofollow">http://www.tomshardware.co.uk/forum/254961-14-warning-careful-ordering-dell-machines-ssds</a> [tomshardware.co.uk]</htmltext>
<tokentext>But then I was dumb enough to buy DELL : http : //www.tomshardware.co.uk/forum/254961-14-warning-careful-ordering-dell-machines-ssds [ tomshardware.co.uk ]</tokentext>
<sentencetext>But then I was dumb enough to buy DELL:

http://www.tomshardware.co.uk/forum/254961-14-warning-careful-ordering-dell-machines-ssds [tomshardware.co.uk]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013520</id>
	<title>Re:2.5" drives only</title>
	<author>NitroWolf</author>
	<datestamp>1264968660000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p> <i>The device takes the form of a 2.5in to 3.5in hard disk caddy with a couple of SATA connectors on the end.</i></p><p>Good job Claave! You apparently didn't even get to the second paragraph before submitting the article. You can't use a 2TB hard drive with this because there are no 2TB 2.5" drives yet.</p></div><p>Good job, GEvil! You didn't even read the article at ALL!</p><p>Please point to where it says you must use a 2.5" hard drive.  Hmm, you don't think that maybe... JUST MAYBE... the 2.5" caddy is FOR THE SSD so that you can mount it in a 3.5" bay?  Then you mount your 3.5" HD as normal!</p><p>Gosh... reading comprehension!  Learn it.  Live it!  Love it!  Take it home and call it George.</p><p>kthxbye</p>
	</htmltext>
<tokentext>The device takes the form of a 2.5in to 3.5in hard disk caddy with a couple of SATA connectors on the end .
Good job Claave !
You apparently did n't even get to the second paragraph before submitting the article .
You ca n't use a 2TB hard drive with this because there are no 2TB 2.5 " drives yet .
Good job , GEvil !
You did n't even read the article at ALL !
Please point to where it says you must use a 2.5 " hard drive .
Hmm , you do n't think that maybe ... JUST MAYBE ... the 2.5 " caddy is FOR THE SSD so that you can mount it in a 3.5 " bay ?
Then you mount your 3.5 " HD as normal !
Gosh ... reading comprehension !
Learn it .
Live it !
Love it !
Take it home and call it George .
kthxbye</tokentext>
<sentencetext>The device takes the form of a 2.5in to 3.5in hard disk caddy with a couple of SATA connectors on the end.
Good job Claave!
You apparently didn't even get to the second paragraph before submitting the article.
You can't use a 2TB hard drive with this because there are no 2TB 2.5" drives yet.
Good job, GEvil!
You didn't even read the article at ALL!
Please point to where it says you must use a 2.5" hard drive.
Hmm, you don't think that maybe... JUST MAYBE... the 2.5" caddy is FOR THE SSD so that you can mount it in a 3.5" bay?
Then you mount your 3.5" HD as normal!
Gosh... reading comprehension!
Learn it.
Live it!
Love it!
Take it home and call it George.
kthxbye
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013392</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013316</id>
	<title>Tiny penis</title>
	<author>Anonymous</author>
	<datestamp>1264967820000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Rob Malda has a tiny penis.</p></htmltext>
<tokentext>Rob Malda has a tiny penis .</tokentext>
<sentencetext>Rob Malda has a tiny penis.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013960</id>
	<title>Re:Save your money...</title>
	<author>zippthorne</author>
	<datestamp>1264971060000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>Great Idea.<nobr> <wbr></nobr>... Computer management, clickly clicky, drop list.. NTFS or FAT.  Which one of those is ZFS?</p><p>Let's try another option.<br>$ mkfs.zfs<nobr> <wbr></nobr>/dev/disk/by-label/MyMainDisk</p><p>No command 'mkfs.zfs' found, did you mean:<br>
&nbsp; Command 'mkfs.gfs' from package 'gfs-tools' (main)<br>
&nbsp; Command 'mkfs.hfs' from package 'hfsprogs' (universe)<br>
&nbsp; Command 'mkfs.bfs' from package 'util-linux' (main)<br>
&nbsp; Command 'mkfs.xfs' from package 'xfsprogs' (main)<br>
&nbsp; Command 'mkfs.ufs' from package 'ufsutils' (universe)<br>
&nbsp; Command 'mkfs.jfs' from package 'jfsutils' (main)<br>mkfs.zfs: command not found</p><p>Okay...hmm.</p><p>$ diskutil listFilesystems | grep -i zfs</p><p>hm, nothing.</p><p>I suppose it *should* be possible to do in software, and I'd even imagine that like RAID, the benefits of doing it in hardware become more dubious as time goes on.  However, there is the question of the least effort way of getting it done in "my" computer right now.</p></htmltext>
<tokentext>Great Idea .
... Computer management , clickly clicky , drop list.. NTFS or FAT .
Which one of those is ZFS ?
Let 's try another option .
$ mkfs.zfs /dev/disk/by-label/MyMainDisk
No command 'mkfs.zfs ' found , did you mean :
Command 'mkfs.gfs ' from package 'gfs-tools ' ( main )
Command 'mkfs.hfs ' from package 'hfsprogs ' ( universe )
Command 'mkfs.bfs ' from package 'util-linux ' ( main )
Command 'mkfs.xfs ' from package 'xfsprogs ' ( main )
Command 'mkfs.ufs ' from package 'ufsutils ' ( universe )
Command 'mkfs.jfs ' from package 'jfsutils ' ( main )
mkfs.zfs : command not found
Okay ... hmm .
$ diskutil listFilesystems | grep -i zfs
hm , nothing .
I suppose it * should * be possible to do in software , and I 'd even imagine that like RAID , the benefits of doing it in hardware become more dubious as time goes on .
However , there is the question of the least effort way of getting it done in " my " computer right now .</tokentext>
<sentencetext>Great Idea.
... Computer management, clickly clicky, drop list.. NTFS or FAT.
Which one of those is ZFS?
Let's try another option.
$ mkfs.zfs /dev/disk/by-label/MyMainDisk
No command 'mkfs.zfs' found, did you mean:
  Command 'mkfs.gfs' from package 'gfs-tools' (main)
  Command 'mkfs.hfs' from package 'hfsprogs' (universe)
  Command 'mkfs.bfs' from package 'util-linux' (main)
  Command 'mkfs.xfs' from package 'xfsprogs' (main)
  Command 'mkfs.ufs' from package 'ufsutils' (universe)
  Command 'mkfs.jfs' from package 'jfsutils' (main)
mkfs.zfs: command not found
Okay...hmm.
$ diskutil listFilesystems | grep -i zfs
hm, nothing.
I suppose it *should* be possible to do in software, and I'd even imagine that like RAID, the benefits of doing it in hardware become more dubious as time goes on.
However, there is the question of the least effort way of getting it done in "my" computer right now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31017584</id>
	<title>Re:For those that didn't RTFA</title>
	<author>Zebra_X</author>
	<datestamp>1264946700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Microsoft has a rocket launcher pointed at their feet and they think they can rocket jump."</p><p>Well, it worked in Marathon...</p></htmltext>
<tokentext>" Microsoft has a rocket launcher pointed at their feet and they think they can rocket jump . "
Well , it worked in Marathon ...</tokentext>
<sentencetext>"Microsoft has a rocket launcher pointed at their feet and they think they can rocket jump."
Well, it worked in Marathon...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013708</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386</id>
	<title>Re:Save your money...</title>
	<author>TheRaven64</author>
	<datestamp>1264968060000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>Or, you can just use ZFS and turn on the L2ARC, which will use the SSD as a cache for the hard disks and not need any custom hardware.</htmltext>
<tokentext>Or , you can just use ZFS and turn on the L2ARC , which will use the SSD as a cache for the hard disks and not need any custom hardware .</tokentext>
<sentencetext>Or, you can just use ZFS and turn on the L2ARC, which will use the SSD as a cache for the hard disks and not need any custom hardware.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013404</id>
	<title>What 2TB HD?</title>
	<author>damn_registrars</author>
	<datestamp>1264968120000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext>This adapter is for <b>2.5"</b> hard drives - if you put a 3.5 drive in it, you wouldn't fit drive+adapter+SSD into a 3.5" bay.  Who makes a 2TB 2.5" SATA drive currently?  I am not aware of any...</htmltext>
<tokentext>This adapter is for 2.5 " hard drives - if you put a 3.5 drive in it , you would n't fit drive + adapter + SSD into a 3.5 " bay .
Who makes a 2TB 2.5 " SATA drive currently ?
I am not aware of any ...</tokentext>
<sentencetext>This adapter is for 2.5" hard drives - if you put a 3.5 drive in it, you wouldn't fit drive+adapter+SSD into a 3.5" bay.
Who makes a 2TB 2.5" SATA drive currently?
I am not aware of any...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013704</id>
	<title>File system</title>
	<author>olau</author>
	<datestamp>1264969620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>There was a paper some years ago about building the file system in such a manner that smaller files were placed on an SSD (&lt; 1 MB) and large files were placed on a hard disk. At that time, SSDs were a lot smaller than today though.</p><p>Generally, it can make sense to discriminate your files because they don't all have the same space and access characteristics. Maybe 100 files are taking up 90% of the space compared to the other 9900 files. Maybe it's similar for the access pattern.</p><p>Still, for the idea to fly, you need a robust algorithm and it needs to be clever about the strengths of the hardware. For instance, SSDs aren't so hot at random writes, sadly. Less than 0.1 msec write time would be neat for an ACID database.</p></htmltext>
<tokentext>There was a paper some years ago about building the file system in such a manner that smaller files were placed on an SSD ( &lt; 1 MB ) and large files were placed on a hard disk .
At that time , SSDs were a lot smaller than today though .
Generally , it can make sense to discriminate your files because they do n't all have the same space and access characteristics .
Maybe 100 files are taking up 90 % of the space compared to the other 9900 files .
Maybe it 's similar for the access pattern .
Still , for the idea to fly , you need a robust algorithm and it needs to be clever about the strengths of the hardware .
For instance , SSDs are n't so hot at random writes , sadly .
Less than 0.1 msec write time would be neat for an ACID database .</tokentext>
<sentencetext>There was a paper some years ago about building the file system in such a manner that smaller files were placed on an SSD (&lt; 1 MB) and large files were placed on a hard disk.
At that time, SSDs were a lot smaller than today though.
Generally, it can make sense to discriminate your files because they don't all have the same space and access characteristics.
Maybe 100 files are taking up 90% of the space compared to the other 9900 files.
Maybe it's similar for the access pattern.
Still, for the idea to fly, you need a robust algorithm and it needs to be clever about the strengths of the hardware.
For instance, SSDs aren't so hot at random writes, sadly.
Less than 0.1 msec write time would be neat for an ACID database.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015358</id>
	<title>Seagate, WD, others tried this.</title>
	<author>CFD339</author>
	<datestamp>1264935060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In 2007 there was a whole movement toward hybrid drives -- it went nowhere.</p></htmltext>
<tokentext>In 2007 there was a whole movement toward hybrid drives -- it went nowhere .</tokentext>
<sentencetext>In 2007 there was a whole movement toward hybrid drives -- it went nowhere.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013480</id>
	<title>Windows Only</title>
	<author>asdf7890</author>
	<datestamp>1264968480000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>In order to appear as one storage device in Windows, SilverStone has needed to use some software to...</p></div><p>There is the turn-off for me. If I were to use something like this I would want an OS-agnostic solution. Of course that would mean the caching would have to be done at the block level rather than the file level, so it might not be able to be as bright (a block-level cache manager wouldn't know to deallocate space on the SSD immediately when a file is deleted, for instance), but it should be quite practical to design an algorithm that keeps the most often used blocks in the cache (the SSD) without the whole thing being needlessly wiped the first time you copy a massive data file in (you wouldn't want that 20GB file to be written to the SSD the first time it is laid down, at the expense of dropping blocks from OS startup files and such, in case it is hardly ever accessed again - for instance, an image of a Blu-ray disc that you are copying to another disc would not want to touch the cache, as it'll probably be written once, read once, then wiped). How this block-based cache management algorithm would work in detail is left as an exercise for the reader...</p>
	</htmltext>
<tokentext>In order to appear as one storage device in Windows , SilverStone has needed to use some software to ...
There is the turn off for me .
If I were to use something like this I would want an OS agnostic solution .
Of course that would mean the caching would have to be done at the block level rather than the file level so it might not be able to be as bright ( a block level cache manager would n't know to deallocate space on the SSD immediately when a file is deleted for instance ) , but it should be quite practical to design an algorithm that keeps the most often used blocks in the cache ( the SSD ) without the whole thing being needless wiped first time you copy a massive data file in ( you would n't want that 20Gb file to be written to the SSD first time it is laid down , at the expense of dropping blocks frmo OS startup files and such , in case it is hardly ever accessed again - for instance an image of a blueray disc that you are copying to another disc would not want to touch the cache as it 'll probably be written one , read once then wiped .
How this block-based cache management algorithm would work in detail is left as an exercise for the reader ...</tokentext>
<sentencetext>In order to appear as one storage device in Windows, SilverStone has needed to use some software to...
There is the turn off for me.
If I were to use something like this I would want an OS agnostic solution.
Of course that would mean the caching would have to be done at the block level rather than the file level so it might not be able to be as bright (a block level cache manager wouldn't know to deallocate space on the SSD immediately when a file is deleted for instance), but it should be quite practical to design an algorithm that keeps the most often used blocks in the cache (the SSD) without the whole thing being needless wiped first time you copy a massive data file in (you wouldn't want that 20Gb file to be written to the SSD first time it is laid down, at the expense of dropping blocks frmo OS startup files and such, in case it is hardly ever accessed again - for instance an image of a blueray disc that you are copying to another disc would not want to touch the cache as it'll probably be written one, read once then wiped.
How this block-based cache management algorithm would work in detail is left as an exercise for the reader...
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013376</id>
	<title>Sounds like bullshit to me</title>
	<author>Anonymous</author>
	<datestamp>1264968060000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>How would the disk supposedly know which part of the 2TB I am going to need next?</p></htmltext>
<tokentext>How would the disk supposedly know , which part of the 2TB I am going to need next ?</tokentext>
<sentencetext>How would the disk supposedly know, which part of the 2TB I am  going to need next?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013338</id>
	<title>Frpsty</title>
	<author>Anonymous</author>
	<datestamp>1264967880000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Check out this shi-te<nobr> <wbr></nobr>/////////////</p></htmltext>
<tokentext>Check out this shi-te /////////////</tokentext>
<sentencetext>Check out this shi-te /////////////</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013792</id>
	<title>Re:Holy carp!</title>
	<author>Phleg</author>
	<datestamp>1264970100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>RAID 0 is for chumps. You get a similar read speed boost from RAID 1, and you don't have the dramatically increased risk of failure.</htmltext>
<tokenext>RAID 0 is for chumps .
You get a similar read speed boost from RAID 1 , and you do n't have the dramatically increased risk of failure .</tokentext>
<sentencetext>RAID 0 is for chumps.
You get a similar read speed boost from RAID 1, and you don't have the dramatically increased risk of failure.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014436</id>
	<title>Re:Waste of money and data safety</title>
	<author>Seth024</author>
	<datestamp>1264930380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This would provide a speed boost to machines that already have enough memory (and most do). Part of your operating system would be copied to the SSD (persistent storage!). A system boot would be much faster because the data can be accessed from the SSD and doesn't have to be read from the HDD.
<br>
Having more memory, which only solves capacity problems, wouldn't be helpful in this case.</htmltext>
<tokenext>This would provide a speed boost to machines that already have enough memory ( and most do ) .
Part of your operating system would be copied to the SSD ( persistent storage ! ) .
A system boot would be much faster because the data can be accessed from the SSD and does n't have to be read from the HDD .
Having more memory , which only solves capacity problems , would n't be helpful in this case .</tokentext>
<sentencetext>This would provide a speed boost to machines that already have enough memory (and most do).
Part of your operating system would be copied to the SSD (persistent storage!).
A system boot would be much faster because the data can be accessed from the SSD and doesn't have to be read from the HDD.
Having more memory, which only solves capacity problems, wouldn't be helpful in this case.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013600</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015072</id>
	<title>Already do! rootfs on SSD, home/huge on disk</title>
	<author>redelm</author>
	<datestamp>1264933680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>There are alignment tricks with SSDs around their large erase blocks, so you have to be careful partitioning.</p><p>Also, consumer-grade MLC SSDs are _not_ tremendously faster than spinning disks in transfer speed.  Maybe 20%.  Access time is where SSDs shine, 0.2 ms vs 8-10 ms.</p><p>A simple scheme I use is to put the OS &amp; small, frequent datafiles on SSD, and large [image] files on platter.</p><p>This might not help large databases with sparse access, but lots of RAM disk cache should be better.  IIRC Seagate had a disk with flash boost, but had trouble with it.</p></htmltext>
<tokenext>There are alignment tricks with SSD around their large erase blocks , so you have to be careful partitioning.Also , consumer-grade MHC SSDs are \ _not \ _ tremendously faster than spinning disks in transfer speed .
Maybe 20 \ % .
Access time is where SSDs shine , 0.2 ms vs 8-10ms .A simple scheme I use is to put the OS &amp; small , frequent datafiles on SSD , and large [ image ] files on platter.This might not help large databases with sparse access , but lots of RAM disk cache should be better .
IIRC Seagate had a disk with flash boost , but had trouble with it .</tokentext>
<sentencetext>There are alignment tricks with SSDs around their large erase blocks, so you have to be careful partitioning. Also, consumer-grade MLC SSDs are _not_ tremendously faster than spinning disks in transfer speed.
Maybe 20%.
Access time is where SSDs shine, 0.2 ms vs 8-10 ms. A simple scheme I use is to put the OS &amp; small, frequent datafiles on SSD, and large [image] files on platter. This might not help large databases with sparse access, but lots of RAM disk cache should be better.
IIRC Seagate had a disk with flash boost, but had trouble with it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015216</id>
	<title>Specifically, Big Cheap Mid-Speed No-Brainer cache</title>
	<author>billstewart</author>
	<datestamp>1264934340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah, it's a disk cache that's external to your system RAM, with a (hopefully) no-brainer setup.  You can use whatever size SSD you want, so it can be bigger than you'll get by expanding your system RAM or replacing your motherboard with one that handles more RAM, and you can use slower RAM or flash for the SSD as opposed to blazing-fast system RAM.  </p><p>The big performance win you get from systems like this is write caching on database transaction logs and file system journals, because the writes don't have to wait for rotating machinery to spin around and seek to the right part of the disk, and you can queue stuff on stable storage outside the OS so the application can go on to the next step instead of waiting around for disk interrupts.</p></htmltext>
<tokenext>Yeah , it 's a disk cache that 's external to your system RAM , with a ( hopefully ) no-brainer setup .
You can use whatever size SSD you want , so it can be bigger than you 'll get by expanding your system RAM or replacing your motherboard with one that handles more RAM , and you can use slower RAM or flash for the SSD as opposed to blazing-fast system RAM .
The big performance win you get from systems like this is write caching on database transaction logs and file system journals , because the writes do n't have to wait for rotating machinery to spin around and seek to the right part of the disk , and you can queue stuff on stable storage outside the OS so the application can go on to the next step instead of waiting around for disk interrupts .</tokentext>
<sentencetext>Yeah, it's a disk cache that's external to your system RAM, with a (hopefully) no-brainer setup.
You can use whatever size SSD you want, so it can be bigger than you'll get by expanding your system RAM or replacing your motherboard with one that handles more RAM, and you can use slower RAM or flash for the SSD as opposed to blazing-fast system RAM.
The big performance win you get from systems like this is write caching on database transaction logs and file system journals, because the writes don't have to wait for rotating machinery to spin around and seek to the right part of the disk, and you can queue stuff on stable storage outside the OS so the application can go on to the next step instead of waiting around for disk interrupts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013418</parent>
</comment>
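The write path billstewart describes (acknowledge the write once it is queued on stable storage, apply it to the rotating disk later) can be sketched as a toy journal. This is an illustration of the general technique, not SilverStone's design; all names are made up.

```python
class StableLogWriteCache:
    """Toy model of write caching on stable storage: commits are
    appended to a fast non-volatile log (the flash), acknowledged
    immediately, and applied to the slow disk in batches later."""

    def __init__(self):
        self.log = []      # stands in for the flash-backed journal
        self.disk = {}     # stands in for the rotating disk

    def write(self, key, value):
        self.log.append((key, value))   # fast append to stable storage
        return "ack"                    # caller proceeds without waiting on seeks

    def flush(self):
        """Batch-apply the journal to the disk, then truncate it."""
        for key, value in self.log:
            self.disk[key] = value      # the only slow, seek-heavy step
        self.log.clear()

wc = StableLogWriteCache()
assert wc.write("txn1", "BEGIN") == "ack"
assert wc.write("txn1", "COMMIT") == "ack"
assert wc.disk == {}                    # nothing on the platter yet
wc.flush()
assert wc.disk == {"txn1": "COMMIT"} and wc.log == []
```

Because the log is non-volatile, the early "ack" is honest: after a power failure the flush can be replayed from the journal, which is exactly why database transaction logs and filesystem journals benefit most.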
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014016</id>
	<title>Re: SSD vs DRAM as cache...</title>
	<author>Anonymous</author>
	<datestamp>1264971300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Performance of disk systems is determined by the cache system almost entirely (er... given same data transport, etc).</p><p>Usually the write side is more complex because you don't want your device telling the o/s that the write committed when it is still in vulnerable cache.  Storage system mfrs recognize this and put in batteries to make sure the write side cache is able to survive power outages.  This adds weight as well as complexity and there is always residual risk in these systems.</p><p>If you can make a non-volatile cache write that is effectively equivalent to writing to the magnetic media and do it without external support (eg batteries) then you have a better device.</p><p>Also, this has the potential of bringing high performance writes to the disk device level instead of the storage chassis/system.</p></htmltext>
<tokenext>Performance of disk systems is determined by the cache system almost entirely ( er... given same data transport , etc ) .Usually the write side is more complex because you do n't want your device telling the o/s that the write committed when it is still in vulnerable cache .
Storage system mfrs recognize this and put in batteries to make sure the write side cache is able to survive power outages .
This adds weight as well as complexity and there is always residual risk in these systems.If you can make a non-volatile cache write that is effectively equivalent to writing to the magnetic media and do it without external support ( eg batteries ) then you have a better device.Also , this has the potential of bringing high performance writes to the disk device level instead of the storage chassis/system .</tokentext>
<sentencetext>Performance of disk systems is determined by the cache system almost entirely (er... given same data transport, etc).Usually the write side is more complex because you don't want your device telling the o/s that the write committed when it is still in vulnerable cache.
Storage system mfrs recognize this and put in batteries to make sure the write side cache is able to survive power outages.
This adds weight as well as complexity and there is always residual risk in these systems.If you can make a non-volatile cache write that is effectively equivalent to writing to the magnetic media and do it without external support (eg batteries) then you have a better device.Also, this has the potential of bringing high performance writes to the disk device level instead of the storage chassis/system.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016220</id>
	<title>Windows ReadyBoost</title>
	<author>MobyDisk</author>
	<datestamp>1264938480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Windows ReadyBoost already does this.  Plug in an SSD, turn it on, and it caches frequently accessed files there.  The last benchmark I read on it said it wasn't any faster, though - probably because USB flash drives are really slow, since they are optimized for physical size, data density, and power consumption -- not speed.</p></htmltext>
<tokenext>Windows ReadyBoost already does this .
Plug-in an SSD , turn it on , and it caches frequently accessed files there .
The last benchmark I read on it said it was n't any faster though - probably because USB flash-drive SSDs are really slow since they are optimized for physical size , data density , and power consumption -- not speed .</tokentext>
<sentencetext>Windows ReadyBoost already does this.
Plug in an SSD, turn it on, and it caches frequently accessed files there.
The last benchmark I read on it said it wasn't any faster, though - probably because USB flash drives are really slow, since they are optimized for physical size, data density, and power consumption -- not speed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013402</id>
	<title>You mean like in...</title>
	<author>kungfuj35u5</author>
	<datestamp>1264968120000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>ZFS?  Hybrid storage pools have been around for a long while, and exist as a pretty well-balanced software solution to this problem.  Hybrid solid-state/magnetic disks have been on the market as well, using a similar technique.  There is nothing new or impressive about this device.</htmltext>
<tokenext>ZFS ?
Hybrid storage pools have been around for a long while , and exist as a pretty well balanced software solution to this problem .
Hybrid solid-state/magnetic disks were in the market as well which used a similar technique .
There is nothing new or impressive about this device .</tokentext>
<sentencetext>ZFS?
Hybrid storage pools have been around for a long while, and exist as a pretty well balanced software solution to this problem.
Hybrid solid-state/magnetic disks were in the market as well which used a similar technique.
There is nothing new or impressive about this device.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013398</id>
	<title>Pick the false statement</title>
	<author>sakdoctor</author>
	<datestamp>1264968120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>No software or driver update is required</p></div><div class="quote"><p>Some software is needed to achieve the magic</p></div></htmltext>
<tokenext>No software or driver update is requiredSome software is needed to achieve the magic</tokentext>
<sentencetext>No software or driver update is required. Some software is needed to achieve the magic.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015916</id>
	<title>Re:For those that didn't RTFA</title>
	<author>mariushm</author>
	<datestamp>1264937160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's not how I see it. You have to defragment the big drive first, to have all executables and OS files at the start of the big disk. Then you use the software they give you to fill up the SSD with the data at the start of the big disk and after a reboot, reads and writes to those first 32 GB or so of the big disk will pass through the SSD.</p></htmltext>
<tokenext>That 's not how I see it .
You have to defragment the big drive first , to have all executables and OS files at the start of the big disk .
Then you use the software they give you to fill up the SSD with the data at the start of the big disk and after a reboot , reads and writes to those first 32 GB or so of the big disk will pass through the SSD .</tokentext>
<sentencetext>That's not how I see it.
You have to defragment the big drive first, to have all executables and OS files at the start of the big disk.
Then you use the software they give you to fill up the SSD with the data at the start of the big disk and after a reboot, reads and writes to those first 32 GB or so of the big disk will pass through the SSD.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013708</parent>
</comment>
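Under mariushm's reading of the product, the controller's dispatch logic would be roughly the following sketch. The 32 GB figure comes from the comment above; the write-through behaviour (writing the front range to both devices so the HDD always holds a complete copy) is an assumption, and every name here is hypothetical.

```python
# LBAs below this bound are mirrored on the SSD (assuming 512-byte sectors).
SSD_BLOCKS = 32 * 1024**3 // 512

def route_read(lba, ssd, hdd):
    """Reads in the mirrored front range are served by the fast SSD."""
    return ssd[lba] if lba < SSD_BLOCKS else hdd[lba]

def route_write(lba, data, ssd, hdd):
    """Writes in the front range go to both devices (write-through),
    so the HDD stays complete even if the SSD fails."""
    if lba < SSD_BLOCKS:
        ssd[lba] = data
    hdd[lba] = data

ssd, hdd = {}, {}                                # dicts stand in for devices
route_write(100, b"os-file", ssd, hdd)           # front range: mirrored
route_write(SSD_BLOCKS + 5, b"bulk", ssd, hdd)   # beyond it: HDD only
assert route_read(100, ssd, hdd) == b"os-file"
assert 100 in ssd and 100 in hdd
assert (SSD_BLOCKS + 5) not in ssd
```

This also explains why defragmenting first matters in that reading: the scheme only accelerates whatever happens to sit in the first 32 GB of LBAs, so the OS files have to be packed there.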
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013838</id>
	<title>Re:Save your money...</title>
	<author>jgagnon</author>
	<datestamp>1264970280000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Except that then you're at USB speeds instead of SATA speeds.</p></htmltext>
<tokenext>Except that then you 're at USB speeds instead of SATA speeds .</tokentext>
<sentencetext>Except that then you're at USB speeds instead of SATA speeds.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013448</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</id>
	<title>Just a cache?</title>
	<author>Erich</author>
	<datestamp>1264968060000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Haven't disk manufacturers been doing this forever, using faster memories to cache disk?  I guess the difference now is that the memory is slower than DRAM and non-volatile so it isn't lost in the event of power failure?  Or maybe you can get more flash storage at a low price point?</htmltext>
<tokenext>Have n't disk manufacturers been doing this forever , using faster memories to cache disk ?
I guess the difference now is that the memory is slower than DRAM and non-volatile so it is n't lost in the event of power failure ?
Or maybe you can get more flash storage at a low price point ?</tokentext>
<sentencetext>Haven't disk manufacturers been doing this forever, using faster memories to cache disk?
I guess the difference now is that the memory is slower than DRAM and non-volatile so it isn't lost in the event of power failure?
Or maybe you can get more flash storage at a low price point?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018678</id>
	<title>Re:Why not just make an SSD cache controller?</title>
	<author>radish</author>
	<datestamp>1264956720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What advantage does that have over the OS-level disk cache? Why not just put more system RAM in for it to use, rather than adding a special device?</p></htmltext>
<tokenext>What advantage does that have to the OS level disk cache ?
Why not just put more system RAM in for that to use than add a special device ?</tokentext>
<sentencetext>What advantage does that have over the OS-level disk cache?
Why not just put more system RAM in for it to use, rather than adding a special device?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014572</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018018</id>
	<title>Re:Just a cache?</title>
	<author>Eil</author>
	<datestamp>1264950420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah, it's just a cache. But it's a really big cache.</p><p>Actually, it sounds a lot to me like a beige-box version of Intel's <a href="http://www.intel.com/design/flash/nand/turbomemory/index.htm" title="intel.com">Turbo Memory</a> [intel.com] thing for laptops, which only has drivers for Windows.</p></htmltext>
<tokenext>Yeah , it 's just a cache .
But it 's a really big cache.Actually , it sounds a lot to me like a beige-box version of Intel 's Turbo Memory [ intel.com ] thing for laptops , which only has drivers for Windows .</tokentext>
<sentencetext>Yeah, it's just a cache.
But it's a really big cache. Actually, it sounds a lot to me like a beige-box version of Intel's Turbo Memory [intel.com] thing for laptops, which only has drivers for Windows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013448</id>
	<title>Re:Save your money...</title>
	<author>Anonymous</author>
	<datestamp>1264968360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Or just plug a USB drive into any Windows 7 computer.</htmltext>
<tokenext>Or just plug in a usb drive into any Windows 7 computer .</tokentext>
<sentencetext>Or just plug a USB drive into any Windows 7 computer.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013392</id>
	<title>2.5" drives only</title>
	<author>gEvil (beta)</author>
	<datestamp>1264968120000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><i>The device takes the form of a 2.5in to 3.5in hard disk caddy with a couple of SATA connectors on the end.</i> <br>
<br>
Good job Claave! You apparently didn't even get to the second paragraph before submitting the article. You can't use a 2TB hard drive with this because there are no 2TB 2.5" drives yet.</htmltext>
<tokenext>The device takes the form of a 2.5in to 3.5in hard disk caddy with a couple of SATA connectors on the end .
Good job Claave !
You apparently did n't even get to the second paragraph before submitting the article .
You ca n't use a 2TB hard drive with this because there are no 2TB 2.5 " drives yet .</tokentext>
<sentencetext>The device takes the form of a 2.5in to 3.5in hard disk caddy with a couple of SATA connectors on the end.
Good job Claave!
You apparently didn't even get to the second paragraph before submitting the article.
You can't use a 2TB hard drive with this because there are no 2TB 2.5" drives yet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31017898</id>
	<title>Re:Just a cache?</title>
	<author>BikeHelmet</author>
	<datestamp>1264949340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>DRAM cache mostly helps with reading - not writing. I guess that's where the SSD comes in. Our filesystems just aren't set up to allow writing huge swathes of tiny files sequentially, so when random write speeds are important the SSD is king.</p><p>Although I suppose ZFS might be set up for exactly that... but few of the others are.</p><p>An HDD with a huge cache would be interesting. One company tried it a few years back.</p><p><a href="http://arstechnica.com/hardware/news/2007/09/coming-soon-hard-drives-with-1gb-ddr-ram-cache.ars" title="arstechnica.com">http://arstechnica.com/hardware/news/2007/09/coming-soon-hard-drives-with-1gb-ddr-ram-cache.ars</a> [arstechnica.com]</p><p>Based on this 5400RPM drive's performance, I imagine a WD Black (7200RPM with dual heads) would come close to maxing out SATA2 if it had a gigabyte or two of cache.</p></htmltext>
<tokenext>DRAM cache mostly helps with reading - not writing .
I guess that 's where the SSD comes in .
Our filesystems just are n't set up to allow writing huge swathes of tiny files sequentially , so when random write speeds are important the SSD is king.Although I suppose ZFS might be set up for exactly that... but few of the others are.An HDD with a huge cache would be interesting .
One company tried it a few years back.http : //arstechnica.com/hardware/news/2007/09/coming-soon-hard-drives-with-1gb-ddr-ram-cache.ars [ arstechnica.com ] Based on this 5400RPM drive 's performance , I imagine a WD Black ( 7200RPM with dual heads ) would come close to maxing out SATA2 if it had a gigabyte or two of cache .</tokentext>
<sentencetext>DRAM cache mostly helps with reading - not writing.
I guess that's where the SSD comes in.
Our filesystems just aren't set up to allow writing huge swathes of tiny files sequentially, so when random write speeds are important the SSD is king. Although I suppose ZFS might be set up for exactly that... but few of the others are. An HDD with a huge cache would be interesting.
One company tried it a few years back. http://arstechnica.com/hardware/news/2007/09/coming-soon-hard-drives-with-1gb-ddr-ram-cache.ars [arstechnica.com] Based on this 5400RPM drive's performance, I imagine a WD Black (7200RPM with dual heads) would come close to maxing out SATA2 if it had a gigabyte or two of cache.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013836</id>
	<title>Re:Holy carp!</title>
	<author>Anonymous</author>
	<datestamp>1264970280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Only a fucking dipshit would use RAID 0 for a system drive.</htmltext>
<tokenext>Only a fucking dipshit would use RAID 0 for a system drive .</tokentext>
<sentencetext>Only a fucking dipshit would use RAID 0 for a system drive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014030</id>
	<title>Re:Holy carp!</title>
	<author>obarthelemy</author>
	<datestamp>1264971360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Holy carp indeed... always loved that fish symbolism for the Christians.</p></htmltext>
<tokenext>Holy carp indeed... always loved that fish symbolism for the Christians .</tokentext>
<sentencetext>Holy carp indeed... always loved that fish symbolism for the Christians.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016888</id>
	<title>Re:Just a cache?</title>
	<author>Sycraft-fu</author>
	<datestamp>1264942200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The main problem with HDD cache is that it is so small relative to the space it is caching. Yes, HDs have cache, but 16MB and 32MB are common. There are a couple of 64MB ones, but that's it. That's for 500GB-2TB. You can't cache much data with that. Compare that to your CPU, which tends to have somewhere in the range of 4-8MB of L2/L3 cache, and usually RAM in the 2-8GB range. There's a lot less oversubscription of the cache, which makes it work better. A system like this has the potential to work a lot better since you can equalize the difference much better. You can have 64GB for a 2TB drive, instead of 64MB.</p><p>Also yes, non-volatility matters. You can't have a massive RAM cache on a disk if it means something could be written to it, power could fail, and then it never gets committed. With something like this you can write to it as often as you like, and the disk write can be delayed as long as you like, because it is non-volatile.</p><p>Another thing that kills disk performance is doing simultaneous reads and writes. The disk needs to read a bunch of data, but also needs to do some small writes during that. It would make sense to delay the writes and batch them, but you don't want to do that with a volatile cache or you could lose data. No problem here: flash is fast for random access, so reads and writes at the same time shouldn't be a big deal.</p></htmltext>
<tokenext>The main problem with HDD cache is it is so small with regards to the space it is caching .
Yes , HDs have cache but 16MB and 32MB are common .
There are a couple 64MB ones but that 's it .
That 's for 500GB-2TB .
Ca n't cache much data with that .
Compare that to your CPU which tend to have somewhere in the 4-8MB of L2/L3 cache and usually RAM in the 2-8GB range .
There 's a lot less oversubscription of the cache which makes it works better .
A system like this has the potential to work a lot better since you can equalize the difference much better .
You can have 64GB for a 2TB drive , instead of 64MB.Also yes , non volatility matters .
You ca n't go and have a massive RAM cache on a disk if it means something could be written to that , power could fail , and then it never gets committed .
Something like this you can write to it as often as you like and that can be delayed for disk write as long as you like because it is non volatile.That is another area that kills disk performance is doing simultaneous reads and writes .
Disk needs to read a bunch of data , but also needs to do some small writes during that .
Well , would make sense to delay the writes and batch them , but you do n't want to do that or you lose data .
No problem here , flash is fast for random access so reads and writes at the same time should be a big deal .</tokentext>
<sentencetext>The main problem with HDD cache is that it is so small relative to the space it is caching.
Yes, HDs have cache, but 16MB and 32MB are common.
There are a couple of 64MB ones, but that's it.
That's for 500GB-2TB.
You can't cache much data with that.
Compare that to your CPU, which tends to have somewhere in the range of 4-8MB of L2/L3 cache, and usually RAM in the 2-8GB range.
There's a lot less oversubscription of the cache, which makes it work better.
A system like this has the potential to work a lot better since you can equalize the difference much better.
You can have 64GB for a 2TB drive, instead of 64MB. Also yes, non-volatility matters.
You can't have a massive RAM cache on a disk if it means something could be written to it, power could fail, and then it never gets committed.
With something like this you can write to it as often as you like, and the disk write can be delayed as long as you like, because it is non-volatile. Another thing that kills disk performance is doing simultaneous reads and writes.
The disk needs to read a bunch of data, but also needs to do some small writes during that.
It would make sense to delay the writes and batch them, but you don't want to do that with a volatile cache or you could lose data.
No problem here: flash is fast for random access, so reads and writes at the same time shouldn't be a big deal.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
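The batching argument above can be made concrete with a toy write-back cache. Because the cache is assumed non-volatile, delaying and coalescing small writes is safe; all names and numbers here are illustrative, not taken from any real controller.

```python
class NonVolatileWriteBack:
    """Toy write-back cache: since the cache is assumed non-volatile,
    small writes can be held and coalesced safely instead of forcing
    a platter seek for every one."""

    def __init__(self, batch_size=4):
        self.dirty = {}            # lba -> data, held in "flash"
        self.batch_size = batch_size
        self.disk_ops = 0          # count of slow platter operations

    def write(self, lba, data):
        self.dirty[lba] = data     # rewrites of the same block coalesce
        if len(self.dirty) >= self.batch_size:
            self.flush()

    def flush(self):
        # One batched pass over the platter instead of one seek per write.
        self.disk_ops += 1
        self.dirty.clear()

cache = NonVolatileWriteBack(batch_size=4)
for i in range(8):
    cache.write(i, f"v{i}")        # eight writes, flushed in two batches
assert cache.disk_ops == 2
cache.write(0, "a")
cache.write(0, "b")
assert len(cache.dirty) == 1       # the rewrite coalesced before any seek
```

With a volatile cache this deferral would risk losing acknowledged data on power failure, which is exactly the trade-off the comment points out.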
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013884</id>
	<title>It seems logical</title>
	<author>obarthelemy</author>
	<datestamp>1264970580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>1- RAM systems work that way too: L1 cache, L2 cache, slow RAM, to compare to RAM cache (OS or controller), SSD, HD.</p><p>2- SSDs right now are very un-optimized: you've got to put, for example, your whole OS on them, even though I'd guess only 20-30% of the files are actually read frequently enough to justify being on the SSD... and probably 5-10% of the files are *written* frequently enough to justify NOT being on the SSD. So seeing the SSDs as a cache rather than a hard disk makes a whole lot of sense, and probably doubles or triples their efficiency, by letting them hold only files that best fit the SSD's strong points, and hence a lot more of those files.</p><p>My concern is that this "cache"- or "ready-boost"-like mechanism requires quite some intelligence: either, at the most basic level, to keep count of which sectors get read a lot and written not so much, or, at a higher level, to identify usage patterns and cache the appropriate files (boot, app launch, game play). I'm not sure where that intelligence lives in the product described... if it exists at all.</p></htmltext>
<tokenext>1- RAM systems work that way too : L1 cache , L2 cache , slow RAM , to compare to RAM cache ( OS or controller ) , SSD , HD.2- SSDs right now are very un-optimized : you 've got to put , for example , your whole OS on them , even though I 'd guess 20-30 \ % of the files are actually read frequently enough to justify being on the SSD... and probably 5-10 \ % of the files are * written * frequently enough to justify NOT being on the SSD .
So seeing the SSDs as a cache rather than a hard disk makes a whole lot of sense , and probably doubles or triples their efficiency , by letting them hold only files that best fit the SSD strong points , and hence a lot more of those files.My concern is that this " cache " - or " ready-boost " -like mechanism requires quite some intelligence , either , at the most basic level , to keep count of which sectors get read a lot and written not so much , or even , at a higher level , to identify usage patterns and cache the appropriate files ( boot , app launch , game play ) .
I 'm not sure where that intelligence goes on with the product described... if it goes on at all .</tokentext>
<sentencetext>1- RAM systems work that way too: L1 cache, L2 cache, slow RAM, to compare to RAM cache (OS or controller), SSD, HD. 2- SSDs right now are very un-optimized: you've got to put, for example, your whole OS on them, even though I'd guess only 20-30% of the files are actually read frequently enough to justify being on the SSD... and probably 5-10% of the files are *written* frequently enough to justify NOT being on the SSD.
So seeing the SSDs as a cache rather than a hard disk makes a whole lot of sense, and probably doubles or triples their efficiency, by letting them hold only files that best fit the SSD's strong points, and hence a lot more of those files. My concern is that this "cache"- or "ready-boost"-like mechanism requires quite some intelligence: either, at the most basic level, to keep count of which sectors get read a lot and written not so much, or, at a higher level, to identify usage patterns and cache the appropriate files (boot, app launch, game play).
I'm not sure where that intelligence lives in the product described... if it exists at all.</sentencetext>
</comment>
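The frequency-counting idea in the comment above can be sketched as a toy tracker that promotes read-hot, write-cold blocks to a size-limited SSD set. All names and thresholds here are illustrative assumptions, not SilverStone's actual algorithm:

```python
# Toy sketch of the "intelligence" discussed above: track per-block read/write
# counts and promote blocks that are read often but written rarely to a
# size-limited SSD set. Thresholds and names are hypothetical.
from collections import defaultdict

class HotBlockTracker:
    def __init__(self, ssd_capacity_blocks, promote_after_reads=8, max_writes=2):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.ssd = set()                      # blocks currently cached on SSD
        self.capacity = ssd_capacity_blocks
        self.promote_after_reads = promote_after_reads
        self.max_writes = max_writes

    def on_read(self, block):
        self.reads[block] += 1
        if (block not in self.ssd
                and self.reads[block] >= self.promote_after_reads
                and self.writes[block] <= self.max_writes
                and len(self.ssd) < self.capacity):
            self.ssd.add(block)               # read-hot, write-cold: promote
        return "SSD" if block in self.ssd else "HDD"

    def on_write(self, block):
        self.writes[block] += 1
        if self.writes[block] > self.max_writes:
            self.ssd.discard(block)           # write-hot blocks don't belong on SSD
        return "HDD"                          # writes always go to the hard disk
```

A real controller would also need eviction when the SSD fills and decaying counters so old hotness fades, but the core bookkeeping is this simple.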
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013432</id>
	<title>Re:2.5" drives only</title>
	<author>gEvil (beta)</author>
	<datestamp>1264968300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Derrrr. Nevermind. It helps to also look at the <a href="http://images.bit-tech.net/news_images/2010/02/silverstone-announces-hybrid-ssd-hard-disk/1.jpg" title="bit-tech.net">pictures</a> [bit-tech.net] when commenting on TFA.</htmltext>
<tokenext>Derrrr .
Nevermind. It helps to also look at the pictures [ bit-tech.net ] when commenting on TFA .</tokentext>
<sentencetext>Derrrr.
Nevermind. It helps to also look at the pictures [bit-tech.net] when commenting on TFA.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013392</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016330</id>
	<title>Re:'front-end'?</title>
	<author>Ant P.</author>
	<datestamp>1264939020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I am guessing it would be the end that's at the front</p></htmltext>
<tokenext>I am guessing it would be the end that 's at the front</tokentext>
<sentencetext>I am guessing it would be the end that's at the front</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013952</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013932</id>
	<title>Breakthrough</title>
	<author>thethibs</author>
	<datestamp>1264970820000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Good Grief, Alice! They've invented cache!</htmltext>
<tokenext>Good Grief , Alice !
They 've invented cache !</tokentext>
<sentencetext>Good Grief, Alice!
They've invented cache!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014880</id>
	<title>Re:Your sig</title>
	<author>St.Creed</author>
	<datestamp>1264932660000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>That's because you might confuse him with other people with low numbers, who only post things and never read anything...</p></htmltext>
<tokenext>That 's because you might confuse him with other people with low numbers , who only post things and never read anything.. .</tokentext>
<sentencetext>That's because you might confuse him with other people with low numbers, who only post things and never read anything...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014404</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014808</id>
	<title>Re:Sounds like bullshit to me</title>
	<author>izomiac</author>
	<datestamp>1264932420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Essentially this is a high capacity "bulk storage" drive and a fast "working" drive.  So the potential customers seem to be either those who are too lazy to copy their data from "storage" to "working" manually, or those so predictable that that a computer program can cache stuff before they ask for it.  My guess is that the predictive algorithm is basically "the user asked for this a second ago, let's copy it to the SSD in case the user has a short term memory deficiency".<br> <br>

OTOH, this would help with some stupid programs that are unable to cope with multiple drives/partitions.  E.g. my last laptop's TV tuner insisted upon storing everything on an NTFS D:\, so either I could map D:\ to a USB harddrive and not use it when I was traveling, or limit myself to the few gigabytes I had to spare on my laptop's internal drive.</htmltext>
<tokenext>Essentially this is a high capacity " bulk storage " drive and a fast " working " drive .
So the potential customers seem to be either those who are too lazy to copy their data from " storage " to " working " manually , or those so predictable that that a computer program can cache stuff before they ask for it .
My guess is that the predictive algorithm is basically " the user asked for this a second ago , let 's copy it to the SSD in case the user has a short term memory deficiency " .
OTOH , this would help with some stupid programs that are unable to cope with multiple drives/partitions .
E.g. my last laptop 's TV tuner insisted upon storing everything on an NTFS D : \ , so either I could map D : \ to a USB harddrive and not use it when I was traveling , or limit myself to the few gigabytes I had to spare on my laptop 's internal drive .</tokentext>
<sentencetext>Essentially this is a high capacity "bulk storage" drive and a fast "working" drive.
So the potential customers seem to be either those who are too lazy to copy their data from "storage" to "working" manually, or those so predictable that that a computer program can cache stuff before they ask for it.
My guess is that the predictive algorithm is basically "the user asked for this a second ago, let's copy it to the SSD in case the user has a short term memory deficiency".
OTOH, this would help with some stupid programs that are unable to cope with multiple drives/partitions.
E.g. my last laptop's TV tuner insisted upon storing everything on an NTFS D:\, so either I could map D:\ to a USB harddrive and not use it when I was traveling, or limit myself to the few gigabytes I had to spare on my laptop's internal drive.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013376</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013418</id>
	<title>Isn't this just a fancy cache?</title>
	<author>Anonymous</author>
	<datestamp>1264968240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>I DNRTFA, but this really just seems like a fancy version of cache to me....</p></htmltext>
<tokenext>I DNRTFA , but this really just seems like a fancy version of cache to me... .</tokentext>
<sentencetext>I DNRTFA, but this really just seems like a fancy version of cache to me....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013710</id>
	<title>Shouldn't this be integrated into the controller?</title>
	<author>blackketter</author>
	<datestamp>1264969680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It seems to me that the natural evolution in hard drives would be to build the flash cache onto the controller board of the hard drive itself.  Is any drive manufacturer building this kind of hybrid flash/magnetic drive?</p></htmltext>
<tokenext>It seems to me that the natural evolution in hard drives would be to build the flash cache on to the controller board on the hard drive .
Is any drive manufacturer building this kind of hybrid flash/magnetic drive ?</tokentext>
<sentencetext>It seems to me that the natural evolution in hard drives would be to build the flash cache on to the controller board on the hard drive.
Is any drive manufacturer building this kind of hybrid flash/magnetic drive?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014354</id>
	<title>Re:'front-end'?</title>
	<author>Anonymous</author>
	<datestamp>1264930020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That would be the purple part.</p><p>Oh, disk...sorry.</p></htmltext>
<tokenext>That would be the purple part.Oh , disk...sorry .</tokentext>
<sentencetext>That would be the purple part.Oh, disk...sorry.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013952</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408</id>
	<title>Holy carp!</title>
	<author>NitroWolf</author>
	<datestamp>1264968180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well... it looks like there finally might be a reason to spend the money on an SSD.  Up until now, it would be a nice speed boost, but the cost:performance ratio is so out of whack for SSDs that purchasing one is ridiculous unless you have some very specific needs.  For 95% of the people who have purchased them, they just want the biggest e-peen.  That's fine and all, but my days of swinging around the biggest e-peen are over, so I've held off buying an SSD until the prices drop and capacity goes WAYYYY up.</p><p>However, with this particular device, it actually becomes worth it to spring for a lower-capacity, fast SSD (for naturally less money than the higher-capacity ones) that will cache the files I use the most.  The question is, and it wasn't really clear from the article unfortunately, is it a real-time "mirror" - in so far as, over time, if I start using some files more and others less, will the drive start caching those newer files instead of the older ones I'm using less?  Assuming it does (since it would be kind of useless if not), this makes an 80 GB SSD a viable option!</p><p>However, the one drawback I see to this is that my current RAID 0 setup would be unusable and I'd have to switch back to using one drive.  That's not a terrible thing, as I've never been too thrilled with the whole RAID 0 thing, and if the minor speed advantages it imparts are fully mitigated by the SSD, switching over to a single 2TB drive is awesome.</p><p>I would definitely shell out some bucks for this solution, assuming it works as advertised.</p></htmltext>
<tokenext>Well... it looks like there finally might be a reason to spend the money on an SSD .
Up until now , it would be a nice speed boost , but the cost : performance ratio is so out of whack for SSDs , it just makes purchasing one ridiculous unless you have some very specific needs .
For 95 \ % of the people who have purchased them , they just want the biggest e-peen .
That 's fine and all , but my days of swinging around the biggest e-peen are over , so I 've held off buying an SSD until the prices drop and capacity goes WAYYYY up.However , with this particular device , it actually makes it worth it to spring for a lower capacity , fast SSD ( for naturally less money than the higher capacity ones ) that will cache the files I use the most .
The question is , and it was n't really clear from the article unfortunately , is it a real time " mirror " - in so far as over time , if I start using more file and others less , will the drive start caching those newer files that I use more than the older ones I am using less ?
Assuming it does ( since it would be kind of useless if not ) , this makes an 80 GB SSD a viable option ! However , the one drawback I see to this is my current RAID 0 setup would be unusable and I 'd have to switch back to using one drive .
That 's not a terrible thing , as I 've never been too thrilled with the whole RAID 0 thing and if the minor speed advantages it imparts are fully mitigated by the SSD - switching over to a single 2TB drive is awesome.I would definitely shell out some bucks for this solution , assuming it works as advertised .</tokentext>
<sentencetext>Well... it looks like there finally might be a reason to spend the money on an SSD.
Up until now, it would be a nice speed boost, but the cost:performance ratio is so out of whack for SSDs, it just makes purchasing one ridiculous unless you have some very specific needs.
For 95\% of the people who have purchased them, they just want the biggest e-peen.
That's fine and all, but my days of swinging around the biggest e-peen are over, so I've held off buying an SSD until the prices drop and capacity goes WAYYYY up.However, with this particular device, it actually makes it worth it to spring for a lower capacity, fast SSD (for naturally less money than the higher capacity ones) that will cache the files I use the most.
The question is, and it wasn't really clear from the article unfortunately, is it a real time "mirror" - in so far as over time, if I start using more file and others less, will the drive start caching those newer files that I use more than the older ones I am using less?
Assuming it does (since it would be kind of useless if not), this makes an 80 GB SSD a viable option!However, the one drawback I see to this is my current RAID 0 setup would be unusable and I'd have to switch back to using one drive.
That's not a terrible thing, as I've never been too thrilled with the whole RAID 0 thing and if the minor speed advantages it imparts are fully mitigated by the SSD - switching over to a single 2TB drive is awesome.I would definitely shell out some bucks for this solution, assuming it works as advertised.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015470</id>
	<title>Re:You mean like in...</title>
	<author>Demonantis</author>
	<datestamp>1264935420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I usually just put the OS on a SCSI drive and data on a separate drive (possibly RAID if desired). This is not really interesting news unless it's done at a hardware level.</htmltext>
<tokenext>I usually just put the OS on a scsi and data on a separate drive ( possible raid if desired ) .
This is not really interesting news unless the guy does it at a hardware level .</tokentext>
<sentencetext>I usually just put the OS on a scsi and data on a separate drive(possible raid if desired).
This is not really interesting news unless the guy does it at a hardware level.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014280</id>
	<title>Makes no sense...</title>
	<author>Spazmania</author>
	<datestamp>1264929540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From <a href="http://www.silverstonetek.com/qa/qa_contents.php?pno=HDDBOOST&amp;area=usa" title="silverstonetek.com">http://www.silverstonetek.com/qa/qa_contents.php?pno=HDDBOOST&amp;area=usa</a> [silverstonetek.com]</p><p><em>After the initial mirroring of data is completed, SSD and HDD will have the same front-end data. HDDBOOST's controller chip will then set data read priority to SSD to take advantage of SSD's much faster read speed. HDDBOOST's priority will be determined by the following rules:</em></p><p><em>1.When data is present on both drives, read from SSD.<br>2.When data is not present on both drives, read from HDD.<br>3.Data will only be written to HDD.</em></p><p><em>[...]</em></p><p><em>In normal operating system environment, a system drive gets written onto constantly until the system is turned off. Compared to using SSD only as the main system drive, HDDBOOST will only write to SSD once sequentially during system boot up when it activates mirror backup. This significantly reduces the wear and tear that normally occurs when writing data to SSD.</em></p><p>This makes no sense. How is it supposed to read from the SSD if the SSD doesn't have a current copy of the data because you only wrote it to the hard disk?</p></htmltext>
<tokenext>From http : //www.silverstonetek.com/qa/qa \ _contents.php ? pno = HDDBOOST&amp;area = usa [ silverstonetek.com ] After the initial mirroring of data is completed , SSD and HDD will have the same front -end data .
HDDBOOST 's controller chip will then set data read priority to SSD to take advantage of SSD 's much faster read speed .
HDDBOOST 's priority will be determined by the following rules : 1.When data is present on both drives , read from SSD.2.When data is not present on both drives , read from HDD.3.Data will only be written to HDD. [ .. .
] In normal operating system environment , a system drive gets written onto constantly until the system is turned off .
Compared to using SSD only as the main system drive , HDDBOOST will only write to SSD once sequentially during system boot up when it activates mirror backup .
This significantly reduces the wear and tear that normally occurs when writing data to SSD.This makes no sense .
How is it supposed to read from the SSD if the SSD does n't have a current copy of the data because you only wrote it to the hard disk ?</tokentext>
<sentencetext>From http://www.silverstonetek.com/qa/qa\_contents.php?pno=HDDBOOST&amp;area=usa [silverstonetek.com]After the initial mirroring of data is completed, SSD and HDD will have the same front -end data.
HDDBOOST's controller chip will then set data read priority to SSD to take advantage of SSD's much faster read speed.
HDDBOOST's priority will be determined by the following rules:1.When data is present on both drives, read from SSD.2.When data is not present on both drives, read from HDD.3.Data will only be written to HDD.[...
]In normal operating system environment, a system drive gets written onto constantly until the system is turned off.
Compared to using SSD only as the main system drive, HDDBOOST will only write to SSD once sequentially during system boot up when it activates mirror backup.
This significantly reduces the wear and tear that normally occurs when writing data to SSD.This makes no sense.
How is it supposed to read from the SSD if the SSD doesn't have a current copy of the data because you only wrote it to the hard disk?</sentencetext>
</comment>
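The three quoted FAQ rules can be written out as a small dispatch, which also makes the comment's objection concrete: since writes go only to the HDD, a written block's SSD copy is stale and must be ignored until the next boot-time mirror pass. This is a hypothetical model of how the rules could be made consistent, not SilverStone's documented behavior:

```python
# Sketch of the quoted read/write rules plus the staleness issue raised above:
# writes go only to the HDD, so any SSD copy of a written block must be treated
# as invalid until the next mirror refresh. Hypothetical model, not the product.
class HDDBoostModel:
    def __init__(self, mirrored_blocks):
        self.on_ssd = set(mirrored_blocks)   # blocks mirrored to SSD at boot
        self.stale = set()                   # SSD copies invalidated by writes

    def read(self, block):
        # Rule 1: present (and still current) on both drives -> read from SSD.
        if block in self.on_ssd and block not in self.stale:
            return "SSD"
        # Rule 2: otherwise -> read from HDD.
        return "HDD"

    def write(self, block):
        # Rule 3: data is only written to the HDD...
        # ...which leaves any SSD copy of this block out of date.
        if block in self.on_ssd:
            self.stale.add(block)
        return "HDD"

    def remirror(self):
        # Boot-time sequential mirror pass refreshes the SSD copies.
        self.stale.clear()
```

Under this reading, recently written data is always served from the HDD, and the SSD only speeds up blocks untouched since the last mirror.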
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013708</id>
	<title>For those that didn't RTFA</title>
	<author>HannethCom</author>
	<datestamp>1264969620000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>This solution uses two 3.5 inch drive bays in your computer, one for your large platter drive, the other for the caddy with an SSD.<br>
<br>
Some software is installed (Windows only) that makes the two drives look like one.<br>
<br>
The most used files from the large drive are copied to the smaller SSD. When files cached on the SSD are requested, they are read from there; if they do not exist there, the request is passed on to the bigger drive. If a file is being used enough, it will be copied to the SSD at the same time as the information is sent to the computer. You will not get SSD speeds in this case.<br>
<br>
Yes, this is just using an SSD as a cache.<br>
<br>
The product does not come with SSD storage; you have to buy an SSD of your choosing as well as this caddy.</htmltext>
<tokenext>This solution uses two 3.5 inch drive bays in your computer , one for your large platter drive , the other for the caddy with a SSD drive .
Some software is installed ( Windows only ) that makes the two drives look like one .
The most used files from the large drive are copies to the smaller SSD drive .
When files cached on the SSD drive are requested , they are read from there , if they do not exist there the request is passed onto the bigger drive .
If the file is being used enough it will be copied to the SSD drive at the same time as the information is getting sent to the computer .
You will not get SSD drive speeds in this case .
Yes , this is just using a SSD drive as a cache .
The product does not come with SSD storage , you have to buy a SSD drive of your choosing as well as this caddy .</tokentext>
<sentencetext>This solution uses two 3.5 inch drive bays in your computer, one for your large platter drive, the other for the caddy with a SSD drive.
Some software is installed (Windows only) that makes the two drives look like one.
The most used files from the large drive are copies to the smaller SSD drive.
When files cached on the SSD drive are requested, they are read from there, if they do not exist there the request is passed onto the bigger drive.
If the file is being used enough it will be copied to the SSD drive at the same time as the information is getting sent to the computer.
You will not get SSD drive speeds in this case.
Yes, this is just using a SSD drive as a cache.
The product does not come with SSD storage, you have to buy a SSD drive of your choosing as well as this caddy.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014526</id>
	<title>writes to disk</title>
	<author>cenc</author>
	<datestamp>1264930860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I am not sure about the speed advantage of reads from disk, given the problem of what to prioritize; but I could see the advantage of writes to disk.</p><p>Does that make any sense?</p></htmltext>
<tokenext>I am not sure about the speed advantage of reads from disk , given the problem of what to prioritize ; but I could see the advantage of writes to disk.Does that make any sense ?</tokentext>
<sentencetext>I am not sure about the speed advantage of reads from disk, given the problem of what to prioritize; but I could see the advantage of writes to disk.Does that make any sense?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018172</id>
	<title>Re:Save your money...</title>
	<author>illumin8</author>
	<datestamp>1264951680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Or, you can just use ZFS and turn on the L2ARC, which will use the SSD as a cache for the hard disks and not need any custom hardware.</p></div></blockquote><p>Is anything like L2ARC available for Linux?  I would love to have something like this in our database servers.</p></htmltext>
<tokenext>Or , you can just use ZFS and turn on the L2ARC , which will use the SSD as a cache for the hard disks and not need any custom hardware.Is anything like L2ARC available for Linux ?
I would love to have something like this in our database servers .</tokentext>
<sentencetext>Or, you can just use ZFS and turn on the L2ARC, which will use the SSD as a cache for the hard disks and not need any custom hardware.Is anything like L2ARC available for Linux?
I would love to have something like this in our database servers.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31020220</id>
	<title>If it exists, how to set it up?</title>
	<author>Ed Avis</author>
	<datestamp>1265280720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>OK let's suppose you have a fairly vanilla Linux desktop system, with one spinning hard disk and one SSD.  How do you set things up in software to use the SSD as a kind of cache for the hard disk?</htmltext>
<tokenext>OK let 's suppose you have a fairly vanilla Linux desktop system , with one spinning hard disk and one SSD .
How do you set things up in software to use the SSD as a kind of cache for the hard disk ?</tokentext>
<sentencetext>OK let's suppose you have a fairly vanilla Linux desktop system, with one spinning hard disk and one SSD.
How do you set things up in software to use the SSD as a kind of cache for the hard disk?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013402</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015128</id>
	<title>Fast stable write caching is the big win</title>
	<author>billstewart</author>
	<datestamp>1264933920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>For a really wide range of applications, what you really need is fast caching of writes to stable storage, so that your database transaction logs, file system journals, and modified inodes are saved and the application can go on to its next steps while the SSD box copies the data out to higher-latency, cheaper rotating machinery.  Read-caching is something the operating system can do for itself in RAM (though the SSD box can also do its own prediction, especially for things like track-at-a-time reads of the disk), and the SSD can often keep a bigger read cache than the OS can, because it's less performance-critical than caching in system RAM.</p></htmltext>
<tokenext>For a really wide range of applications , what you really need is fast caching of writes to stable storage , so your database transaction logs , file system journals , and modified inodes are saved and the application can go on to its next steps while the SSD box copies itself to higher-latency cheaper rotating machinery .
Read-caching is something the operating system can do for itself in RAM ( though the SSD box can also do its own prediction , especially for things like track-at-a-time reads of the disk ) , and the SSD can often keep a bigger read cache than the OS can because it 's less performance-critical than caching in system RAM .</tokentext>
<sentencetext>For a really wide range of applications, what you really need is fast caching of writes to stable storage, so your database transaction logs, file system journals, and modified inodes are saved and the application can go on to its next steps while the SSD box copies itself to higher-latency cheaper rotating machinery.
Read-caching is something the operating system can do for itself in RAM (though the SSD box can also do its own prediction, especially for things like track-at-a-time reads of the disk), and the SSD can often keep a bigger read cache than the OS can because it's less performance-critical than caching in system RAM.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013376</parent>
</comment>
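The fast-stable-write-cache idea in the comment above amounts to a write-behind buffer: acknowledge the write once it reaches fast stable storage, and destage to the slow disk off the critical path. A minimal sketch, with all names illustrative rather than any real product's API:

```python
# Toy write-behind buffer illustrating the idea above: a write is acknowledged
# as soon as it lands in fast stable storage (the SSD) and is destaged to the
# slower disk later, in the background. Illustrative, not a real product API.
from collections import OrderedDict

class WriteBehindCache:
    def __init__(self):
        self.ssd_log = OrderedDict()   # fast stable staging area (block -> data)
        self.hdd = {}                  # slow backing store

    def write(self, block, data):
        self.ssd_log[block] = data     # durable on SSD: caller can proceed now
        return "acked"

    def destage(self, max_blocks=None):
        # Later, and off the critical path, copy staged writes to the HDD
        # in arrival order. Returns how many blocks were moved.
        moved = 0
        while self.ssd_log and (max_blocks is None or moved < max_blocks):
            block, data = self.ssd_log.popitem(last=False)
            self.hdd[block] = data
            moved += 1
        return moved

    def read(self, block):
        # Prefer the staged copy, since it may be newer than the HDD's.
        return self.ssd_log.get(block, self.hdd.get(block))
```

This is exactly the pattern that helps transaction logs and journals: the latency-sensitive fsync-style write completes at SSD speed, and the HDD only ever sees relaxed background traffic.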
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015328</id>
	<title>Re:Save your money...</title>
	<author>Pojut</author>
	<datestamp>1264934940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Screw what is required to do it, I'm more concerned about trying to find a 2TB drive that has decent reliability.  So far, every 2 TB hard drive I have looked at has had a lot of problems, at least according to online reviews from multiple sources (forums, newegg, IT guys I know, etc.)  Obviously, the people who complain tend to be the loudest, but still...I haven't come across a single 2TB drive that didn't have a LARGE number of complainers.</p><p>The same can't be said for 1TB or 1.5TB drives.</p></htmltext>
<tokenext>Screw what is required to do it , I 'm more concerned about trying to find a 2TB drive that has decent reliability .
So far , every 2 TB hard drive I have looked at has had a lot of problems , at least according to online reviews from multiple sources ( forums , newegg , IT guys I know , etc .
) Obviously , the people who complain tend to be the loudest , but still...I have n't come across a single 2TB drive that did n't have a LARGE number of complainers.The same ca n't be said for 1TB or 1.5TB drives .</tokentext>
<sentencetext>Screw what is required to do it, I'm more concerned about trying to find a 2TB drive that has decent reliability.
So far, every 2 TB hard drive I have looked at has had a lot of problems, at least according to online reviews from multiple sources (forums, newegg, IT guys I know, etc.
)  Obviously, the people who complain tend to be the loudest, but still...I haven't come across a single 2TB drive that didn't have a LARGE number of complainers.The same can't be said for 1TB or 1.5TB drives.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31023758</id>
	<title>Re:Windows Only</title>
	<author>Anonymous</author>
	<datestamp>1265305860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>a block level cache manager wouldn't know to deallocate space on the SSD immediately when a file is deleted for instance</p></div><p>That is what the TRIM command is for, on drives and OSes that support it. However, only Windows 7 fully supports it now, and only when the hard drive reports an RPM of 0 (indicating an SSD). There are apps that can compute and send TRIM commands on other OSes, but these are obviously not automatic upon deletion. Upon file deletion, a TRIM command is sent to the drive indicating where the file used to be, telling the drive that nothing useful is stored there and it can be "garbage collected". This is mainly useful in that the built-in wear leveling or other algorithms do not have to "save" that block of data when doing housekeeping - it can be overwritten without saving the contents.</p><p>Certain filesystems would benefit from this more than others. For instance, the journal in journaling filesystems, and the "Hot Zone" of HFS+, would benefit from the additional access speed boost. The problem is that these areas are updated very frequently, which is not good for flash. In other words, the uses for which flash would be best utilized are hardest on the flash.</p><p>I also vote for huge DRAM caches instead.</p></htmltext>
<tokenext>a block level cache manager would n't know to deallocate space on the SSD immediately when a file is deleted for instanceThat is what the TRIM command is for on drives and OSes that support it .
However only 7 fully supports it now and only when the hard drive reports a RPM of 0 ( indicating an SSD ) .
There are apps that can compute and send TRIM commands in other OSes , but these are not automatic upon deletion obviously .
Upon file deletion , a TRIM command is sent to the drive indicating where the file used to be , telling the drive that nothing useful is stored there and it can be " garbage collected " .
This mainly is useful in that the built-in wear leveling or other algorithms does not have to " save " that block of data when it is doing housekeeping - it can be overrwritten without saving the contents.Certain filesystems would benefit from this more than others .
For instance , the journal in journaling filesystems , and the " Hot Zone " of HFS + would benefit from the additional access speed boost.. The problem is that these areas are updated very frequently , which is not good for flash .
In other words , the uses for which flash would be best utilized are hardest on the flash.I also vote for huge DRAM caches instead .</tokentext>
<sentencetext>a block level cache manager wouldn't know to deallocate space on the SSD immediately when a file is deleted for instance
That is what the TRIM command is for on drives and OSes that support it.
However, only Windows 7 fully supports it now, and only when the hard drive reports an RPM of 0 (indicating an SSD).
There are apps that can compute and send TRIM commands on other OSes, but these are obviously not automatic upon deletion.
Upon file deletion, a TRIM command is sent to the drive indicating where the file used to be, telling the drive that nothing useful is stored there and it can be "garbage collected".
This is mainly useful in that the built-in wear leveling or other algorithms do not have to "save" that block of data when doing housekeeping - it can be overwritten without preserving the contents.
Certain filesystems would benefit from this more than others.
For instance, the journal in journaling filesystems and the "Hot Zone" of HFS+ would benefit from the additional access speed boost.
The problem is that these areas are updated very frequently, which is not good for flash.
In other words, the uses for which flash would be best utilized are hardest on the flash.
I also vote for huge DRAM caches instead.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013480</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014990</id>
	<title>Re:Save your money...</title>
	<author>yabos</author>
	<datestamp>1264933200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Um, do you think this device is aimed at anyone that would know how to set up a ZFS pool like that?  All you have to do with this thing is plug in 2 drives and install a driver, if their claims are true.  It's only 33 Euros, which is not a bad price, although who knows if they'll come out with drivers for other OSes.</htmltext>
<tokentext>Um , do you think this device is aimed at anyone that would know how to set up a ZFS pool like that ?
All you have to do with this thing is plug in 2 drives and install a driver , if their claims are true .
It 's only 33 Euros , which is not a bad price , although who knows if they 'll come out with drivers for other OSes .</tokentext>
<sentencetext>Um, do you think this device is aimed at anyone that would know how to set up a ZFS pool like that?
All you have to do with this thing is plug in 2 drives and install a driver if their claims are true.
It's only 33 Euros, which is not a bad price, although who knows if they'll come out with drivers for other OSes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014096</id>
	<title>zfs hybrid pool</title>
	<author>Anonymous</author>
	<datestamp>1264928460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>OpenSolaris can use an SSD as a cache device for ZFS volumes. More details here: http://www.filibeto.org/~aduritz/truetrue/solaris10/zfs/zfs-what-next-sdc09.pdf</p></htmltext>
<tokentext>Opensolaris can use SSD as cache device for zfs volumes .
More details here http : //www.filibeto.org/ ~ aduritz/truetrue/solaris10/zfs/zfs-what-next-sdc09.pdf</tokentext>
<sentencetext>Opensolaris can use SSD as cache device for zfs volumes.
More details here http://www.filibeto.org/~aduritz/truetrue/solaris10/zfs/zfs-what-next-sdc09.pdf</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014404</id>
	<title>Your sig</title>
	<author>nuckfuts</author>
	<datestamp>1264930260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You know, you don't need a sig that announces "Slashdot reader since 1997". We can all see the number beside your nickname.</htmltext>
<tokentext>You know , you do n't need a sig that announces " Slashdot reader since 1997 " .
We can all see the number beside your nickname .</tokentext>
<sentencetext>You know, you don't need a sig that announces "Slashdot reader since 1997".
We can all see the number beside your nickname.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013464</id>
	<title>Re:Just a cache?</title>
	<author>eln</author>
	<datestamp>1264968420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>High-end storage devices have been using SSD for years to speed things up.  It basically allows for a larger cache than RAM (for less money), and also means non-volatile cache like you noted.  Of course, how much of a speed gain you get depends on what your workload looks like and how good their caching algorithms are.  So, I'm not impressed at all by this little device, but I would be impressed if it came with a new and more efficient caching algorithm.</htmltext>
<tokentext>High-end storage devices have been using SSD for years to speed things up .
It basically allows for a larger cache than RAM ( for less money ) , and also means non-volatile cache like you noted .
Of course , how much of a speed gain you get depends on what your workload looks like and how good their caching algorithms are .
So , I 'm not impressed at all by this little device , but I would be impressed if it came with a new and more efficient caching algorithm .</tokentext>
<sentencetext>High-end storage devices have been using SSD for years to speed things up.
It basically allows for a larger cache than RAM (for less money), and also means non-volatile cache like you noted.
Of course, how much of a speed gain you get depends on what your workload looks like and how good their caching algorithms are.
So, I'm not impressed at all by this little device, but I would be impressed if it came with a new and more efficient caching algorithm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014572</id>
	<title>Why not just make an SSD cache controller?</title>
	<author>Anonymous</author>
	<datestamp>1264931040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Why hasn't someone already made this, just like caching IDE controllers? (Terminology wrong, of course, in that SATA doesn't need a "controller" in the IDE sense because it's host-to-host.)   <br> <br>
An inline device that you plug into the SATA line.   It should be the size of a USB memory stick with a connector at each end, or with an extension cord &amp; connector at one end.  Give it 2 to 8 GB of memory, again like a USB memory stick. Different sizes could be different price points.
<br> <br>
Monitor all reads.  Cache them while you have empty pages.  Obviously the first thing to be read and cached will be the boot sequence on the first powerup after installation, which is probably what you want most anyway, and 2GB is bigger than your core working set even on Windows.  On any read, if it's in the cache, return the cached copy (obviously); otherwise pass it along to disk.  The best design will completely avoid delay on the return data by letting it pass through and monitoring it multidrop.  Maintain a reference counter on the N pages cached, and on the next favorite N pages (at least);  any time a page in the cache (or reference count list) is written, invalidate it, replacing it the next time you see one of your "next favorite" pages go past.  Ditto if a "next favorite" reference count gets higher than the lowest of the N live pages.<br> <br>
Remember, this thing never instigates action on its own; it just piggybacks on system activity.  Eventually it stabilizes on your OS, your most-used programs, etc.  When you do an update, things get invalidated for a while - your next-most-frequently-used pages replace them, until the reference counts go back up.  It's self-tuning.  The operating system doesn't even know it's there - no driver, no changes, no special code. The disk drive doesn't know it's there.  For a frame delay on the SATA request you get acceleration on everything; and if you parse the request in parallel to match, you can keep the delay below a full frame.</htmltext>
<tokentext>Why has n't someone already made this , just like caching IDE controllers ?
( Terminology wrong , of course , in that SATA does n't need a " controller " in the IDE sense because it 's host-to-host ) An inline device that you plug in the SATA line .
Should be the size of a USB memory stick with a connector at each end , or with an extension cord &amp; connector at one end .
Give it 2 to 8 GB of memory , again like a USB memory stick .
Different sizes could be different price points .
Monitor all reads .
Cache them while you have empty pages .
Obviously the first thing to be read and cached will be the boot sequence on the first powerup after installation , which is probably what you want most anyway , and 2GB is bigger than your core working set even on Windows .
On any read , if in cache , return cached copy ( obviously ) , otherwise pass along to disk .
Best design will completely avoid delay on the return data by letting it pass through and monitoring it multidrop .
Maintain a reference counter on the N pages cached , and on the next favorite N pages ( at least ) ; any time a page in cache ( or reference count list ) is written , invalidate it , replacing the next time you see one of your " next favorite " pages go past .
Ditto if a " next favorite " reference count gets higher than the lowest of the N live pages .
Remember , this thing never instigates action on its own , just piggybacks on system activity .
Eventually it stabilizes on your OS , your most-used programs , etc .
When you do an update , things get invalidated for a while - your next-frequently-used replace them , until the reference counts go back up .
It 's self-tuning .
The operating system does n't even know it 's there - no driver , no changes , no special code .
The disk drive does n't know it 's there .
For a frame delay on the SATA request you get acceleration on everything ; and if you parse the request in parallel to match you can keep the delay below a full frame .</tokentext>
<sentencetext>Why hasn't someone already made this, just like caching IDE controllers?
(Terminology wrong, of course, in that SATA doesn't need a "controller" in the IDE sense because it's host-to-host)    
An inline device that you plug in the SATA line.
Should be the size of a USB memory stick with a connector at each end, or with an extension cord &amp; connector at one end.
Give it 2 to 8 GB of memory, again like a USB memory stick.
Different sizes could be different price points.
Monitor all reads.
Cache them while you have empty pages.
Obviously the first thing to be read and cached will be the boot sequence on the first powerup after installation, which is probably what you want most anyway, and 2GB is bigger than your core working set even on Windows.
On any read, if in cache, return cached copy (obviously), otherwise pass along to disk.
Best design will completely avoid delay on the return data by letting it pass through and monitoring it multidrop.
Maintain a reference counter on the N pages cached, and on the next favorite N pages (at least);  any time a page in cache (or reference count list) is written, invalidate it, replacing the next time you see one of your "next favorite" pages go past.
Ditto if a "next favorite" reference count gets higher than the lowest of the N live pages.
Remember, this thing never instigates action on its own, just piggybacks on system activity.
Eventually it stabilizes on your OS, your most-used programs, etc.
When you do an update, things get invalidated for a while - your next-frequently-used replace them, until the reference counts go back up.
It's self-tuning.
The operating system doesn't even know it's there - no driver, no changes, no special code.
The disk drive doesn't know it's there.
For a frame delay on the SATA request you get acceleration on everything; and if you parse the request in parallel to match you can keep the delay below a full frame.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014128</id>
	<title>Re:Just a cache?</title>
	<author>Anonymous</author>
	<datestamp>1264928640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yes, it's been called a cache, and also a buffer.<br>It's even been done with non-volatile memory before as well.<br>This is just the latest version to come out; it seems more 'universal' than many of the others, and has a much better price point than previous ones.<br>(I remember one from the late 80s made with volatile RAM that cost around $1000. Worked beautifully, but way too expensive for the public.)</p></htmltext>
<tokentext>Yes , it 's been called a cache , and also a buffer .
It 's even been done with non-volatile memory before as well .
This is just the latest version to come out , seems more 'universal ' than many of the others , and has a much better price point than previous ones .
( I remember one from the late 80s made with volatile ram that cost around $ 1000 .
Worked beautifully , but way too expensive for the public .
)</tokentext>
<sentencetext>Yes, it's been called a cache, and also a buffer.
It's even been done with non-volatile memory before as well.
This is just the latest version to come out, seems more 'universal' than many of the others, and has a much better price point than previous ones.
(I remember one from the late 80s made with volatile ram that cost around $1000.
Worked beautifully, but way too expensive for the public.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016526</id>
	<title>Re:Just a cache?</title>
	<author>should\_be\_linear</author>
	<datestamp>1264940220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>DRAM can only be used as a read cache. An SSD is much larger and works as both a read and a write cache.</htmltext>
<tokentext>DRAM can only be used as read-cache .
SSD is much larger and is read and write cache .</tokentext>
<sentencetext>DRAM can only be used as read-cache.
SSD is much larger and is read and write cache.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016224</id>
	<title>Why doesn't</title>
	<author>Anonymous</author>
	<datestamp>1264938540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Linux do this in software?</p></htmltext>
<tokentext>Linux do this in software ?</tokentext>
<sentencetext>Linux do this in software?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013342</id>
	<title>first post</title>
	<author>Anonymous</author>
	<datestamp>1264967880000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>first post!</p></htmltext>
<tokentext>first post !</tokentext>
<sentencetext>first post!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013952</id>
	<title>'front-end'?</title>
	<author>janap</author>
	<datestamp>1264970940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"'front-end' of your hard disk"?</p><p>What does that even mean?</p></htmltext>
<tokenext>" 'front-end ' of your hard disk " ? What does that even mean ?</tokentext>
<sentencetext>"'front-end' of your hard disk"?What does that even mean?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018462</id>
	<title>Re:Windows Only</title>
	<author>Anonymous</author>
	<datestamp>1264954320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>There is the turn-off for me. If I were to use something like this I would want an OS-agnostic solution. Of course that would mean the caching would have to be done at the block level rather than the file level, so it might not be able to be as bright (a block level cache manager wouldn't know to deallocate space on the SSD immediately when a file is deleted, for instance), but it should be quite practical to design an algorithm that keeps the most often used blocks in the cache (the SSD) without the whole thing being needlessly wiped the first time you copy a massive data file in (you wouldn't want that 20GB file to be written to the SSD the first time it is laid down, at the expense of dropping blocks from OS startup files and such, in case it is hardly ever accessed again - for instance, an image of a Blu-ray disc that you are copying to another disc would not want to touch the cache, as it'll probably be written once, read once, then wiped). How this block-based cache management algorithm would work in detail is left as an exercise for the reader...</p></div><p>I can't imagine not wanting to use Windows so badly that I end up with no solution. And worse, when someone else doesn't want to use Windows and tells me "it is my honor" to figure it out. If God wanted me to write anything more complicated than a Bubble Sort then FSM wouldn't have given us NewEgg.</p>
	</htmltext>
<tokentext>There is the turn-off for me .
If I were to use something like this I would want an OS-agnostic solution .
Of course that would mean the caching would have to be done at the block level rather than the file level so it might not be able to be as bright ( a block level cache manager would n't know to deallocate space on the SSD immediately when a file is deleted for instance ) , but it should be quite practical to design an algorithm that keeps the most often used blocks in the cache ( the SSD ) without the whole thing being needlessly wiped the first time you copy a massive data file in ( you would n't want that 20GB file to be written to the SSD the first time it is laid down , at the expense of dropping blocks from OS startup files and such , in case it is hardly ever accessed again - for instance an image of a Blu-ray disc that you are copying to another disc would not want to touch the cache as it 'll probably be written once , read once then wiped ) .
How this block-based cache management algorithm would work in detail is left as an exercise for the reader ...
I ca n't imagine not wanting to use Windows so badly that I end up with no solution .
And worse , when someone else does n't want to use Windows and tells me " it is my honor " to figure it out .
If God wanted me to write anything more complicated than a Bubble Sort then FSM would n't have given us NewEgg .</tokentext>
<sentencetext>There is the turn off for me.
If I were to use something like this I would want an OS agnostic solution.
Of course that would mean the caching would have to be done at the block level rather than the file level, so it might not be able to be as bright (a block level cache manager wouldn't know to deallocate space on the SSD immediately when a file is deleted, for instance), but it should be quite practical to design an algorithm that keeps the most often used blocks in the cache (the SSD) without the whole thing being needlessly wiped the first time you copy a massive data file in (you wouldn't want that 20GB file to be written to the SSD the first time it is laid down, at the expense of dropping blocks from OS startup files and such, in case it is hardly ever accessed again - for instance, an image of a Blu-ray disc that you are copying to another disc would not want to touch the cache, as it'll probably be written once, read once, then wiped).
How this block-based cache management algorithm would work in detail is left as an exercise for the reader...
I can't imagine not wanting to use Windows so badly that I end up with no solution.
And worse, when someone else doesn't want to use Windows and tells me "it is my honor" to figure it out.
If God wanted me to write anything more complicated than a Bubble Sort then FSM wouldn't have given us NewEgg.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013480</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013838
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013448
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31017898
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31017584
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013708
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013960
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013536
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015470
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013402
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31020220
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013402
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31023758
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013480
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013490
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013418
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014354
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013952
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018018
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016526
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014016
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015216
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013418
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013464
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013480
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013432
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013392
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013520
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013392
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015328
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013792
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013376
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014128
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016330
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013952
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014404
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018678
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014572
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014990
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015916
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013708
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013376
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014436
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013600
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018172
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_02_03_1814248_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014030
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014280
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013704
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014572
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018678
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013408
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014030
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013792
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013836
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013952
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016330
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014354
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013710
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013342
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013600
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014436
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013402
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015470
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31020220
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013884
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016224
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015072
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013392
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013520
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013432
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013480
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31023758
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013398
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013354
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015328
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013386
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018172
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013960
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014990
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013448
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013838
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013708
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31017584
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015916
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013404
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013316
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013374
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31017898
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016888
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31018018
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013464
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014404
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014880
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013536
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015982
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014016
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31016526
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013418
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015216
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013490
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_02_03_1814248.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31013376
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31015128
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_02_03_1814248.31014808
</commentlist>
</conversation>
