<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_01_01_0019233</id>
	<title>Phase Change Memory vs. Storage As We Know It</title>
	<author>timothy</author>
	<datestamp>1262348640000</datestamp>
	<htmltext>storagedude writes <i>"Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant. The author sees phase change memory as a <a href="http://www.enterprisestorageforum.com/technology/features/article.php/3856121">technology that could unseat storage networks</a>. From the article: 'While years away, PCM has the potential to move data storage and storage networks from the center of data centers to the periphery. I/O would only have to be conducted at the start and end of the day, with data parked in memory while applications are running. In short, disk becomes the new tape."</i></htmltext>
<tokentext>storagedude writes " Access to data is n't keeping pace with advances in CPU and memory , creating an I/O bottleneck that threatens to make data storage irrelevant .
The author sees phase change memory as a technology that could unseat storage networks .
From the article : 'While years away , PCM has the potential to move data storage and storage networks from the center of data centers to the periphery .
I/O would only have to be conducted at the start and end of the day , with data parked in memory while applications are running .
In short , disk becomes the new tape . '
"</tokentext>
<sentencetext>storagedude writes "Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant.
The author sees phase change memory as a technology that could unseat storage networks.
From the article: 'While years away, PCM has the potential to move data storage and storage networks from the center of data centers to the periphery.
I/O would only have to be conducted at the start and end of the day, with data parked in memory while applications are running.
In short, disk becomes the new tape.'
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30626794</id>
	<title>Suddenly glacial melting</title>
	<author>Anonymous</author>
	<datestamp>1262431440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>has taken on a sense of urgency.  We're losing Terabytes of zeros a day people!!</p><p>But seriously, I'm not sure about their claim that a PCM cell works better the smaller it gets.  At some point the material will be subject to statistical mechanical fluctuations which could wipe your memory.</p></htmltext>
<tokentext>has taken on a sense of urgency .
We 're losing Terabytes of zeros a day people ! !
But seriously , I 'm not sure about their claim that a PCM cell works better the smaller it gets .
At some point the material will be subject to statistical mechanical fluctuations which could wipe your memory .</tokentext>
<sentencetext>has taken on a sense of urgency.
We're losing Terabytes of zeros a day people!!
But seriously, I'm not sure about their claim that a PCM cell works better the smaller it gets.
At some point the material will be subject to statistical mechanical fluctuations which could wipe your memory.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611544</id>
	<title>Slashdot is computer science</title>
	<author>Anonymous</author>
	<datestamp>1293804360000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I thought most of slashdot is computer-science only.  What is the big-Oh notation of PCM?  Most of slashdot are not engineers--they are computer scientists.  Thus, they essentially are wannabe mathematicians.</p></htmltext>
<tokentext>I thought most of slashdot is computer-science only .
What is the big-Oh notation of PCM ?
Most of slashdot are not engineers--they are computer scientists .
Thus , they essentially are wannabe mathematicians .</tokentext>
<sentencetext>I thought most of slashdot is computer-science only.
What is the big-Oh notation of PCM?
Most of slashdot are not engineers--they are computer scientists.
Thus, they essentially are wannabe mathematicians.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612392</id>
	<title>Re:Why the vapourware tag?</title>
	<author>Ropati</author>
	<datestamp>1293817080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Kevin has this right; what an obtuse article.</p><p>Henry Newman is talking about PC storage, not enterprise storage.  He discusses all disk IO performance in MBs/sec, meaning sequential, when in reality very little (disk level) IO for the enterprise is sequential.  The numbers here are flawed, as is the characterization of storage.</p><p>Storage is where we keep our data.  Keeping data is a central requirement of information technology.  It will never be a peripheral feature.</p><p>Presently the real IO bottleneck is the spinning platter and the requirements of getting a read/write head to the right place quickly.  Newer solid state storage devices will alleviate this bottleneck in the very near future.  Perhaps PCM is the solution, but I for one will wait for a GB/$ threshold at which time the winning solid state storage will be available to everyone.</p><p>Mr. Newman talks about inter-computer bus speeds as not keeping up with CPUs and memory, when in fact they are keeping up.  The place where data transport still can't keep up is serially on a single transport (wire or optical).   Networked (switchable) data needs to be serial single transport for a number of obvious reasons.  Like the platter, this is a physical limitation and not easily surmounted.</p><p>If and when we get +10GB/sec consumer networks, storage networks (transporting SCSI blocks) will become a thing of the past as we pass and store all our data in an application-aware protocol.</p></htmltext>
<tokentext>Kevin has this right ; what an obtuse article .
Henry Newman is talking about PC storage , not enterprise storage .
He discusses all disk IO performance in MBs/sec , meaning sequential , when in reality very little ( disk level ) IO for the enterprise is sequential .
The numbers here are flawed , as is the characterization of storage .
Storage is where we keep our data .
Keeping data is a central requirement of information technology .
It will never be a peripheral feature .
Presently the real IO bottleneck is the spinning platter and the requirements of getting a read/write head to the right place quickly .
Newer solid state storage devices will alleviate this bottleneck in the very near future .
Perhaps PCM is the solution , but I for one will wait for a GB/$ threshold at which time the winning solid state storage will be available to everyone .
Mr. Newman talks about inter-computer bus speeds as not keeping up with CPUs and memory , when in fact they are keeping up .
The place where data transport still ca n't keep up is serially on a single transport ( wire or optical ) .
Networked ( switchable ) data needs to be serial single transport for a number of obvious reasons .
Like the platter , this is a physical limitation and not easily surmounted .
If and when we get + 10GB/sec consumer networks , storage networks ( transporting SCSI blocks ) will become a thing of the past as we pass and store all our data in an application-aware protocol .</tokentext>
<sentencetext>Kevin has this right; what an obtuse article.
Henry Newman is talking about PC storage, not enterprise storage.
He discusses all disk IO performance in MBs/sec, meaning sequential, when in reality very little (disk level) IO for the enterprise is sequential.
The numbers here are flawed, as is the characterization of storage.
Storage is where we keep our data.
Keeping data is a central requirement of information technology.
It will never be a peripheral feature.
Presently the real IO bottleneck is the spinning platter and the requirements of getting a read/write head to the right place quickly.
Newer solid state storage devices will alleviate this bottleneck in the very near future.
Perhaps PCM is the solution, but I for one will wait for a GB/$ threshold at which time the winning solid state storage will be available to everyone.
Mr. Newman talks about inter-computer bus speeds as not keeping up with CPUs and memory, when in fact they are keeping up.
The place where data transport still can't keep up is serially on a single transport (wire or optical).
Networked (switchable) data needs to be serial single transport for a number of obvious reasons.
Like the platter, this is a physical limitation and not easily surmounted.
If and when we get +10GB/sec consumer networks, storage networks (transporting SCSI blocks) will become a thing of the past as we pass and store all our data in an application-aware protocol.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611522</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611602</id>
	<title>Numonyx will probably make it happen</title>
	<author>AllynM</author>
	<datestamp>1293805080000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Numonyx announced some good advances in PCM a few months back:</p><p><a href="http://www.pcper.com/comments.php?nid=7930" title="pcper.com">http://www.pcper.com/comments.php?nid=7930</a> [pcper.com]</p><p>Allyn Malventano<br>Storage Editor, PC Perspective</p></htmltext>
<tokentext>Numonyx announced some good advances in PCM a few months back : http://www.pcper.com/comments.php?nid=7930 [ pcper.com ]
Allyn Malventano , Storage Editor , PC Perspective</tokentext>
<sentencetext>Numonyx announced some good advances in PCM a few months back: http://www.pcper.com/comments.php?nid=7930 [pcper.com]
Allyn Malventano, Storage Editor, PC Perspective</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374</id>
	<title>We're almost there already</title>
	<author>Lord Byron II</author>
	<datestamp>1293802860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory? As long as you don't suffer a system crash, you can unload it back to disk when you're done.</p></htmltext>
<tokentext>When you can pick up 4GB of RAM memory for a song , why not load the whole OS into memory ?
As long as you do n't suffer a system crash , you can unload it back to disk when you 're done .</tokentext>
<sentencetext>When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?
As long as you don't suffer a system crash, you can unload it back to disk when you're done.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611350</id>
	<title>We've heard this forever...</title>
	<author>blahplusplus</author>
	<datestamp>1293802560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... the death of X tech here; it will eventually die once the groundwork has been laid to migrate to a better system.</p></htmltext>
<tokentext>... the death of X tech here ; it will eventually die once the groundwork has been laid to migrate to a better system .</tokentext>
<sentencetext>... the death of X tech here; it will eventually die once the groundwork has been laid to migrate to a better system.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613130</id>
	<title>Why not just normal RAM?</title>
	<author>Casandro</author>
	<datestamp>1262343480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I mean what's the advantage of phase change memory in this scenario? If you lose power to your CPU or your system crashes, you will have effectively lost your memory content anyhow. So you might as well open your files with mmap and have lots of RAM. The system will automagically figure out what to swap to disk if RAM isn't enough, as well as regularly back up the contents to disk.</p></htmltext>
<tokentext>I mean what 's the advantage of phase change memory in this scenario ?
If you lose power to your CPU or your system crashes , you will have effectively lost your memory content anyhow .
So you might as well open your files with mmap and have lots of RAM .
The system will automagically figure out what to swap to disk if RAM is n't enough , as well as regularly back up the contents to disk .</tokentext>
<sentencetext>I mean what's the advantage of phase change memory in this scenario?
If you lose power to your CPU or your system crashes, you will have effectively lost your memory content anyhow.
So you might as well open your files with mmap and have lots of RAM.
The system will automagically figure out what to swap to disk if RAM isn't enough, as well as regularly back up the contents to disk.</sentencetext>
</comment>
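A minimal C sketch of the mmap approach Casandro describes: map a file read/write and let the kernel decide what stays resident and when it gets written back. This assumes a Linux/POSIX system; the path is illustrative and error handling is abbreviated.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/data.bin", O_RDWR);   /* illustrative path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; the OS pages it in and out as needed. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(p, "hello", 5);          /* write through the mapping  */
    msync(p, st.st_size, MS_ASYNC); /* request eventual writeback */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}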
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611434</id>
	<title>CD-RW</title>
	<author>Anonymous</author>
	<datestamp>1293803460000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p>CD-RW is/was phase change technology...</p></htmltext>
<tokentext>CD-RW is/was phase change technology ...</tokentext>
<sentencetext>CD-RW is/was phase change technology...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611308</id>
	<title>fp</title>
	<author>Anonymous</author>
	<datestamp>1293802080000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>frosty piss!!</p><p>oops, I mean first post!</p></htmltext>
<tokentext>frosty piss ! !
oops , I mean first post !</tokentext>
<sentencetext>frosty piss!!
oops, I mean first post!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611722</id>
	<title>Re:We're almost there already</title>
	<author>Anonymous</author>
	<datestamp>1293806700000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p><div class="quote"><p>When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?</p></div><p>For what it's worth, you can do this with most Linux distros if you know what you're doing.  Linux is pretty well designed to act from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there all in RAM.  I've been doing this on my Debian (stable) boxes when I realized I couldn't afford a decent SSD and wanted a super-responsive system.  Firefox (well, Iceweasel) starts cold in about two seconds on an eeepc when set up this way, and it starts cold virtually instantly on my C2D box.  In fact, everything seems instant on my C2D box.  It's really snazzy.</p><p><div class="quote"><p>As long as you don't suffer a system crash, you can unload it back to disk when you're done.</p></div><p>Depending on what you're doing, even that may not be an issue.  If you're doing massive database stuff, then yes.  However, if your disk I/O isn't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy.  From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging.  Personally, what little I do need saved is simply text files - my notes in class, my homework, etc, and so I can just write to a partition on the harddrive that isn't loaded to RAM.  It doesn't suffer at all from the harddrive I/O - I can't really type faster then a harddrive can write.<br> <br>

tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely.  It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.</p></div>
	</htmltext>
<tokentext>When you can pick up 4GB of RAM memory for a song , why not load the whole OS into memory ?
For what it 's worth , you can do this with most Linux distros if you know what you 're doing .
Linux is pretty well designed to act from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there all in RAM .
I 've been doing this on my Debian ( stable ) boxes when I realized I could n't afford a decent SSD and wanted a super-responsive system .
Firefox ( well , Iceweasel ) starts cold in about two seconds on an eeepc when set up this way , and it starts cold virtually instantly on my C2D box .
In fact , everything seems instant on my C2D box .
It 's really snazzy .
As long as you do n't suffer a system crash , you can unload it back to disk when you 're done .
Depending on what you 're doing , even that may not be an issue .
If you 're doing massive database stuff , then yes .
However , if your disk I/O is n't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the " hard " copy .
From your POV everything is instant , but any crash will only result in the loss of data from however far behind the harddrive copy is lagging .
Personally , what little I do need saved is simply text files - my notes in class , my homework , etc , and so I can just write to a partition on the harddrive that is n't loaded to RAM .
It does n't suffer at all from the harddrive I/O - I ca n't really type faster than a harddrive can write .
tl;dr : It 's perfectly feasible for ( some ) people to do as you 've described , and it works quite nicely .
It 's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology , it 's available today .</tokentext>
<sentencetext>When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?
For what it's worth, you can do this with most Linux distros if you know what you're doing.
Linux is pretty well designed to act from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there all in RAM.
I've been doing this on my Debian (stable) boxes when I realized I couldn't afford a decent SSD and wanted a super-responsive system.
Firefox (well, Iceweasel) starts cold in about two seconds on an eeepc when set up this way, and it starts cold virtually instantly on my C2D box.
In fact, everything seems instant on my C2D box.
It's really snazzy.
As long as you don't suffer a system crash, you can unload it back to disk when you're done.
Depending on what you're doing, even that may not be an issue.
If you're doing massive database stuff, then yes.
However, if your disk I/O isn't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy.
From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging.
Personally, what little I do need saved is simply text files - my notes in class, my homework, etc, and so I can just write to a partition on the harddrive that isn't loaded to RAM.
It doesn't suffer at all from the harddrive I/O - I can't really type faster than a harddrive can write.
tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely.
It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374</parent>
</comment>
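A hedged sketch of the "mirror daemon" idea in the comment above: watch a ramdisk directory with Linux inotify and copy each file back to disk after it is written and closed. The mount points are hypothetical, only a flat directory is handled, and a real daemon would batch and debounce the copies.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/inotify.h>
#include <unistd.h>

#define RAMDIR  "/mnt/ramdisk"   /* hypothetical ramdisk mount */
#define DISKDIR "/var/mirror"    /* hypothetical on-disk copy  */

int main(void)
{
    int in = inotify_init();
    if (in < 0) { perror("inotify_init"); return 1; }
    if (inotify_add_watch(in, RAMDIR, IN_CLOSE_WRITE) < 0) {
        perror("inotify_add_watch"); return 1;
    }

    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    for (;;) {
        ssize_t n = read(in, buf, sizeof buf);
        if (n <= 0) break;
        for (char *p = buf; p < buf + n;) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->len > 0) {       /* a file was written and closed */
                char cmd[4096];
                snprintf(cmd, sizeof cmd, "cp -p '%s/%s' '%s/%s'",
                         RAMDIR, ev->name, DISKDIR, ev->name);
                system(cmd);         /* mirror it to the disk copy */
            }
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    return 0;
}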
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613070</id>
	<title>This does not kill the SAN</title>
	<author>Anonymous</author>
	<datestamp>1262342100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't think the author knows much about the purpose of a SAN. A SAN is not just a disk array giving you faster access to disks. Local storage that is faster does not help you with concurrent access (clusters), rollback capability (snapshots, mirror copies, point-in-time server recovery), site recovery (off-sited mirrors) or substantial data compression gain through technologies like deduplication.</p><p>As for speed, my SAN is giving me write performance in the range of 600MB/sec per client. I access my storage over a 10gbit ethernet backbone. Certainly suboptimal, but my blades have a pair of nics and no disks. It's cheap, very fast and I have 3-4 rollback points for my ESX cluster. That's around 200 VMs in two sites, active/active, cross recoverable.</p><p>The SAN is not going away.</p><p>(In case any of you are designing and want the part list, I'm talking about: Cisco Nexus 5020 10Gbe backbone, Bluearc Mercury 100 cluster with disks slung on a HDS USP-VM. 64GB cache depth on each path and a few hundred TB of disk. Servers are HP BL495 G6's, with Chelsio cards. Chassis has BNT (HP) 10gbe switches. I haven't even started with Jumbos yet, I can do better, but this is pretty good for now. All up it was just over a mil AUD.)</p><p>What's this? It's a faster storage device. That's a fairly small part in a SAN.</p></htmltext>
<tokentext>I do n't think the author knows much about the purpose of a SAN .
A SAN is not just a disk array giving you faster access to disks .
Local storage that is faster does not help you with concurrent access ( clusters ) , rollback capability ( snapshots , mirror copies , point-in-time server recovery ) , site recovery ( off-sited mirrors ) or substantial data compression gain through technologies like deduplication .
As for speed , my SAN is giving me write performance in the range of 600MB/sec per client .
I access my storage over a 10gbit ethernet backbone .
Certainly suboptimal , but my blades have a pair of nics and no disks .
It 's cheap , very fast and I have 3-4 rollback points for my ESX cluster .
That 's around 200 VMs in two sites , active/active , cross recoverable .
The SAN is not going away .
( In case any of you are designing and want the part list , I 'm talking about : Cisco Nexus 5020 10Gbe backbone , Bluearc Mercury 100 cluster with disks slung on a HDS USP-VM .
64GB cache depth on each path and a few hundred TB of disk .
Servers are HP BL495 G6 's , with Chelsio cards .
Chassis has BNT ( HP ) 10gbe switches .
I have n't even started with Jumbos yet , I can do better , but this is pretty good for now .
All up it was just over a mil AUD . )
What 's this ?
It 's a faster storage device .
That 's a fairly small part in a SAN .</tokentext>
<sentencetext>I don't think the author knows much about the purpose of a SAN.
A SAN is not just a disk array giving you faster access to disks.
Local storage that is faster does not help you with concurrent access (clusters), rollback capability (snapshots, mirror copies, point-in-time server recovery), site recovery (off-sited mirrors) or substantial data compression gain through technologies like deduplication.
As for speed, my SAN is giving me write performance in the range of 600MB/sec per client.
I access my storage over a 10gbit ethernet backbone.
Certainly suboptimal, but my blades have a pair of nics and no disks.
It's cheap, very fast and I have 3-4 rollback points for my ESX cluster.
That's around 200 VMs in two sites, active/active, cross recoverable.
The SAN is not going away.
(In case any of you are designing and want the part list, I'm talking about: Cisco Nexus 5020 10Gbe backbone, Bluearc Mercury 100 cluster with disks slung on a HDS USP-VM.
64GB cache depth on each path and a few hundred TB of disk.
Servers are HP BL495 G6's, with Chelsio cards.
Chassis has BNT (HP) 10gbe switches.
I haven't even started with Jumbos yet, I can do better, but this is pretty good for now.
All up it was just over a mil AUD.)
What's this?
It's a faster storage device.
That's a fairly small part in a SAN.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611760</id>
	<title>Re:The 70's called. They want their I/O methods ba</title>
	<author>Guy Harris</author>
	<datestamp>1293807300000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p><div class="quote"><p>From TFA:</p><blockquote><div><p>There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be over written.  There are lots of possible hints, but there is no standard way of providing file hints...</p></div></blockquote><p>
Ya, we had that back in the stone-age and Multics would have been poster-child for this type of thinking, but it was a *bitch* and made portability problematic.</p></div><p>No, Multics would have been the poster child for "there's no I/O, there's just paging" - file system I/O was done in Multics by mapping the file into your address space and referring to it as if it were memory.  ("Multi-segment files" were just directories with a bunch of real files in them, each no larger than the maximum size of a segment.  I/O was done through read/write calls, but those were implemented by mapping the file, or the segments of a multi-segment file, into the address space and copying to/from the mapped segment.)</p><p><div class="quote"><p>I think VMS has some of this type of capability with their <a href="http://en.wikipedia.org/wiki/Files-11" title="wikipedia.org">Files 11</a> [wikipedia.org] support - any VMS people care to comment. Unix (and most current OS) sees everything as a stream of bytes, in most cases, and this is much simpler.
</p></div><p>"Seeing everything as a stream of bytes" is orthogonal to "a hint that the file will be read sequentially".  See, for example, <a href="http://linux.die.net/man/2/fadvise" title="die.net">fadvise() in Linux</a> [die.net], or some of the FILE\_FLAG\_ options in <a href="http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx" title="microsoft.com">CreateFile() in Windows</a> [microsoft.com] (Windows being another OS that shows a file as a seekable stream of bytes).</p></div>
	</htmltext>
<tokentext>From TFA : There is no method to provide hints about file usage ; for example , you might want to have a hint that says the file will be read sequentially , or a hint that a file might be overwritten .
There are lots of possible hints , but there is no standard way of providing file hints ...
Ya , we had that back in the stone-age and Multics would have been the poster-child for this type of thinking , but it was a * bitch * and made portability problematic .
No , Multics would have been the poster child for " there 's no I/O , there 's just paging " - file system I/O was done in Multics by mapping the file into your address space and referring to it as if it were memory .
( " Multi-segment files " were just directories with a bunch of real files in them , each no larger than the maximum size of a segment .
I/O was done through read/write calls , but those were implemented by mapping the file , or the segments of a multi-segment file , into the address space and copying to/from the mapped segment . )
I think VMS has some of this type of capability with their Files 11 [ wikipedia.org ] support - any VMS people care to comment ?
Unix ( and most current OS ) sees everything as a stream of bytes , in most cases , and this is much simpler .
" Seeing everything as a stream of bytes " is orthogonal to " a hint that the file will be read sequentially " .
See , for example , fadvise ( ) in Linux [ die.net ] , or some of the FILE_FLAG_ options in CreateFile ( ) in Windows [ microsoft.com ] ( Windows being another OS that shows a file as a seekable stream of bytes ) .</tokentext>
<sentencetext>From TFA: There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be overwritten.
There are lots of possible hints, but there is no standard way of providing file hints...
Ya, we had that back in the stone-age and Multics would have been the poster-child for this type of thinking, but it was a *bitch* and made portability problematic.
No, Multics would have been the poster child for "there's no I/O, there's just paging" - file system I/O was done in Multics by mapping the file into your address space and referring to it as if it were memory.
("Multi-segment files" were just directories with a bunch of real files in them, each no larger than the maximum size of a segment.
I/O was done through read/write calls, but those were implemented by mapping the file, or the segments of a multi-segment file, into the address space and copying to/from the mapped segment.)
I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment?
Unix (and most current OS) sees everything as a stream of bytes, in most cases, and this is much simpler.
"Seeing everything as a stream of bytes" is orthogonal to "a hint that the file will be read sequentially".
See, for example, fadvise() in Linux [die.net], or some of the FILE_FLAG_ options in CreateFile() in Windows [microsoft.com] (Windows being another OS that shows a file as a seekable stream of bytes).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611600</parent>
</comment>
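To make the hinting interface concrete, a short C sketch using posix_fadvise(), the Linux/POSIX call Guy Harris points to: it tells the kernel the file will be read sequentially so read-ahead can be more aggressive. The path is illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/var/log/syslog", O_RDONLY);   /* illustrative path */
    if (fd < 0) { perror("open"); return 1; }

    /* Hint for the whole file (len 0 = to EOF): sequential access. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    char buf[65536];
    while (read(fd, buf, sizeof buf) > 0)
        ;   /* consume the stream; the kernel prefetches ahead of us */

    close(fd);
    return 0;
}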
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30619856</id>
	<title>The Facts</title>
	<author>Anonymous</author>
	<datestamp>1262371620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Actually the Amiga uses the better technology - the architecture has been here for a while. Intel bought that out - the Itanium and PS3 are clones of the Amiga - so yes, a better architecture/system does exist, and the real problem is the x86 architecture - 1947 technology invented by Motorola, left in 1969. The x86 (north and south bridge) is the problem, not the storage.</p></htmltext>
<tokentext>Actually the Amiga uses the better technology - the architecture has been here for a while .
Intel bought that out - the Itanium and PS3 are clones of the Amiga - so yes , a better architecture/system does exist , and the real problem is the x86 architecture - 1947 technology invented by Motorola , left in 1969 .
The x86 ( north and south bridge ) is the problem , not the storage .</tokentext>
<sentencetext>Actually the Amiga uses the better technology - the architecture has been here for a while.
Intel bought that out - the Itanium and PS3 are clones of the Amiga - so yes, a better architecture/system does exist, and the real problem is the x86 architecture - 1947 technology invented by Motorola, left in 1969.
The x86 (north and south bridge) is the problem, not the storage.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611750</id>
	<title>It is about time</title>
	<author>Anonymous</author>
	<datestamp>1293807120000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I am with Linus on this one<br>Linus is right<br>The man makes sense<br>He is absolutely correct on this one</p></htmltext>
<tokentext>I am with Linus on this one
Linus is right
The man makes sense
He is absolutely correct on this one</tokentext>
<sentencetext>I am with Linus on this one
Linus is right
The man makes sense
He is absolutely correct on this one</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30618466</id>
	<title>Re:We're almost there already</title>
	<author>Anonymous</author>
	<datestamp>1262357460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?</i>
</p><p>On any remotely modern OS, the whole OS is *already* "loaded into memory" if you have enough of it.  It's called a disk cache.</p></htmltext>
<tokentext>When you can pick up 4GB of RAM memory for a song , why not load the whole OS into memory ?
On any remotely modern OS , the whole OS is * already * " loaded into memory " if you have enough of it .
It 's called a disk cache .</tokentext>
<sentencetext> When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?
On any remotely modern OS, the whole OS is *already* "loaded into memory" if you have enough of it.
It's called a disk cache.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374</parent>
</comment>
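One way to watch the disk cache this comment describes, as a small C sketch for Linux: mmap a file and ask mincore() how many of its pages are already resident in RAM. Run it twice on a large file and the second run will typically report far more cached pages. The path is illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/bin/bash", O_RDONLY);   /* illustrative path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    long psz = sysconf(_SC_PAGESIZE);
    size_t pages = (st.st_size + psz - 1) / psz;

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned char *vec = malloc(pages);
    if (mincore(p, st.st_size, vec) == 0) {  /* which pages are resident? */
        size_t resident = 0;
        for (size_t i = 0; i < pages; i++)
            resident += vec[i] & 1;
        printf("%zu of %zu pages already in the page cache\n",
               resident, pages);
    }

    free(vec);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}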
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612682</id>
	<title>Threatens to make data storage irrelevant? Hardly!</title>
	<author>Anonymous</author>
	<datestamp>1293823020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that <b>threatens to make data storage irrelevant</b></p></div>
</blockquote><p>
It's because data storage will ALWAYS be relevant (talk to any Alzheimer's patient if you don't believe me) that access speeds are a concern.
</p>
	</htmltext>
<tokentext>Access to data is n't keeping pace with advances in CPU and memory , creating an I/O bottleneck that threatens to make data storage irrelevant .
It 's because data storage will ALWAYS be relevant ( talk to any Alzheimer 's patient if you do n't believe me ) that access speeds are a concern .</tokentext>
<sentencetext>Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant.
It's because data storage will ALWAYS be relevant (talk to any Alzheimer's patient if you don't believe me) that access speeds are a concern.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611442</id>
	<title>Re:We're almost there already</title>
	<author>mangobrain</author>
	<datestamp>1293803460000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>You may be able to "load the whole OS into memory", but that's missing the point, which is the data people work with once the OS is up and running.  If that 4GB was enough to store all the data for the entirety of any conceivable session, on servers as well as desktops, why would anyone ever buy a hard drive larger than that?  Hard drives would probably already be obsolete.  I bet you own at least one hard drive larger than 4GB - and as the type of person who comments on slashdot, I bet more than 4GB of that hard drive is currently in use.</p><p>TFA is talking about replacing <em>mass storage</em> with PCM.  The summary's usage of the phrase <em>"storage networks"</em> should also have been a hint.</p></htmltext>
<tokentext>You may be able to " load the whole OS into memory " , but that 's missing the point , which is the data people work with once the OS is up and running .
If that 4GB was enough to store all the data for the entirety of any conceivable session , on servers as well as desktops , why would anyone ever buy a hard drive larger than that ?
Hard drives would probably already be obsolete .
I bet you own at least one hard drive larger than 4GB - and as the type of person who comments on slashdot , I bet more than 4GB of that hard drive is currently in use .
TFA is talking about replacing mass storage with PCM .
The summary 's usage of the phrase " storage networks " should also have been a hint .</tokentext>
<sentencetext>You may be able to "load the whole OS into memory", but that's missing the point, which is the data people work with once the OS is up and running.
If that 4GB was enough to store all the data for the entirety of any conceivable session, on servers as well as desktops, why would anyone ever buy a hard drive larger than that?
Hard drives would probably already be obsolete.
I bet you own at least one hard drive larger than 4GB - and as the type of person who comments on slashdot, I bet more than 4GB of that hard drive is currently in use.
TFA is talking about replacing mass storage with PCM.
The summary's usage of the phrase "storage networks" should also have been a hint.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613904</id>
	<title>microdisk Radio?</title>
	<author>Doc Ruby</author>
	<datestamp>1262359140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is anyone working on micromachines (MEMS) that set vast arrays of very tiny storage discs into very tiny radio transmitters, each disc transceiving on its own very narrow frequency band? A 1cm^2 chip, perhaps stacked a dozen (or more) layers thick, delivering a couple hundred million discs per layer, each holding something like 32bits per microdisc and a GB per layer, streaming something like 2-200Tbps per layer, seek time 10ns, consuming a few centiwatts per layer.</p><p>Or skip the radio and just max out a multimode fiber throughput. Parallelizing data transfer should leave stored data transferrable entirely in under 250ms.</p></htmltext>
<tokentext>Is anyone working on micromachines ( MEMS ) that set vast arrays of very tiny storage discs into very tiny radio transmitters , each disc transceiving on its own very narrow frequency band ?
A 1cm ^ 2 chip , perhaps stacked a dozen ( or more ) layers thick , delivering a couple hundred million discs per layer , each holding something like 32bits per microdisc and a GB per layer , streaming something like 2-200Tbps per layer , seek time 10ns , consuming a few centiwatts per layer .
Or skip the radio and just max out a multimode fiber throughput .
Parallelizing data transfer should leave stored data transferrable entirely in under 250ms .</tokentext>
<sentencetext>Is anyone working on micromachines (MEMS) that set vast arrays of very tiny storage discs into very tiny radio transmitters, each disc transceiving on its own very narrow frequency band?
A 1cm^2 chip, perhaps stacked a dozen (or more) layers thick, delivering a couple hundred million discs per layer, each holding something like 32bits per microdisc and a GB per layer, streaming something like 2-200Tbps per layer, seek time 10ns, consuming a few centiwatts per layer.
Or skip the radio and just max out a multimode fiber throughput.
Parallelizing data transfer should leave stored data transferrable entirely in under 250ms.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611564</id>
	<title>disk becomes the new tape</title>
	<author>ls671</author>
	<datestamp>1293804720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; disk becomes the new tape</p><p>Well they got this right even if it was not to be accomplished with the mentioned technology.</p><p>I think that in the medium/long time range this will undoubtedly come true.</p><p>I mean, would any /. reader bet on the chances of hard drives to come on par with today's memory access speeds in the future, even with zillions of years of technological advancement?</p></htmltext>
<tokentext>&gt; disk becomes the new tape
Well they got this right even if it was not to be accomplished with the mentioned technology .
I think that in the medium/long time range this will undoubtedly come true .
I mean , would any /. reader bet on the chances of hard drives to come on par with today 's memory access speeds in the future , even with zillions of years of technological advancement ?</tokentext>
<sentencetext>&gt; disk becomes the new tape
Well they got this right even if it was not to be accomplished with the mentioned technology.
I think that in the medium/long time range this will undoubtedly come true.
I mean, would any /. reader bet on the chances of hard drives to come on par with today's memory access speeds in the future, even with zillions of years of technological advancement?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611636</id>
	<title>This "author' is pretty much irrelevant</title>
	<author>Zero\_\_Kelvin</author>
	<datestamp>1293805620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p>"I will assume that this translates to performance (which it does not)<nobr> <wbr></nobr>..."</p></div></blockquote><p>I was tempted to stop reading right there, but I kept reading.  While his point about POSIX improvements is not bad, the rest of the article is ridiculous.  It essentially amounts to:  <i>Imagine if we had pretty much exactly what we have today, but we used different words to describe the components of the system!</i>  We already have slower external storage (Networked drives / SANs, local hard disk), and incremental means of making data available locally more quickly  by degrees (Local Memory, L2 Cache, L1 Cache, etc.)  We already get that at the expense of its ability to be accessed by other CPUs a further distance away.  It turns out I probably should have stopped reading when I first got the feeling I should when reading the first sentence in the article: <i>"Data storage has become the weak link in enterprise applications, and without a concerted effort on the part of storage vendors, the technology is in danger of becoming irrelevant."</i>  I can't wait to answer with that one next time and watch jaws drop:<br> <br> <b>Boss:</b> Where and how are we storing our database, how are do we ensure database availability, and how are we handling backups?<br> <b>me:</b>  You're behind the times Boss.  That is now irrelevant! <br> <br>Yeah.  That's the ticket<nobr> <wbr></nobr>...</p></div>
	</htmltext>
<tokenext>" I will assume that this translates to performance ( which it does not ) ... " I was tempted to stop reading right there , but I kept reading .
While his point about POSIX improvements is not bad , the rest of the article is ridiculous .
It essentially amounts to : Imagine if we had pretty much exactly what we have today , but we used different words to describe the components of the system !
We already have slower external storage ( Networked drives / SANs , local hard disk ) , and incremental means of making data available locally more quickly by degrees ( Local Memory , L2 Cache , L1 Cache , etc .
) We already get that at the expense of its ability to be accessed by other CPUs a further distance away .
It turns out I probably should have stopped reading when I first got the feeling I should when reading the first sentence in the article : " Data storage has become the weak link in enterprise applications , and without a concerted effort on the part of storage vendors , the technology is in danger of becoming irrelevant .
" I ca n't wait to answer with that one next time and watch jaws drop : Boss : Where and how are we storing our database , how are do we ensure database availability , and how are we handling backups ?
me : You 're behind the times Boss .
That is now irrelevant !
Yeah. That 's the ticket .. .</tokentext>
<sentencetext>"I will assume that this translates to performance (which it does not) ..."I was tempted to stop reading right there, but I kept reading.
While his point about POSIX improvements is not bad, the rest of the article is ridiculous.
It essentially amounts to:  Imagine if we had pretty much exactly what we have today, but we used different words to describe the components of the system!
We already have slower external storage (Networked drives / SANs, local hard disk), and incremental means of making data available locally more quickly  by degrees (Local Memory, L2 Cache, L1 Cache, etc.
)  We already get that at the expense of its ability to be accessed by other CPUs a further distance away.
It turns out I probably should have stopped reading when I first got the feeling I should when reading the first sentence in the article: "Data storage has become the weak link in enterprise applications, and without a concerted effort on the part of storage vendors, the technology is in danger of becoming irrelevant.
"  I can't wait to answer with that one next time and watch jaws drop:  Boss: Where and how are we storing our database, how are do we ensure database availability, and how are we handling backups?
me:  You're behind the times Boss.
That is now irrelevant!
Yeah.  That's the ticket ...
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611320</id>
	<title>CD-R?</title>
	<author>Anonymous</author>
	<datestamp>1293802200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>CD-R is a phase change memory. It revolutionized things, but even DVD-Rs and BD-Rs aren't that spectacular these days. Seems holographic discs have more potential if the cost barrier comes down.</p></htmltext>
<tokentext>CD-R is a phase change memory .
It revolutionized things , but even DVD-Rs and BD-Rs are n't that spectacular these days .
Seems holographic discs have more potential if the cost barrier comes down .</tokentext>
<sentencetext>CD-R is a phase change memory.
It revolutionized things, but even DVD-Rs and BD-Rs aren't that spectacular these days.
Seems holographic discs have more potential if the cost barrier comes down.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611952</id>
	<title>What to do with solid-state memory?</title>
	<author>Animats</author>
	<datestamp>1293810240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
The real question is whether we need something other than read/write/seek to deal with the various forms of solid-state memory.  The usual options are 1) treat it as disk, reading and writing in big blocks, and 2) treat it as another layer of RAM cache, in main memory space.  Flash, etc. though have much faster "seek times" than hard drives, and the penalty for reading smaller blocks is thus much lower.  Flash also has the property that writing is slower than reading, while for disk the two are about the same.  For small I/O operations, the operating system overhead for the operation takes more time than the actual data access.
</p><p>
For most end users, permanent storage is for storing big sequential files, audio or video.  There are interfaces that would make databases faster (one could have flash devices that implemented a key/value store, with onboard lookup), but nobody would notice when playing video.  The trend in databases is already to get enough RAM to keep all the indices in RAM, so we're already doing the "read it in the morning" thing suggested in the article.  So the payoff for building flash devices to help with that is modest.
</p><p>
There are interesting things to do in this space, but improving reliability in the RAID sense is probably more important than speeding up non-sequential small accesses.</p></htmltext>
<tokentext>The real question is whether we need something other than read/write/seek to deal with the various forms of solid-state memory .
The usual options are 1 ) treat it as disk , reading and writing in big blocks , and 2 ) treat it as another layer of RAM cache , in main memory space .
Flash , etc. though have much faster " seek times " than hard drives , and the penalty for reading smaller blocks is thus much lower .
Flash also has the property that writing is slower than reading , while for disk the two are about the same .
For small I/O operations , the operating system overhead for the operation takes more time than the actual data access .
For most end users , permanent storage is for storing big sequential files , audio or video .
There are interfaces that would make databases faster ( one could have flash devices that implemented a key/value store , with onboard lookup ) , but nobody would notice when playing video .
The trend in databases is already to get enough RAM to keep all the indices in RAM , so we 're already doing the " read it in the morning " thing suggested in the article .
So the payoff for building flash devices to help with that is modest .
There are interesting things to do in this space , but improving reliability in the RAID sense is probably more important than speeding up non-sequential small accesses .</tokentext>
<sentencetext>
The real question is whether we need something other than read/write/seek to deal with the various forms of solid-state memory.
The usual options are 1) treat it as disk, reading and writing in big blocks, and 2) treat it as another layer of RAM cache, in main memory space.
Flash, etc. though have much faster "seek times" than hard drives, and the penalty for reading smaller blocks is thus much lower.
Flash also has the property that writing is slower than reading, while for disk the two are about the same.
For small I/O operations, the operating system overhead for the operation takes more time than the actual data access.
For most end users, permanent storage is for storing big sequential files, audio or video.
There are interfaces that would make databases faster (one could have flash devices that implemented a key/value store, with onboard lookup), but nobody would notice when playing video.
The trend in databases is already to get enough RAM to keep all the indices in RAM, so we're already doing the "read it in the morning" thing suggested in the article.
So the payoff for building flash devices to help with that is modest.
There are interesting things to do in this space, but improving reliability in the RAID sense is probably more important than speeding up non-sequential small accesses.</sentencetext>
</comment>
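Animats's "key/value store with onboard lookup" has no standard API, but a purely hypothetical C sketch shows the shape of the idea: the host names a key and the device's controller resolves it internally, instead of the host issuing block offsets. A tiny in-memory table stands in for the device index; every name here is invented for illustration only.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SLOTS 256

struct slot { uint64_t key; char val[64]; int used; };
static struct slot table[SLOTS];   /* stand-in for the device's index */

/* Store a value under a key; the "controller" picks the location. */
static void kv_put(uint64_t key, const char *val)
{
    struct slot *s = &table[key % SLOTS];
    s->key = key;
    s->used = 1;
    snprintf(s->val, sizeof s->val, "%s", val);
}

/* Look a key up; the host never sees a block address. */
static const char *kv_get(uint64_t key)
{
    struct slot *s = &table[key % SLOTS];
    return (s->used && s->key == key) ? s->val : NULL;
}

int main(void)
{
    kv_put(42, "row for customer 42");
    printf("lookup(42) -> %s\n", kv_get(42));
    return 0;
}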
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612138</id>
	<title>Re:Why the vapourware tag?</title>
	<author>drinkypoo</author>
	<datestamp>1293812880000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>It probably got tagged vaporware because <em>where the fuck is my system with MRAM for main memory</em>? MRAM is a shipping product, too, but it was "supposed" to be in consumer devices before now, as main System RAM.</p></htmltext>
<tokentext>It probably got tagged vaporware because where the fuck is my system with MRAM for main memory ?
MRAM is a shipping product , too , but it was " supposed " to be in consumer devices before now , as main System RAM .</tokentext>
<sentencetext>It probably got tagged vaporware because where the fuck is my system with MRAM for main memory?
MRAM is a shipping product, too, but it was "supposed" to be in consumer devices before now, as main System RAM.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611522</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613388</id>
	<title>Re:We're almost there already</title>
	<author>Urkki</author>
	<datestamp>1262348580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Depending on what you're doing, even that may not be an issue.  If you're doing massive database stuff, then yes.  However, if your disk I/O isn't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy.  From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging.  Personally, what little I do need saved is simply text files - my notes in class, my homework, etc, and so I can just write to a partition on the harddrive that isn't loaded to RAM.  It doesn't suffer at all from the harddrive I/O - I can't really type faster then a harddrive can write.</p><p>tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely.  It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.</p></div><p>Not just "available", but that's pretty much how all current operating systems work today. Software operates on a copy in memory (wether reading or writing), and OS writes back any changes at it's leisure. It's just a matter of available RAM vs. required RAM, and only if you run out of RAM, only then the disk becomes a bottleneck. I don't think data read from disk to memory is ever discarded even if unused for a long time, unless you run out of RAM (why would it be, that's just unnecessary extra work for OS when there's plenty of unused ram available already).</p></div>
	</htmltext>
<tokentext>Depending on what you 're doing , even that may not be an issue .
If you 're doing massive database stuff , then yes .
However , if your disk I/O is n't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the " hard " copy .
From your POV everything is instant , but any crash will only result in the loss of data from however far behind the harddrive copy is lagging .
Personally , what little I do need saved is simply text files - my notes in class , my homework , etc , and so I can just write to a partition on the harddrive that is n't loaded to RAM .
It does n't suffer at all from the harddrive I/O - I ca n't really type faster than a harddrive can write .
tl;dr : It 's perfectly feasible for ( some ) people to do as you 've described , and it works quite nicely .
It 's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology , it 's available today .
Not just " available " , but that 's pretty much how all current operating systems work today .
Software operates on a copy in memory ( whether reading or writing ) , and the OS writes back any changes at its leisure .
It 's just a matter of available RAM vs. required RAM ; only if you run out of RAM does the disk become a bottleneck .
I do n't think data read from disk to memory is ever discarded even if unused for a long time , unless you run out of RAM ( why would it be , that 's just unnecessary extra work for the OS when there 's plenty of unused RAM available already ) .</tokentext>
<sentencetext>Depending on what you're doing, even that may not be an issue.
If you're doing massive database stuff, then yes.
However, if your disk I/O isn't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy.
From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging.
Personally, what little I do need saved is simply text files - my notes in class, my homework, etc, and so I can just write to a partition on the harddrive that isn't loaded to RAM.
It doesn't suffer at all from the harddrive I/O - I can't really type faster than a harddrive can write.
tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely.
It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
Not just "available", but that's pretty much how all current operating systems work today.
Software operates on a copy in memory (whether reading or writing), and the OS writes back any changes at its leisure.
It's just a matter of available RAM vs. required RAM; only if you run out of RAM does the disk become a bottleneck.
I don't think data read from disk to memory is ever discarded even if unused for a long time, unless you run out of RAM (why would it be, that's just unnecessary extra work for the OS when there's plenty of unused RAM available already).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611722</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30615236</id>
	<title>Bus speeds</title>
	<author>Torg</author>
	<datestamp>1262374440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>What the author fails to realize is that the limiting factor on a SAN is most often the host itself, not the disk.  A single disk may not have the I/O, but an array most certainly does (depending on the array).  A standard 33 MHz PCI bus can only transfer 133 MB/s (theoretical max).  Even faster buses still do not match the I/O speed or throughput of a SAN.</p><p>The limiting factor on a PC is the southbridge chip, not the storage.  The vast majority of the systems typically connected simply cannot push the I/O fast enough out of their ports.  It is not waiting on disk; it is waiting on the I/O of its bridge chip and bus. Of course putting it on a RAM disk is faster: RAM sits off the northbridge and therefore has better throughput to the CPU.</p><p>This is more a limit of bridge chips and PC architecture than the speed of a SAN.</p></htmltext>
<tokenext>What the author fails to realize is that the limiting factor on a SAN is most often the host itself , not the disk .
A single disk may not have the I/O , but an array most certainly does ( depending on the array ) .
A standard , 33 MHz PCI bus can only transfer 133 MB/s ( theoretical max ) .
Even faster buses still do not match the I/O speed or throughput of a SAN .
The limiting factor on a PC is the southbridge chip , not the storage .
The vast majority of the systems typically connected simply can not push the I/O fast enough out of their ports .
It is not waiting on disk , it is waiting on the I/O of its bridge chip and bus .
Of course putting it on a RAM disk is faster .
RAM sits off the northbridge and therefore has better throughput to the CPU .
This is more a limit of bridge chips and PC architecture than the speed of a SAN .</tokenext>
<sentencetext>What the author fails to realize is that the limiting factor on a SAN is most often the host itself, not the disk.
A single disk may not have the I/O, but an array most certainly does (depending on the array).
A standard 33 MHz PCI bus can only transfer 133 MB/s (theoretical max).
Even faster buses still do not match the I/O speed or throughput of a SAN.
The limiting factor on a PC is the southbridge chip, not the storage.
The vast majority of the systems typically connected simply cannot push the I/O fast enough out of their ports.
It is not waiting on disk; it is waiting on the I/O of its bridge chip and bus.
Of course putting it on a RAM disk is faster.
RAM sits off the northbridge and therefore has better throughput to the CPU.
This is more a limit of bridge chips and PC architecture than the speed of a SAN.</sentencetext>
</comment>
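The 133 MB/s ceiling follows directly from the bus parameters. A throwaway C sketch of the arithmetic, assuming the conventional 32-bit PCI bus the comment describes:

/* Peak PCI bandwidth = clock rate x bus width. */
#include <stdio.h>

int main(void)
{
    const double clock_hz   = 33.0e6; /* 33 MHz PCI clock       */
    const int    width_bits = 32;     /* conventional PCI width */

    double bytes_per_sec = clock_hz * width_bits / 8.0;
    printf("Theoretical peak: %.0f MB/s\n", bytes_per_sec / 1.0e6);
    /* Prints ~133 MB/s - megabytes, not megabits, per second. */
    return 0;
}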
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611600</id>
	<title>The 70's called. They want their I/O methods back.</title>
	<author>Anonymous</author>
	<datestamp>1293805080000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>From TFA:<blockquote><div><p>There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be overwritten.  There are lots of possible hints, but there is no standard way of providing file hints...</p></div>
</blockquote><p>
Ya, we had that back in the stone age, and Multics would have been the poster child for this type of thinking, but it was a *bitch* and made portability problematic.  I think VMS has some of this type of capability with its <a href="http://en.wikipedia.org/wiki/Files-11" title="wikipedia.org">Files-11</a> [wikipedia.org] support - any VMS people care to comment? Unix (and most current OSes) sees everything as a stream of bytes, in most cases, and this is much simpler.
</p><p>
An OS cannot be everything to all people all the time...</p>
	</htmltext>
<tokenext>From TFA : There is no method to provide hints about file usage ; for example , you might want to have a hint that says the file will be read sequentially , or a hint that a file might be overwritten .
There are lots of possible hints , but there is no standard way of providing file hints ...
Ya , we had that back in the stone age , and Multics would have been the poster child for this type of thinking , but it was a * bitch * and made portability problematic .
I think VMS has some of this type of capability with its Files-11 [ wikipedia.org ] support - any VMS people care to comment ?
Unix ( and most current OS ) sees everything as a stream of bytes , in most cases , and this is much simpler .
An OS can not be everything to all people all the time.. .</tokentext>
<sentencetext>From TFA: There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be overwritten.
There are lots of possible hints, but there is no standard way of providing file hints...

Ya, we had that back in the stone age, and Multics would have been the poster child for this type of thinking, but it was a *bitch* and made portability problematic.
I think VMS has some of this type of capability with its Files-11 [wikipedia.org] support - any VMS people care to comment?
Unix (and most current OSes) sees everything as a stream of bytes, in most cases, and this is much simpler.
An OS cannot be everything to all people all the time...
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611764</id>
	<title>Plastique explosives plus hard drive</title>
	<author>Anonymous</author>
	<datestamp>1293807360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>equals phase change memory</p></htmltext>
<tokenext>equals phase change memory</tokentext>
<sentencetext>equals phase change memory</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611390</id>
	<title>Names</title>
	<author>Anonymous</author>
	<datestamp>1293802980000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Can we at least call it something else? Imagine talking to a female co-worker and describing the problem as "the phase has changed" - she'd think you were talking about her "time of the month"!</p></htmltext>
<tokenext>Can we at least call it something else ?
Imagine talking to a female co-worker and describing the problem as " the phase has changed " - she 'd think you were talking about her " time of the month " !</tokenext>
<sentencetext>Can we at least call it something else?
Imagine talking to a female co-worker and describing the problem as "the phase has changed" - she'd think you were talking about her "time of the month"!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611522</id>
	<title>Why the vapourware tag?</title>
	<author>Areyoukiddingme</author>
	<datestamp>1293804240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>How soon we forget.  The article is speculative, sure, but the hardware is not only real, it's in mass production by Samsung: <a href="http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212" title="slashdot.org">http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212</a> [slashdot.org]</p><p>Just looking at the numbers, the article is a bit overblown.  Phase change memory will first be a good replacement for flash memory, not DRAM.  It's still considerably slower than DRAM.  But it eliminates the erasable-by-page-only problem that has plagued SSDs, especially Intel SSDs, and the article does mention SSDs as a bright spot in the storage landscape.  PCM should make serious inroads into SSDs very quickly because manufacturers can eliminate a whole blob of difficult code.  With Samsung's manufacturing muscle behind it, prices per megabyte should be reasonable right out of the gate, and as Samsung gets better at it, prices should plummet even faster than flash memory did.</p><p>The I/O path between storage and the CPU will get an upgrade, and it could very well be driven by PCM.  Flash memory SSDs are already very fast, and PCM is claimed to be 4X faster.  That saturates the existing I/O paths (barring 16-lane PCIe cards sitting next to the video card in an identical slot).  Magnetic hard drives haven't come anywhere close to saturation.  Development concentrated for a decade (or two?) on increasing capacity, for which we are thankful, but the successes in capacity development have outrun improvements in I/O speed.  In turn, that meant that video cards were the driver behind I/O development, not storage.  Now that there's a storage tech in the same throughput class as a video card, I expect a great deal of I/O standards development to deal with it.</p><p>But hard drives == tape?  Not for a long, long time.  The development concentration on increasing capacity will pay off for many years to come.  PCM arrays with capacities matching modern hard drives (2 TB in a 3.5" half-height case.  Unreal!) are undoubtedly a long way off.</p><p>Hopefully there are no lurking patent trolls under the PCM bridge...</p></htmltext>
<tokenext>How soon we forget .
The article is speculative , sure , but the hardware is not only real , it 's in mass production by Samsung : http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212 [ slashdot.org ]
Just looking at the numbers , the article is a bit overblown .
Phase change memory will first be a good replacement for flash memory , not DRAM .
It 's still considerably slower than DRAM .
But it eliminates the erasable-by-page-only problem that has plagued SSDs , especially Intel SSDs , and the article does mention SSDs as a bright spot in the storage landscape .
PCM should make serious inroads into SSDs very quickly because manufacturers can eliminate a whole blob of difficult code .
With Samsung 's manufacturing muscle behind it , prices per megabyte should be reasonable right out of the gate , and as Samsung gets better at it , prices should plummet even faster than flash memory did .
The I/O path between storage and the CPU will get an upgrade , and it could very well be driven by PCM .
Flash memory SSDs are already very fast , and PCM is claimed to be 4X faster .
That saturates the existing I/O paths ( barring 16-lane PCIe cards sitting next to the video card in an identical slot ) .
Magnetic hard drives have n't come anywhere close to saturation .
Development concentrated for a decade ( or two ? ) on increasing capacity , for which we are thankful , but the successes in capacity development have outrun improvements in I/O speed .
In turn , that meant that video cards were the driver behind I/O development , not storage .
Now that there 's a storage tech in the same throughput class as a video card , I expect a great deal of I/O standards development to deal with it .
But hard drives == tape ?
Not for a long , long time .
The development concentration on increasing capacity will pay off for many years to come .
PCM arrays with capacities matching modern hard drives ( 2 TB in a 3.5 " half-height case . Unreal ! ) are undoubtedly a long way off .
Hopefully there are no lurking patent trolls under the PCM bridge ...</tokenext>
<sentencetext>How soon we forget.
The article is speculative, sure, but the hardware is not only real, it's in mass production by Samsung: http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212 [slashdot.org]
Just looking at the numbers, the article is a bit overblown.
Phase change memory will first be a good replacement for flash memory, not DRAM.
It's still considerably slower than DRAM.
But it eliminates the erasable-by-page-only problem that has plagued SSDs, especially Intel SSDs, and the article does mention SSDs as a bright spot in the storage landscape.
PCM should make serious inroads into SSDs very quickly because manufacturers can eliminate a whole blob of difficult code.
With Samsung's manufacturing muscle behind it, prices per megabyte should be reasonable right out of the gate, and as Samsung gets better at it, prices should plummet even faster than flash memory did.
The I/O path between storage and the CPU will get an upgrade, and it could very well be driven by PCM.
Flash memory SSDs are already very fast, and PCM is claimed to be 4X faster.
That saturates the existing I/O paths (barring 16-lane PCIe cards sitting next to the video card in an identical slot).
Magnetic hard drives haven't come anywhere close to saturation.
Development concentrated for a decade (or two?) on increasing capacity, for which we are thankful, but the successes in capacity development have outrun improvements in I/O speed.
In turn, that meant that video cards were the driver behind I/O development, not storage.
Now that there's a storage tech in the same throughput class as a video card, I expect a great deal of I/O standards development to deal with it.
But hard drives == tape?
Not for a long, long time.
The development concentration on increasing capacity will pay off for many years to come.
PCM arrays with capacities matching modern hard drives (2 TB in a 3.5" half-height case. Unreal!) are undoubtedly a long way off.
Hopefully there are no lurking patent trolls under the PCM bridge...</sentencetext>
</comment>
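For a sense of scale on the saturation claim, here is a back-of-the-envelope C sketch using representative numbers of the period (the ~250 MB/s SSD figure and the SATA line rates are assumptions for illustration, not figures from the article; SATA links lose 20% of line rate to 8b/10b encoding):

/* Does a 4X-faster-than-flash device outrun the disk interfaces? */
#include <stdio.h>

int main(void)
{
    const double ssd_mb_s = 250.0;            /* fast flash SSD, ca. 2009 */
    const double pcm_mb_s = 4.0 * ssd_mb_s;   /* the claimed 4X figure    */

    const double sata2_mb_s = 3.0e9 * 0.8 / 8 / 1e6; /* SATA 3 Gb/s link  */
    const double sata3_mb_s = 6.0e9 * 0.8 / 8 / 1e6; /* SATA 6 Gb/s link  */

    printf("PCM estimate: %6.0f MB/s\n", pcm_mb_s);
    printf("SATA 3 Gb/s : %6.0f MB/s\n", sata2_mb_s);
    printf("SATA 6 Gb/s : %6.0f MB/s\n", sata3_mb_s);
    /* ~1000 MB/s outruns both links, which is why wide PCIe slots are
     * the only headroom the comment can point at. */
    return 0;
}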
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612070</id>
	<title>Boon for Linux, Bust for Windows.</title>
	<author>jameskojiro</author>
	<datestamp>1293811560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Windows is more closely tied to the whole "separate levels of RAM and hard disk memory" model than Linux is. I could really see Linux gaining traction if all systems went to PCM tomorrow.</p></htmltext>
<tokenext>Windows is more closely tied to the whole " separate levels of RAM and hard disk memory " model than Linux is .
I could really see Linux gaining traction if all systems went to PCM tomorrow .</tokentext>
<sentencetext>Windows is more closely tied to the whole "separate levels of RAM and hard disk memory" model than Linux is.
I could really see Linux gaining traction if all systems went to PCM tomorrow.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30617948</id>
	<title>Re:Forgetting the lessons of SANs?</title>
	<author>Thing 1</author>
	<datestamp>1262354700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one and walks out the door, without me having to do anything about it.</p></div>
</blockquote><p>I personally like how my source code doesn't randomly walk out the door, but then that's just me, I guess...</p>
	</htmltext>
<tokenext>I personally like how my arrays 'call home ' and an HDS/EMC engineer shows up with a new drive , replaces the failed one and walks out the door , without me having to do anything about it .
I personally like how my source code does n't randomly walk out the door , but then that 's just me I guess.. .</tokentext>
<sentencetext>I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one and walks out the door, without me having to do anything about it.
I personally like how my source code doesn't randomly walk out the door, but then that's just me I guess...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612196</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611996</id>
	<title>Is there some kind of a prize?</title>
	<author>Anonymous</author>
	<datestamp>1293810840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant.</p></div><p>Data storage. Irrelevant. I.. see. The new year is not yet 14 hours old but I feel a certain confidence that this will be the single most vacuous thing I encounter in 2010 - and I've already seen Entertainment Tonight this year.</p></div>
	</htmltext>
<tokenext>Access to data is n't keeping pace with advances in CPU and memory , creating an I/O bottleneck that threatens to make data storage irrelevant .
Data storage .
Irrelevant .
I ... see .
The new year is not yet 14 hours old , but I feel a certain confidence that this will be the single most vacuous thing I encounter in 2010 - and I 've already seen Entertainment Tonight this year .</tokentext>
<sentencetext>Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant.
Data storage.
Irrelevant.
I... see.
The new year is not yet 14 hours old, but I feel a certain confidence that this will be the single most vacuous thing I encounter in 2010 - and I've already seen Entertainment Tonight this year.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612196</id>
	<title>Forgetting the lessons of SANs?</title>
	<author>Anonymous</author>
	<datestamp>1293813900000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Maybe these guys ought to ask someone who was around in the days BEFORE there were SANs.  Managing storage back then absolutely sucked.  Every server had its own internal storage with its own RAID controller OR had to be within 9m (the max distance of LVD SCSI) of a storage array.</p><p>There was no standardization; every OS had its own volume managers, firmware updates, patches, etc. etc. etc.  Plus, compare the number of management points when using a SAN vs. internal storage.  An enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays. Server admins have more important things to do than replace dead hard drives.</p><p>Want to replace a hot spare on a server?  What a pain, as you had to understand the volume manager or unique RAID controller in that specific server.  I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one and walks out the door, without me having to do anything about it.</p><p>Two words: Low Utilization.  You'd buy an HP server with two 36GB drives and the OS+APP+data would only require 10GB of space.  So you'd have this landlocked storage all over the place.</p><p>Moving the storage to the edge?  Even if you replace spinning platters with solid state, putting all the data on the edge is a 'bad thing.'</p><p><i>"But Google does it!"</i></p><p>Maybe so, but then again they don't run their enterprise on Oracle, Exchange, SAP, CIFS/NFS-based home directories, etc. like almost all other enterprises do.</p></htmltext>
<tokenext>Maybe these guys ought to ask someone that was around in the days BEFORE there were SANs .
Managing storage back then absolutely sucked .
Every server had its own internal storage with its own RAID controller OR had to be within 9m ( the max distance of LVD SCSI ) of a storage array .
There was no standardization ; every OS had its own volume managers , firmware updates , patches , etc. etc. etc .
Plus , compare the number of management points when using a SAN vs. internal storage .
An enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays .
Server admins have more important things to do than replace dead hard drives .
Want to replace a hot spare on a server ? What a pain , as you had to understand the volume manager or unique RAID controller in that specific server .
I personally like how my arrays 'call home ' and an HDS/EMC engineer shows up with a new drive , replaces the failed one and walks out the door , without me having to do anything about it .
Two words : Low Utilization .
You 'd buy an HP server with two 36GB drives and the OS + APP + data would only require 10GB of space .
So you 'd have this landlocked storage all over the place .
Moving the storage to the edge ?
Even if you replace spinning platters with solid state , putting all the data on the edge is a 'bad thing . '
" But Google does it ! "
Maybe so , but then again they do n't run their enterprise on Oracle , Exchange , SAP , CIFS/NFS-based home directories , etc. like almost all other enterprises do .</tokentext>
<sentencetext>Maybe these guys ought to ask someone who was around in the days BEFORE there were SANs.
Managing storage back then absolutely sucked.
Every server had its own internal storage with its own RAID controller OR had to be within 9m (the max distance of LVD SCSI) of a storage array.
There was no standardization; every OS had its own volume managers, firmware updates, patches, etc. etc. etc.
Plus, compare the number of management points when using a SAN vs. internal storage.
An enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays.
Server admins have more important things to do than replace dead hard drives.
Want to replace a hot spare on a server? What a pain, as you had to understand the volume manager or unique RAID controller in that specific server.
I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one and walks out the door, without me having to do anything about it.
Two words: Low Utilization.
You'd buy an HP server with two 36GB drives and the OS+APP+data would only require 10GB of space.
So you'd have this landlocked storage all over the place.
Moving the storage to the edge?
Even if you replace spinning platters with solid state, putting all the data on the edge is a 'bad thing.'
"But Google does it!"
Maybe so, but then again they don't run their enterprise on Oracle, Exchange, SAP, CIFS/NFS-based home directories, etc. like almost all other enterprises do.</sentencetext>
</comment>
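The low-utilization point is easy to make concrete with the comment's own figures - a one-off C calculation:

/* Two 36GB drives of raw space vs. 10GB actually needed. */
#include <stdio.h>

int main(void)
{
    const double raw_gb  = 2 * 36.0; /* two 36GB drives */
    const double used_gb = 10.0;     /* OS + APP + data */

    printf("Utilization: %.0f%%\n", 100.0 * used_gb / raw_gb);
    /* ~14% - the rest sits landlocked inside that one server. */
    return 0;
}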
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611846</id>
	<title>Re:The 70's called. They want their I/O methods ba</title>
	<author>mysidia</author>
	<datestamp>1293808320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
We have it today.  TFA's on crack.
</p><p>
It's called <a href="http://linux.die.net/man/2/madvise" title="die.net" rel="nofollow">madvise</a> [die.net]:
</p><blockquote><div><p>It allows an application to tell the kernel <b>how it expects to use some mapped or shared memory areas</b>, so that the kernel can choose appropriate read-ahead and caching techniques.</p></div>
</blockquote><p>
In Linux there is also <a href="http://linux.die.net/man/2/fadvise" title="die.net" rel="nofollow">fadvise()</a> [die.net].</p><p>
Of course... reading from a file (from an app's point of view) is really nothing more than accessing data in a <b>mapped memory area</b>.
Oh... I suppose unless you actually use the POSIX <b>mmap</b> call to map the file into memory for reading, you won't have an easy way to provide the advice.
</p><p>
And it makes portability a bitch regardless, as not all OSes are POSIX, and not all OSes have mmap().
</p><p>
Nevertheless, it's not fair to say it is impossible for an app to provide hints.
Whether giving the hints actually has a useful effect (usually) may be a matter of debate.
</p>
	</htmltext>
<tokenext>We have it today .
TFA 's on crack .
It 's called madvise [ die.net ] .
It allows an application to tell the kernel how it expects to use some mapped or shared memory areas , so that the kernel can choose appropriate read-ahead and caching techniques .
In Linux there is also fadvise ( ) [ die.net ] .
Of course ... reading from a file ( from an app 's point of view ) is really nothing more than accessing data in a mapped memory area .
Oh ... I suppose unless you actually use the POSIX mmap call to map the file into memory for reading , you wo n't have an easy way to provide the advice .
And it makes portability a bitch regardless , as not all OSes are POSIX , and not all OSes have mmap ( ) .
Nevertheless , it 's not fair to say it is impossible for an app to provide hints .
Whether giving the hints or not actually has a useful effect ( usually ) may be a matter of debate .</tokentext>
<sentencetext>
We have it today.
TFA's on crack.
It's called madvise [die.net].
It allows an application to tell the kernel how it expects to use some mapped or shared memory areas, so that the kernel can choose appropriate read-ahead and caching techniques.
In Linux there is also fadvise() [die.net].
Of course... reading from a file (from an app's point of view) is really nothing more than accessing data in a mapped memory area.
Oh... I suppose unless you actually use the POSIX mmap call to map the file into memory for reading, you won't have an easy way to provide the advice.
And it makes portability a bitch regardless,  as not all OSes are POSIX, and not all OSes have mmap().
Nevertheless, it's not fair to say it is impossible for an app to provide hints.
Whether giving the hints or not actually has a useful effect (usually)  may be a matter of debate.

	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611600</parent>
</comment>
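To make the hinting concrete, here is a minimal sketch of both routes this comment names: posix_fadvise() on a file descriptor, and advice on an mmap()ed region (shown with the portable posix_madvise() spelling of madvise). The file name data.bin is a placeholder, and the file is assumed to be non-empty; either call is advice only, which the kernel is free to ignore.

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Hint on the descriptor: the whole file will be read sequentially,
     * so aggressive read-ahead is worthwhile. */
    posix_fadvise(fd, 0, st.st_size, POSIX_FADV_SEQUENTIAL);

    /* The mmap route: map the file, then give the same hint on the region. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    posix_madvise(p, st.st_size, POSIX_MADV_SEQUENTIAL);

    /* ... read through the mapping here ... */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}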
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611760
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611600
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612138
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611522
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30618466
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611846
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611600
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612392
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611522
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611442
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613388
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_01_01_0019233_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30617948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612196
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611764
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611544
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611350
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613070
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611602
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612682
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611374
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611442
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611722
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30613388
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30618466
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612196
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30617948
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612070
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611320
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611522
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612392
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30612138
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611390
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_01_01_0019233.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611600
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611846
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_01_01_0019233.30611760
</commentlist>
</conversation>
