<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_11_2246226</id>
	<title>Remus Project Brings Transparent High Availability To Xen</title>
	<author>timothy</author>
	<datestamp>1257936480000</datestamp>
	<htmltext>An anonymous reader writes <i>"<a href="http://dsg.cs.ubc.ca/remus/">The Remus project</a> has just been incorporated into the <a href="http://xen.org/">Xen hypervisor</a>.  Developed at the University of British Columbia, Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host.  Remus requires no modifications to the OS or applications within the protected VM: on failure, Remus activates the replica on the second host, and the VM simply picks up where the original system died. Open TCP connections remain intact, and applications continue to run unaware of the failure.  It's pretty fun to yank the plug out on your web server and see everything continue to tick along.  This sort of HA has traditionally required either really expensive hardware, or very complex and invasive modifications to applications and OSes."</i></htmltext>
<tokentext>An anonymous reader writes " The Remus project has just been incorporated into the Xen hypervisor .
Developed at the University of British Columbia , Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host .
Remus requires no modifications to the OS or applications within the protected VM : on failure , Remus activates the replica on the second host , and the VM simply picks up where the original system died .
Open TCP connections remain intact , and applications continue to run unaware of the failure .
It 's pretty fun to yank the plug out on your web server and see everything continue to tick along .
This sort of HA has traditionally required either really expensive hardware , or very complex and invasive modifications to applications and OSes .
"</tokentext>
<sentencetext>An anonymous reader writes "The Remus project has just been incorporated into the Xen hypervisor.
Developed at the University of British Columbia, Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host.
Remus requires no modifications to the OS or applications within the protected VM: on failure, Remus activates the replica on the second host, and the VM simply picks up where the original system died.
Open TCP connections remain intact, and applications continue to run unaware of the failure.
It's pretty fun to yank the plug out on your web server and see everything continue to tick along.
This sort of HA has traditionally required either really expensive hardware, or very complex and invasive modifications to applications and OSes.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067076</id>
	<title>Question</title>
	<author>Anonymous</author>
	<datestamp>1257076800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Not immediately clear on the Remus page... Is this like a constantly going "live migration" (without actually switching hosts) in that it _only_ keeps a copy of the memory of the guest?  Or does this also keep a copy of the disk image?  It'd be nice to not need shared storage just to be able to migrate without downtime...</htmltext>
<tokentext>Not immediately clear on the Remus page... Is this like a constantly going " live migration " ( without actually switching hosts ) in that it _only_ keeps a copy of the memory of the guest ?
Or does this also keep a copy of the disk image ?
It 'd be nice to not need shared storage just to be able to migrate without downtime ...</tokentext>
<sentencetext>Not immediately clear on the Remus page... Is this like a constantly going "live migration" (without actually switching hosts) in that it _only_ keeps a copy of the memory of the guest?
Or does this also keep a copy of the disk image?
It'd be nice to not need shared storage just to be able to migrate without downtime...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068266</id>
	<title>So it replicates the state to the new machine</title>
	<author>Anonymous</author>
	<datestamp>1257085560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>So it replicates the state to the new machine and then the new machine executes the same instructions and crashes the same way....</htmltext>
<tokentext>So it replicates the state to the new machine and then the new machine executes the same instructions and crashes the same way... .</tokentext>
<sentencetext>So it replicates the state to the new machine and then the new machine executes the same instructions and crashes the same way....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068382</id>
	<title>Re:Already done by VMware</title>
	<author>Cheaty</author>
	<datestamp>1257086340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This isn't lockstep. In storage terms, if you think of lockstep as synchronous replication, this is more akin to asynchronous snapshot-based replication. The metaphor falls apart a bit because the primary does wait for acknowledgment before modifying its external state (sending network packets or writing to disk), but can otherwise continue execution.</p></htmltext>
<tokentext>This is n't lockstep .
In storage terms , if you think of lockstep as synchronous replication , this is more akin to asynchronous snapshot-based replication .
The metaphor falls apart a bit because the primary does wait for acknowledgment before modifying its external state ( sending network packets or writing to disk ) , but can otherwise continue execution .</tokentext>
<sentencetext>This isn't lockstep.
In storage terms, if you think of lockstep as synchronous replication, this is more akin to asynchronous snapshot-based replication.
The metaphor falls apart a bit because the primary does wait for acknowledgment before modifying its external state (sending network packets or writing to disk), but can otherwise continue execution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068232</id>
	<title>Wrong place to put a failsafe?</title>
	<author>mattbee</author>
	<datestamp>1257085080000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Surely there is a strong possibility of a failure where both VMs run at once- the original image thinking it has lost touch with a dead backup, and the backup thinking the master is dead, and so starting to execute independently?  If they're connected to the same storage / network segment, it could cause data loss, bring down the network service and so on.  I've not investigated these types of lockstep VMs, but it seems you have to make some pretty strong assumptions about failure modes, assumptions that commodity hardware always eventually breaks (I've seen bad backplanes, network chips, CPU caches, RAM of course, switches...).  How can you possibly handle these cases to avoid having to mop up after your VM is accidentally cloned?</p></htmltext>
<tokentext>Surely there is a strong possibility of a failure where both VMs run at once- the original image thinking it has lost touch with a dead backup , and the backup thinking the master is dead , and so starting to execute independently ?
If they 're connected to the same storage / network segment , it could cause data loss , bring down the network service and so on .
I 've not investigated these types of lockstep VMs , but it seems you have to make some pretty strong assumptions about failure modes , assumptions that commodity hardware always eventually breaks ( I 've seen bad backplanes , network chips , CPU caches , RAM of course , switches ... ) .
How can you possibly handle these cases to avoid having to mop up after your VM is accidentally cloned ?</tokentext>
<sentencetext>Surely there is a strong possibility of a failure where both VMs run at once- the original image thinking it has lost touch with a dead backup, and the backup thinking the master is dead, and so starting to execute independently?
If they're connected to the same storage / network segment, it could cause data loss, bring down the network service and so on.
I've not investigated these types of lockstep VMs, but it seems you have to make some pretty strong assumptions about failure modes, assumptions that commodity hardware always eventually breaks (I've seen bad backplanes, network chips, CPU caches, RAM of course, switches...).
How can you possibly handle these cases to avoid having to mop up after your VM is accidentally cloned?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066974</id>
	<title>RONALDO</title>
	<author>Anonymous</author>
	<datestamp>1257076260000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>He shines brightly at Corinthians!</p></htmltext>
<tokentext>He shines brightly at Corinthians !</tokentext>
<sentencetext>He shines brightly at Corinthians!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</id>
	<title>Already done by VMware</title>
	<author>Lurching</author>
	<datestamp>1257076200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext>They may have a patent too!!</htmltext>
<tokentext>They may have a patent too ! !</tokentext>
<sentencetext>They may have a patent too!!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067672</id>
	<title>Re:Himalaya</title>
	<author>Cheaty</author>
	<datestamp>1257080640000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Actually, after reading the paper, this is no threat to Stratus or other players in the space like Marathon or VMWare's FT. The performance impact is pretty significant - by their own benchmarks there was a 50% perf hit in a kernel compile test, and 75% in a web server benchmark.</p><p>This is an interesting approach and seems to handle multiple vCPU's in the VM which I haven't seen done by the software approaches like Marathon and VMware FT, but I think it will mainly be used in applications that would have never been considered for a more expensive solution anyway.</p></htmltext>
<tokentext>Actually , after reading the paper , this is no threat to Stratus or other players in the space like Marathon or VMWare 's FT .
The performance impact is pretty significant - by their own benchmarks there was a 50 % perf hit in a kernel compile test , and 75 % in a web server benchmark .
This is an interesting approach and seems to handle multiple vCPU 's in the VM which I have n't seen done by the software approaches like Marathon and VMware FT , but I think it will mainly be used in applications that would have never been considered for a more expensive solution anyway .</tokentext>
<sentencetext>Actually, after reading the paper, this is no threat to Stratus or other players in the space like Marathon or VMWare's FT.
The performance impact is pretty significant - by their own benchmarks there was a 50% perf hit in a kernel compile test, and 75% in a web server benchmark.
This is an interesting approach and seems to handle multiple vCPU's in the VM which I haven't seen done by the software approaches like Marathon and VMware FT, but I think it will mainly be used in applications that would have never been considered for a more expensive solution anyway.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067294</id>
	<title>Re:Already done by VMware</title>
	<author>Anonymous</author>
	<datestamp>1257078240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext>Yeah, and at a <b>great</b> price point. *rolleyes*<br>
<br>
IIRC, to get this kind of functionality from ESX or vSphere you have to pay licenses numbering in the thousands of dollars for each VM host as well as a separate license fee for their centralized Virtual Center management system. I'm glad to see that this is finally making it into the Xen mainline.</htmltext>
<tokentext>Yeah , and at a great price point .
* rolleyes * IIRC , to get this kind of functionality from ESX or vSphere you have to pay licenses numbering in the thousands of dollars for each VM host as well as a separate license fee for their centralized Virtual Center management system .
I 'm glad to see that this is finally making it into the Xen mainline .</tokentext>
<sentencetext>Yeah, and at a great price point.
*rolleyes*

IIRC, to get this kind of functionality from ESX or vSphere you have to pay licenses numbering in the thousands of dollars for each VM host as well as a separate license fee for their centralized Virtual Center management system.
I'm glad to see that this is finally making it into the Xen mainline.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070722</id>
	<title>Re:Already done by VMware</title>
	<author>oreaq</author>
	<datestamp>1258024380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No. IBM did this kind of thing on mainframes 20 years ago. This stuff is actually pretty old.</htmltext>
<tokentext>No .
IBM did this kind of thing on mainframes 20 years ago .
This stuff is actually pretty old .</tokentext>
<sentencetext>No.
IBM did this kind of thing on mainframes 20 years ago.
This stuff is actually pretty old.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068454</id>
	<title>Left VMware ESX for Xenserver 5.5</title>
	<author>Anonymous</author>
	<datestamp>1257086940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I left VMware ESX 3.5 for XenServer 5.5 and I have never been happier.</p><p>I am running 4 DL585 servers with (so far) 42 production guests (Linux &amp; win2k3) and have really great, more predictable performance.</p><p>If someone is running VMware and is worried about the cost or performance they need to consider Citrix XenServer.</p></htmltext>
<tokentext>I left VMware ESX 3.5 for XenServer 5.5 and I have never been happier .
I am running 4 DL585 servers with ( so far ) 42 production guests ( Linux &amp; win2k3 ) and have really great , more predictable performance .
If someone is running VMware and is worried about the cost or performance they need to consider Citrix XenServer .</tokentext>
<sentencetext>I left VMware ESX 3.5 for XenServer 5.5 and I have never been happier.
I am running 4 DL585 servers with (so far) 42 production guests (Linux &amp; win2k3) and have really great, more predictable performance.
If someone is running VMware and is worried about the cost or performance they need to consider Citrix XenServer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067956</id>
	<title>Re:Already done by VMware</title>
	<author>Anonymous</author>
	<datestamp>1257082680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I just went through this at my company. VMWare calls it "Fault Tolerance". You're looking at $2K+ <i>per CPU socket</i>. A rack of 24 servers with dual sockets is over $50K in licences. Of course, that's just the hypervisor license with no support. Plus you need the management server licenses; add another $xxK for 24 servers (I don't remember the cost on that. It was $6K for three hosts or something like that).</p></htmltext>
<tokentext>I just went through this at my company .
VMWare calls it " Fault Tolerance " .
You 're looking at $ 2K + per CPU socket .
A rack of 24 servers with dual sockets is over $ 50K in licences .
Of course , that 's just the hypervisor license with no support .
Plus you need the management server licenses ; add another $ xxK for 24 servers ( I do n't remember the cost on that .
It was $ 6K for three hosts or something like that ) .</tokentext>
<sentencetext>I just went through this at my company.
VMWare calls it "Fault Tolerance".
You're looking at $2K+ per CPU socket.
A rack of 24 servers with dual sockets is over $50K in licences.
Of course, that's just the hypervisor license with no support.
Plus you need the management server licenses; add another $xxK for 24 servers (I don't remember the cost on that.
It was $6K for three hosts or something like that).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067294</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067424</id>
	<title>Re:state transfer</title>
	<author>Garridan</author>
	<datestamp>1257078960000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>Mountain dew spilled on top of the unit, for example.</p></div><p>FTFS:</p><div class="quote"><p>Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host.</p></div><p>Wow!  This software is *incredible* if mountain dew spilled on top of one machine is instantly replicated on the other machine!  I'm gonna go read the source immediately, this has huge ramifications!  In particular, if an officemate gets coffee and I also want coffee, only one of us needs to actually purchase a cup!</p>
	</htmltext>
<tokentext>Mountain dew spilled on top of the unit , for example .
FTFS : Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host .
Wow !
This software is * incredible * if mountain dew spilled on top of one machine is instantly replicated on the other machine !
I 'm gon na go read the source immediately , this has huge ramifications !
In particular , if an officemate gets coffee and I also want coffee , only one of us needs to actually purchase a cup !</tokentext>
<sentencetext>Mountain dew spilled on top of the unit, for example.
FTFS: Remus provides a thin layer that continuously replicates a running virtual machine onto a second physical host.
Wow!
This software is *incredible* if mountain dew spilled on top of one machine is instantly replicated on the other machine!
I'm gonna go read the source immediately, this has huge ramifications!
In particular, if an officemate gets coffee and I also want coffee, only one of us needs to actually purchase a cup!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067302</id>
	<title>Blakes 7</title>
	<author>Anonymous</author>
	<datestamp>1257078300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Xen? The computer of the Liberator?</p></htmltext>
<tokentext>Xen ?
The computer of the Liberator ?</tokentext>
<sentencetext>Xen?
The computer of the Liberator?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069868</id>
	<title>Re:It's pretty fun</title>
	<author>Jeremi</author>
	<datestamp>1257102720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Or an ordinary, every day run of the mill 'off the shelf' plain jane beige UPS. or a Ghetto one, if you'd like.</i></p><p>Sure, but power failure isn't the only thing that can stop your server from running -- it's just the easiest one to reproduce without permanently damaging anything.  If you'd like a better example, yank the CPU out of your web server's motherboard instead.  Your UPS won't save you then!<nobr> <wbr></nobr>:^)</p></htmltext>
<tokentext>Or an ordinary , every day run of the mill 'off the shelf ' plain jane beige UPS .
or a Ghetto one , if you 'd like .
Sure , but power failure is n't the only thing that can stop your server from running -- it 's just the easiest one to reproduce without permanently damaging anything .
If you 'd like a better example , yank the CPU out of your web server 's motherboard instead .
Your UPS wo n't save you then !
: ^ )</tokentext>
<sentencetext>Or an ordinary, every day run of the mill 'off the shelf' plain jane beige UPS.
or a Ghetto one, if you'd like.
Sure, but power failure isn't the only thing that can stop your server from running -- it's just the easiest one to reproduce without permanently damaging anything.
If you'd like a better example, yank the CPU out of your web server's motherboard instead.
Your UPS won't save you then!
:^)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30090870</id>
	<title>Re:Answer</title>
	<author>BitZtream</author>
	<datestamp>1258145640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From <a href="http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html" title="usenix.org">http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html</a> [usenix.org] </p><blockquote><div><p>We then evaluate the overhead of the system on application performance across very different workloads. We find that a general-purpose task such as kernel compilation incurs approximately a 50% performance penalty when checkpointed 20 times per second, while network-dependent workloads as represented by SPECweb perform at somewhat more than one quarter native speed. The additional overhead in this case is largely due to output-commit delay on the network interface.</p><p>Based on this analysis, we conclude that although Remus is efficient at state replication, it does introduce significant network delay, particularly for applications that exhibit poor locality in memory writes. Thus, applications that are very sensitive to network latency may not be well suited to this type of high availability service (although there are a number of optimizations which have the potential to noticeably reduce network delay, some of which we discuss in more detail following the benchmark results). We feel that we have been conservative in our evaluation, using benchmark-driven workloads which are significantly more intensive than would be expected in a typical virtualized system; the consolidation opportunities such an environment presents are particularly attractive because system load is variable.</p></div></blockquote><p>So with 20 checkpoints a second you turn a normal compile time into twice as long as the original time.  With a web server, just serving web pages and not pulling data off an external database, you're down to 25% of the original speed.  If you throw in database access it's not just going to be 25% of 25%, it's going to be far worse due to the two-way communications with the database having to wait on checkpointing.</p><p>That's at 20 checkpoints a second; at 1-2 seconds per checkpoint I imagine the kernel compile would probably go faster if you're not using NFS, but networked IO is going to be unusable.</p><p>It's cool that you've done it, but it's really not useful for many applications.</p>
	</htmltext>
<tokentext>From http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html [ usenix.org ]
We then evaluate the overhead of the system on application performance across very different workloads .
We find that a general-purpose task such as kernel compilation incurs approximately a 50 % performance penalty when checkpointed 20 times per second , while network-dependent workloads as represented by SPECweb perform at somewhat more than one quarter native speed .
The additional overhead in this case is largely due to output-commit delay on the network interface .
Based on this analysis , we conclude that although Remus is efficient at state replication , it does introduce significant network delay , particularly for applications that exhibit poor locality in memory writes .
Thus , applications that are very sensitive to network latency may not be well suited to this type of high availability service ( although there are a number of optimizations which have the potential to noticeably reduce network delay , some of which we discuss in more detail following the benchmark results ) .
We feel that we have been conservative in our evaluation , using benchmark-driven workloads which are significantly more intensive than would be expected in a typical virtualized system ; the consolidation opportunities such an environment presents are particularly attractive because system load is variable .
So with 20 checkpoints a second you turn a normal compile time into twice as long as the original time .
With a web server , just serving web pages and not pulling data off an external database , you 're down to 25 % of the original speed .
If you throw in database access it 's not just going to be 25 % of 25 % , it 's going to be far worse due to the two-way communications with the database having to wait on checkpointing .
That 's at 20 checkpoints a second ; at 1-2 seconds per checkpoint I imagine the kernel compile would probably go faster if you 're not using NFS , but networked IO is going to be unusable .
It 's cool that you 've done it , but it 's really not useful for many applications .</tokentext>
<sentencetext>From http://www.usenix.org/events/nsdi/tech/full_papers/cully/cully_html/index.html [usenix.org]
We then evaluate the overhead of the system on application performance across very different workloads.
We find that a general-purpose task such as kernel compilation incurs approximately a 50% performance penalty when checkpointed 20 times per second, while network-dependent workloads as represented by SPECweb perform at somewhat more than one quarter native speed.
The additional overhead in this case is largely due to output-commit delay on the network interface.
Based on this analysis, we conclude that although Remus is efficient at state replication, it does introduce significant network delay, particularly for applications that exhibit poor locality in memory writes.
Thus, applications that are very sensitive to network latency may not be well suited to this type of high availability service (although there are a number of optimizations which have the potential to noticeably reduce network delay, some of which we discuss in more detail following the benchmark results).
We feel that we have been conservative in our evaluation, using benchmark-driven workloads which are significantly more intensive than would be expected in a typical virtualized system; the consolidation opportunities such an environment presents are particularly attractive because system load is variable.
So with 20 checkpoints a second you turn a normal compile time into twice as long as the original time.
With a web server, just serving web pages and not pulling data off an external database, you're down to 25% of the original speed.
If you throw in database access it's not just going to be 25% of 25%, it's going to be far worse due to the two-way communications with the database having to wait on checkpointing.
That's at 20 checkpoints a second; at 1-2 seconds per checkpoint I imagine the kernel compile would probably go faster if you're not using NFS, but networked IO is going to be unusable.
It's cool that you've done it, but it's really not useful for many applications.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067310</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30084718</id>
	<title>Re:Already done by VMware</title>
	<author>pedershk</author>
	<datestamp>1258108020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No. <a href="http://www.marathontechnologies.com/high_availability_xenserver.html" title="marathontechnologies.com" rel="nofollow">http://www.marathontechnologies.com/high_availability_xenserver.html</a> [marathontechnologies.com]</p></htmltext>
<tokenext>No .
http : //www.marathontechnologies.com/high \ _availability \ _xenserver.html [ marathontechnologies.com ]</tokentext>
<sentencetext>No.
http://www.marathontechnologies.com/high_availability_xenserver.html [marathontechnologies.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30073056</id>
	<title>Nothing new under the sun</title>
	<author>Anonymous</author>
	<datestamp>1258043040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is nothing new, simply a modern implementation of a classic idea.</p><p>See "Hypervisor-based Fault Tolerance" by Bressoud and Schneider (SIGOPS 1995).</p><p>http://www.cs.cornell.edu/fbs/publications/HyperFTol.pdf</p><p>Every now and then, someone has to come along and pretend to do something new, either out of ignorance or the academic "publish or perish" pressure.<br>Just the other day, we were looking at yet another implementation of a transactional operating system (TXOS).</p></htmltext>
<tokentext>This is nothing new , simply a modern implementation of a classic idea .
See " Hypervisor-based Fault Tolerance " by Bressoud and Schneider ( SIGOPS 1995 ) .
http://www.cs.cornell.edu/fbs/publications/HyperFTol.pdf
Every now and then , someone has to come along and pretend to do something new , either out of ignorance or the academic " publish or perish " pressure .
Just the other day , we were looking at yet another implementation of a transactional operating system ( TXOS ) .</tokentext>
<sentencetext>This is nothing new, simply a modern implementation of a classic idea.
See "Hypervisor-based Fault Tolerance" by Bressoud and Schneider (SIGOPS 1995).
http://www.cs.cornell.edu/fbs/publications/HyperFTol.pdf
Every now and then, someone has to come along and pretend to do something new, either out of ignorance or the academic "publish or perish" pressure.
Just the other day, we were looking at yet another implementation of a transactional operating system (TXOS).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069510</id>
	<title>Re:Already done by VMware</title>
	<author>ckaminski</author>
	<datestamp>1257098460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Prior art by HP which used to do this in Pentium-based Netservers?<br><br>Granted real hardware, as opposed to software, but perhaps?</htmltext>
<tokentext>Prior art by HP which used to do this in Pentium-based Netservers ?
Granted real hardware , as opposed to software , but perhaps ?</tokentext>
<sentencetext>Prior art by HP which used to do this in Pentium-based Netservers?Granted real hardware, as opposed to software, but perhaps?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067310</id>
	<title>Answer</title>
	<author>Anonymous</author>
	<datestamp>1257078360000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>I've worked with Remus, so I can answer your question.</p><p>It's not "constantly going" into live migration. The backup image is constantly kept in a "paused" state. It doesn't come out of the paused state until communication with the original is broken.</p><p>Until the backup goes live, the shadow pages for memory are updated via checkpoints. The checkpointing interval is somewhat variable, but it's actually hardcoded into the Xen software (at present; this will change), regardless of what the user-level utility tells you.</p><p>As it is, sub-second checkpointing doesn't work too well, but intervals of about 1-2 seconds work great. Sub-second checkpointing can be done (I've done it), but it needs more code than what Remus currently provides.</p><p>Similar comments apply to the storage updating. This works absolutely superbly if you're using something like DRBD for the storage replication.</p><p>Remus is pretty cool technology, and it serves as a very solid foundation for taking things to the next level.</p><p>The folks at UBC have done a superb job here, and should be well congratulated.</p></htmltext>
<tokenext>I 've worked with Remus , so I can answer your question .
It 's not " constantly going " into live migration .
The backup image is constantly kept in a " paused " state .
It does n't come out of the paused state until communication with the original is broken .
Until the backup goes live , the shadow pages for memory are updated , via checkpoints .
The checkpointing interval is somewhat variable , but it 's actually hardcoded into the Xen software ( at present - this will change ) , regardless of what the user level utility tells you .
As it is , sub-second checkpointing does n't work too well .
But intervals of about 1-2 seconds work great .
Getting sub-second checkpointing can be done ( I 've done it ) , but you need more code than what Remus currently provides .
Similar comments are applicable to the storage updating .
This works absolutely superbly if you 're using something like DRBD for the storage replication .
Remus is pretty cool technology , and it serves as a very solid foundation for taking things to the next level .
The folks at UBC have done a superb job here , and should be well congratulated .</tokentext>
<sentencetext>I've worked with Remus, so I can answer your question.
It's not "constantly going" into live migration.
The backup image is constantly kept in a "paused" state.
It doesn't come out of the paused state until communication with the original is broken.
Until the backup goes live, the shadow pages for memory are updated, via checkpoints.
The checkpointing interval is somewhat variable, but it's actually hardcoded into the Xen software (at present - this will change), regardless of what the user level utility tells you.
As it is, sub-second checkpointing doesn't work too well.
But intervals of about 1-2 seconds work great.
Getting sub-second checkpointing can be done (I've done it), but you need more code than what Remus currently provides.
Similar comments are applicable to the storage updating.
This works absolutely superbly if you're using something like DRBD for the storage replication.
Remus is pretty cool technology, and it serves as a very solid foundation for taking things to the next level.
The folks at UBC have done a superb job here, and should be well congratulated.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067076</parent>
</comment>
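The checkpoint/failover cycle described in the answer above can be sketched as a toy simulation. This is illustrative only: the class and function names below are hypothetical stand-ins, not the real Xen/Remus interfaces. The key idea from the comment is that the backup stays paused, each checkpoint copies only the memory pages dirtied since the last one, and any work done after the final checkpoint is lost on failover.

```python
# Toy sketch of Remus-style checkpointed replication (hypothetical names,
# not the real Xen/Remus API). The backup image stays paused; at each
# checkpoint only dirty memory pages are copied across.

class VM:
    def __init__(self, pages):
        self.memory = dict(pages)  # page_id -> contents
        self.dirty = set()         # pages written since the last checkpoint

    def write(self, page, value):
        self.memory[page] = value
        self.dirty.add(page)

def checkpoint(primary, backup_memory):
    # Copy only the dirtied pages to the paused replica, then reset.
    for page in primary.dirty:
        backup_memory[page] = primary.memory[page]
    primary.dirty.clear()

def failover(backup_memory):
    # On failure, the paused replica resumes from the last checkpoint.
    return VM(backup_memory)

primary = VM({0: "kernel", 1: "idle"})
backup_memory = dict(primary.memory)   # initial full copy

primary.write(1, "serving requests")
checkpoint(primary, backup_memory)     # this state is now replicated

primary.write(1, "work after last checkpoint")  # lost if we fail now
replica = failover(backup_memory)
```

The 1-2 second interval the commenter recommends is the trade-off this loop exposes: shorter intervals lose less work on failover but spend more time pausing and copying.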
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067248</id>
	<title>Re:Himalaya</title>
	<author>Anonymous</author>
	<datestamp>1257077880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays.</p></div><p>Precisely.</p><p>It's actually pretty cool from a computing-history aspect. Once upon a time, the mainframes were the bad-assed machines. Hot-swapping power supplies and core modules. Several nines of uptime. Now we're doing it in software.</p><p>I see it as a mirror to what's happening with data storage and the whole "cloud computing" thing. Going back and forth between big hosted machines with dumb clients and smaller, smarter machines. It's like we flip back and forth every few years when it comes to computer ideology.</p><p>I guess what I'm trying to get at is... I can't think of anything too insightful to say. The only thing that comes to mind is: it's pretty damn cool how old ideas become new ideas. How the archaic way of doing things suddenly finds a place with new technology.</p></htmltext>
<tokenext>How does this compare to a " big iron " solution like Tandem/Himalaya/NonStop/whatever-it 's-called-nowadays .
Precisely .
It 's actually pretty cool from a computing history aspect .
Once upon a time , the mainframes were the bad-assed machines .
Hot-swapping power supplies and core modules .
Several nines of uptime .
Now we 're doing it in software .
I see it as a mirror to what 's happening with data storage and the whole " cloud computing " thing .
Going back and forth between big hosted machines and dumb clients to smaller smarter machines .
It 's like we flip back and forth every few years when it comes to computer ideology .
I guess , what I 'm trying to get at is ... I ca n't think of anything too insightful to say .
The only thing that comes to mind is : It 's pretty damn cool how old ideas become new ideas .
How the archaic way of doing things suddenly finds a place with new technology .</tokentext>
<sentencetext>How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays.
Precisely.
It's actually pretty cool from a computing history aspect.
Once upon a time, the mainframes were the bad-assed machines.
Hot-swapping power supplies and core modules.
Several nines of uptime.
Now we're doing it in software.
I see it as a mirror to what's happening with data storage and the whole "cloud computing" thing.
Going back and forth between big hosted machines and dumb clients to smaller smarter machines.
It's like we flip back and forth every few years when it comes to computer ideology.
I guess, what I'm trying to get at is... I can't think of anything too insightful to say.
The only thing that comes to mind is: It's pretty damn cool how old ideas become new ideas.
How the archaic way of doing things suddenly finds a place with new technology.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30072556</id>
	<title>Be honest - did anyone actually understand this?</title>
	<author>rclandrum</author>
	<datestamp>1258041060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>After reading this announcement, I tried to imagine the earliest possible year in which a technical reader would be able to comprehend what is being described. 2004? 1998? Last week?  I'd never heard of either the Remus Project or the Xen hypervisor, and yet here I sit, merrily cranking out successful commercial software products, as I've been doing for the past 30 years.  It took me a bit of browsing to understand what was being described.<br> <br>

I wonder how many readers completely understood this announcement at face value without doing a little digging.  5?  10? Everybody but me?<br> <br>

I think if you tried keeping up with all the technology/terms in our field, it would be a full-time job.</htmltext>
<tokenext>After reading this announcement , I tried to imagine the earliest possible year in which a technical reader would be able to comprehend what is being described .
2004 ? 1998 ?
Last week ?
Never heard of either the Remus Project or the Xen hypervisor , and yet here I sit , merrily cranking out successful commercial software products , as I 've been doing for the past 30 years .
It took me a bit of browsing to understand what was being described .
I wonder how many readers completely understood this announcement at face value without doing a little digging .
5 ? 10 ?
Everybody but me ?
I think if you tried keeping up with all the technology/terms in our field , it would be a full time job .</tokentext>
<sentencetext>After reading this announcement, I tried to imagine the earliest possible year in which a technical reader would be able to comprehend what is being described.
2004? 1998?
Last week?
Never heard of either the Remus Project or the Xen hypervisor, and yet here I sit, merrily cranking out successful commercial software products, as I've been doing for the past 30 years.
It took me a bit of browsing to understand what was being described.
I wonder how many readers completely understood this announcement at face value without doing a little digging.
5?  10?
Everybody but me?
I think if you tried keeping up with all the technology/terms in our field, it would be a full time job.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30072488</id>
	<title>Re:state transfer</title>
	<author>MistrBlank</author>
	<datestamp>1258040640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You're confusing high availability with disaster recovery.  Don't worry, my managers can't get it right either.</p></htmltext>
<tokenext>You 're confusing high availability with disaster recovery .
Do n't worry , my managers ca n't get it right either .</tokentext>
<sentencetext>You're confusing high availability with disaster recovery.
Don't worry, my managers can't get it right either.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067442</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042</id>
	<title>Himalaya</title>
	<author>mwvdlee</author>
	<datestamp>1257076620000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays.</p></htmltext>
<tokenext>How does this compare to a " big iron " solution like Tandem/Himalaya/NonStop/whatever-it 's-called-nowadays .</tokentext>
<sentencetext>How does this compare to a "big iron" solution like Tandem/Himalaya/NonStop/whatever-it's-called-nowadays.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067756</id>
	<title>Re:Intact?</title>
	<author>stefanlasiewski</author>
	<datestamp>1257081120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Your complaint shows a lack of tact<nobr> <wbr></nobr>;)</p></htmltext>
<tokenext>Your complaint shows a lack of tact ; )</tokentext>
<sentencetext>Your complaint shows a lack of tact ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067078</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067616</id>
	<title>make dom0 support for recent kernels first</title>
	<author>Anonymous</author>
	<datestamp>1257080340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It is absolutely unbelievable that the official Xen kernel is still 2.6.18. There's a lot of modern hardware that isn't supported by it. This is an absolute show-stopper.</p></htmltext>
<tokenext>It is absolutely unbelievable that the official Xen kernel is still 2.6.18 .
There 's a lot of modern hardware that is n't supported by it .
This is an absolute show-stopper .</tokentext>
<sentencetext>It is absolutely unbelievable that the official Xen kernel is still 2.6.18. There's a lot of modern hardware that isn't supported by it.
This is an absolute show-stopper.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110</id>
	<title>state transfer</title>
	<author>Anonymous</author>
	<datestamp>1257076980000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>... Of course, this ignores the fact that if it's a software glitch, it'll happily replicate the bug into the copy. Also, there are certain hardware failures that will replicate too: Mountain Dew spilled on top of the unit, for example. There's this huge push for virtualization, but it only solves a few classes of failure conditions. No amount of virtualization will save you if the server room catches fire and the primary system and backup are colocated. Keep this in mind when talking about "High Availability" systems.</p><p>On a different note, nothing that's claimed to be transparent in IT ever is. Whenever I hear that word, I usually cancel my afternoon appointments... Nothing is ever transparent in this industry. Only managers use that word. The rest of us use the term "hopefully".</p></htmltext>
<tokenext>... Of course , this ignores the fact that if it 's a software glitch , it 'll happily replicate the bug into the copy .
Also , there are certain hardware bugs that will also replicate : Mountain dew spilled on top of the unit , for example .
There 's this huge push for virtualization , but it only solves a few classes of failure conditions .
No amount of virtualization will save you if the server room starts on fire and the primary system and backup are colocated .
Keep this in mind when talking about " High Availability " systems .
On a different note , nothing that 's claimed to be transparent in IT ever is .
Whenever I hear that word , I usually cancel my afternoon appointments... Nothing is ever transparent in this industry .
Only managers use that word .
The rest of us use the term " hopefully " .</tokentext>
<sentencetext>... Of course, this ignores the fact that if it's a software glitch, it'll happily replicate the bug into the copy.
Also, there are certain hardware bugs that will also replicate: Mountain dew spilled on top of the unit, for example.
There's this huge push for virtualization, but it only solves a few classes of failure conditions.
No amount of virtualization will save you if the server room starts on fire and the primary system and backup are colocated.
Keep this in mind when talking about "High Availability" systems.
On a different note, nothing that's claimed to be transparent in IT ever is.
Whenever I hear that word, I usually cancel my afternoon appointments... Nothing is ever transparent in this industry.
Only managers use that word.
The rest of us use the term "hopefully".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067760</id>
	<title>Re:Already done by VMware</title>
	<author>Anonymous</author>
	<datestamp>1257081180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This? http://www.vmware.com/products/fault-tolerance/</p></htmltext>
<tokenext>This ?
http : //www.vmware.com/products/fault-tolerance/</tokentext>
<sentencetext>This?
http://www.vmware.com/products/fault-tolerance/</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069350</id>
	<title>Re:It's pretty fun</title>
	<author>smash</author>
	<datestamp>1257096780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>a UPS does not protect against CPU/motherboard/ram hardware failure.  This sort of HA does.</htmltext>
<tokenext>a UPS does not protect against CPU/motherboard/ram hardware failure .
This sort of HA does .</tokentext>
<sentencetext>a UPS does not protect against CPU/motherboard/ram hardware failure.
This sort of HA does.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067410</id>
	<title>Nope</title>
	<author>Anonymous</author>
	<datestamp>1257078900000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Remus presented their software well before VMware came out with their product.</p><p>What's different now is that the Remus patches have finally been incorporated into the Xen source tree.</p><p>If VMware has any patents, they'll have to clear the hurdle of predating the Remus work, which was originally published a while ago.</p><p>Besides, Remus can be used in more ways than what VMware offers, since you have the source code.</p></htmltext>
<tokenext>Remus presented their software well before VMware came out with their product .
What 's different now is that the Remus patches have finally been incorporated into the Xen source tree .
If VMware has any patents , they 'll have to jump over the hurdle of being before the Remus work was originally published , which was a while ago .
Besides , Remus can be used in more ways than what VMware offers , since you have the source code .</tokentext>
<sentencetext>Remus presented their software well before VMware came out with their product.
What's different now is that the Remus patches have finally been incorporated into the Xen source tree.
If VMware has any patents, they'll have to jump over the hurdle of being before the Remus work was originally published, which was a while ago.
Besides, Remus can be used in more ways than what VMware offers, since you have the source code.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068386</id>
	<title>Re:Intact?</title>
	<author>martin-boundary</author>
	<datestamp>1257086400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>  Intact is one word</p></div>
</blockquote><p>
That was before someone gave Romulus a shovel!</p>
	</htmltext>
<tokenext>Intact is one word That was before someone gave Romulus a shovel !</tokentext>
<sentencetext>  Intact is one word

That was before someone gave Romulus a shovel!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067078</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069318</id>
	<title>I don't know how Dr. Breen is doing it. . .</title>
	<author>MagusSlurpy</author>
	<datestamp>1257096300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>but taking transparent high-availability to <a href="http://en.wikipedia.org/wiki/Xen_(Half-Life)#Xen" title="wikipedia.org">Xen</a> [wikipedia.org] can't bode well for Gordon <i>or</i> the Vortigaunts. . .</htmltext>
<tokenext>but taking transparent high-availability to Xen [ wikipedia.org ] ca n't bode well for Gordon or the Vortigaunts .
. .</tokentext>
<sentencetext>but taking transparent high-availability to Xen [wikipedia.org] can't bode well for Gordon or the Vortigaunts.
. .</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070910</id>
	<title>Re:Nope</title>
	<author>Anonymous</author>
	<datestamp>1258027200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's plain WRONG.  VMWARE demoed this at VMWARE 2007 while the REMUS paper wasn't published till 2008.</p></htmltext>
<tokenext>That 's plain WRONG .
VMWARE demoed this at VMWARE 2007 while the REMUS paper was n't published till 2008 .</tokentext>
<sentencetext>That's plain WRONG.
VMWARE demoed this at VMWARE 2007 while the REMUS paper wasn't published till 2008.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067410</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067002</id>
	<title>It's pretty fun</title>
	<author>Anonymous</author>
	<datestamp>1257076380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>It's pretty fun to yank the plug out on your web server and see everything continue to tick along.</p></div><p>

Or an ordinary, everyday, run-of-the-mill 'off the shelf' plain-jane beige UPS. Or a <a href="http://www.dansdata.com/diyups.htm" title="dansdata.com">ghetto one</a> [dansdata.com], if you'd like.<br> <br>

Still, it's pretty cool; just wondering how much overhead there is in setting up this system.</p>
	</htmltext>
<tokenext>It 's pretty fun to yank the plug out on your web server and see everything continue to tick along .
Or an ordinary , everyday , run-of-the-mill 'off the shelf ' plain-jane beige UPS .
Or a ghetto one [ dansdata.com ] , if you 'd like .
Still , it 's pretty cool ; just wondering how much overhead there is in setting up this system .</tokentext>
<sentencetext>It's pretty fun to yank the plug out on your web server and see everything continue to tick along.
Or an ordinary, everyday, run-of-the-mill 'off the shelf' plain-jane beige UPS.
Or a ghetto one [dansdata.com], if you'd like.
Still, it's pretty cool; just wondering how much overhead there is in setting up this system.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068462</id>
	<title>Re:Wrong place to put a failsafe?</title>
	<author>dido</author>
	<datestamp>1257087060000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>This is something that the much simpler Linux-HA environment deals with by using something they call STONITH, which basically means to Shoot The Other Node In The Head.  STONITH peripherals are devices that can completely shut down a server physically, e.g. a power strip that can be controlled via a serial port. If you wind up with a partitioned cluster, which they more colorfully call a 'split brain' condition, where each node thinks the other one is dead, each of them uses the STONITH device to make sure, if it is able.  One of them will activate the STONITH device before the other, and the one which wins keeps on running, while the one that loses really kicks the bucket if it isn't fully dead.  I imagine that Remus must have similar mechanisms to guard against split brain conditions as well.  I've had several Linux-HA clusters go split brain on me, and I tell you it's never pretty.  The best case is that they only both try to grab the same IP address and get an IP address conflict; in the worst case, they both try to mount and write to the same Fibre Channel disk at the same time and bollix the file system.  If a Remus-based cluster split brains, I can imagine that you'll get mayhem just as awful unless you have a STONITH-like system to prevent it from happening.</p></htmltext>
<tokenext>This is something that the much simpler Linux-HA environment deals with by using something they call STONITH , which basically means to Shoot The Other Node In The Head .
STONITH peripherals are devices that can completely shut down a server physically , e.g .
a power strip that can be controlled via a serial port .
If you wind up with a partitioned cluster , which they more colorfully call a 'split brain ' condition , where each node thinks the other one is dead , each of them uses the STONITH device to make sure , if it is able .
One of them will activate the STONITH device before the other , and the one which wins keeps on running , while the one that loses really kicks the bucket if it is n't fully dead .
I imagine that Remus must have similar mechanisms to guard against split brain conditions as well .
I 've had several Linux-HA clusters go split brain on me , and I tell you it 's never pretty .
The best case is that they only both try to grab the same IP address and get an IP address conflict , in the worst case , they both try to mount and write to the same fiberchannel disk at the same time and bollix the file system .
If a Remus-based cluster split brains , I can imagine that you 'll get mayhem just as awful unless you have a STONITH-like system to prevent it from happening .</tokentext>
<sentencetext>This is something that the much simpler Linux-HA environment deals with by using something they call STONITH, which basically means to Shoot The Other Node In The Head.
STONITH peripherals are devices that can completely shut down a server physically, e.g.
a power strip that can be controlled via a serial port.
If you wind up with a partitioned cluster, which they more colorfully call a 'split brain' condition, where each node thinks the other one is dead, each of them uses the STONITH device to make sure, if it is able.
One of them will activate the STONITH device before the other, and the one which wins keeps on running, while the one that loses really kicks the bucket if it isn't fully dead.
I imagine that Remus must have similar mechanisms to guard against split brain conditions as well.
I've had several Linux-HA clusters go split brain on me, and I tell you it's never pretty.
The best case is that they only both try to grab the same IP address and get an IP address conflict, in the worst case, they both try to mount and write to the same fiberchannel disk at the same time and bollix the file system.
If a Remus-based cluster split brains, I can imagine that you'll get mayhem just as awful unless you have a STONITH-like system to prevent it from happening.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068232</parent>
</comment>
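The fencing order the comment above describes (kill the peer before claiming shared resources) can be sketched roughly. The `PowerStrip` class and its methods below are hypothetical stand-ins for a real serial-controlled fencing device, not Linux-HA's actual API:

```python
# Rough sketch of STONITH-style fencing (hypothetical device API).
# On heartbeat loss, a node powers off its peer BEFORE taking over,
# so a split brain can never leave two live writers on the same disk.

class PowerStrip:
    """Stand-in for a fencing device, e.g. a serial-controlled power strip."""
    def __init__(self, nodes):
        self.powered = {n: True for n in nodes}

    def power_off(self, node):
        self.powered[node] = False

class Node:
    def __init__(self, name, peer, strip):
        self.name, self.peer, self.strip = name, peer, strip
        self.active = False  # holds the shared IP / disk when True

    def on_heartbeat_lost(self):
        # Fence first, take over second. If both nodes race here, the
        # one whose power_off lands first survives; the loser is dead
        # before it can grab the shared IP or mount the shared disk.
        self.strip.power_off(self.peer)
        self.active = True

strip = PowerStrip(["node-a", "node-b"])
a = Node("node-a", "node-b", strip)
a.on_heartbeat_lost()  # node-a fences node-b and takes over
```

The point of the ordering is exactly the failure mode described above: skipping the fence step is what lets both nodes grab the same IP address or write the same disk.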
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069682</id>
	<title>Re:state transfer</title>
	<author>shmlco</author>
	<datestamp>1257100320000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>"If your primary and secondary systems are physically located next to each other then they aren't in the category of highly available."</p><p>High availability covers more than just distributed data centers. Load-balancing, fail-over, clustering, mirroring, redundant switches, routers, and other hardware: all are zero-point-of-failure, high-availability solutions.</p></htmltext>
<tokenext>" If your primary and secondary systems are physically located next to each other then they are n't in the category of highly available .
" High availability covers more than just distributed data centers .
Load-balancing , fail-over , clustering , mirroring , redundant switches , routers , and other hardware : all are zero-point-of-failure , high availability solutions .</tokentext>
<sentencetext>"If your primary and secondary systems are physically located next to each other then they aren't in the category of highly available.
"High availability covers more than just distributed data centers.
Load-balancing, fail-over, clustering, mirroring, redundant switches, routers, and other hardware: all are zero-point-of-failure, high availability solutions.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067442</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067442</id>
	<title>Re:state transfer</title>
	<author>Vancorps</author>
	<datestamp>1257079080000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>If your primary and secondary systems are physically located next to each other then they aren't in the category of highly available. Furthermore, with storage replication and regular snapshotting you can have your virtual infrastructure at your DR site on the cheap while gaining enterprise availability and, most importantly, business continuity.</p><p>I'll agree with being skeptical about transparency, although how many people already have this? I went with XenServer and Citrix Essentials; I already have this fail-over, and I can tell you that it works. I physically pulled a blade out of the chassis and, sure enough, by the time I got back to my desk the servers were functioning, having dropped a whole packet. Further tweaking of the underlying network infrastructure resulted in keeping the packet, with just a momentary rise in latency.</p><p>Enterprise availability is fast coming to the little guys.</p></htmltext>
<tokenext>If your primary and secondary systems are physically located next to each other then they are n't in the category of highly available .
Furthermore with storage replication and regular snapshotting you can have your virtual infrastructure at your DR site on the cheap while gaining enterprise availability and most importantly , business continuity .
I 'll agree with being skeptical about transparency although how many people already have this ?
I went with XenServer and Citrix Essentials for it , I already have this fail-over and I can tell you that it works .
I physically pulled a blade out of the chassis and sure enough , by the time I got back to my desk the servers were functioning having dropped a whole packet .
Further tweaking of the underlying network infrastructure resulted in keeping the packet with just a momentary rise in latency .
Enterprise availability is fast coming to the little guys .</tokentext>
<sentencetext>If your primary and secondary systems are physically located next to each other then they aren't in the category of highly available.
Furthermore with storage replication and regular snapshotting you can have your virtual infrastructure at your DR site on the cheap while gaining enterprise availability and most importantly, business continuity.
I'll agree with being skeptical about transparency although how many people already have this?
I went with XenServer and Citrix Essentials for it, I already have this fail-over and I can tell you that it works.
I physically pulled a blade out of the chassis and sure enough, by the time I got back to my desk the servers were functioning having dropped a whole packet.
Further tweaking of the underlying network infrastructure resulted in keeping the packet with just a momentary rise in latency.
Enterprise availability is fast coming to the little guys. </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067078</id>
	<title>Intact?</title>
	<author>Anonymous</author>
	<datestamp>1257076800000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext>Intact is one word, O ye editors...</htmltext>
<tokenext>Intact is one word , O ye editors.. .</tokentext>
<sentencetext>Intact is one word, O ye editors...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067718</id>
	<title>Re:Already done by VMware</title>
	<author>nurb432</author>
	<datestamp>1257081000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And it didn't require any "really expensive hardware, or very complex and invasive modifications" to do it. Not saying it's going to run on some old beat up Pentium Pro from 10 years ago, but the hardware I see it run on every day isn't out of line for a modern data-center.</p><p>And it requires ZERO changes to the OS.</p><p>( at risk here of sounding like a Vmware fanboy, but come on.. at least they can present facts when tooting their horn )</p></htmltext>
<tokentext>And it did n't require any " really expensive hardware , or very complex and invasive modifications " to do it .
Not saying it 's going to run on some old beat up Pentium Pro from 10 years ago , but the hardware I see it run on every day is n't out of line for a modern data-center . And it requires ZERO changes to the OS .
( at risk here of sounding like a Vmware fanboy , but come on.. at least they can present facts when tooting their horn )</tokentext>
<sentencetext>And it didn't require any "really expensive hardware, or very complex and invasive modifications" to do it.
Not saying it's going to run on some old beat up Pentium Pro from 10 years ago, but the hardware I see it run on every day isn't out of line for a modern data-center. And it requires ZERO changes to the OS.
( at risk here of sounding like a Vmware fanboy, but come on.. at least they can present facts when tooting their horn )</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070374</id>
	<title>Xen</title>
	<author>Anonymous</author>
	<datestamp>1258019280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Remus Project Brings Transparent High Availability To Xen</p></div><p>But does it solve those awful jumping puzzles?</p></htmltext>
<tokentext>Remus Project Brings Transparent High Availability To Xen . But does it solve those awful jumping puzzles ?</tokentext>
<sentencetext>Remus Project Brings Transparent High Availability To Xen. But does it solve those awful jumping puzzles?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067194</id>
	<title>Re:Himalaya</title>
	<author>Anonymous</author>
	<datestamp>1257077580000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>I was just thinking that...</p><p>Tandems may still have other advantages, though; back in the day, we built a database on Himalayas/NSK because, availability aside, it outperformed Sybase, Oracle, and other solutions.  (They implemented SQL down at the drive controller level; it was ridiculously efficient.) No idea if that's still the case.</p><p>But Tandem required you to build their availability hooks into your app; it wasn't transparent. OTOH, Stratus's approach is: a Stratus server is like having RAID-1 for every component of your server.  I gotta think this will cut into their business.</p></htmltext>
<tokentext>I was just thinking that...Tandems may still have other advantages , though ; back in the day , we built a database on Himalayas/NSK because , availability aside , it outperformed Sybase , Oracle , and other solutions .
( They implemented SQL down at the drive controller level ; it was ridiculously efficient .
) No idea if that 's still the case . But Tandem required you to build their availability hooks into your app ; it was n't transparent .
OTOH , Stratus 's approach is : a Stratus server is like having RAID-1 for every component of your server .
I got ta think this will cut into their business .</tokentext>
<sentencetext>I was just thinking that...Tandems may still have other advantages, though; back in the day, we built a database on Himalayas/NSK because, availability aside, it outperformed Sybase, Oracle, and other solutions.
(They implemented SQL down at the drive controller level; it was ridiculously efficient.
) No idea if that's still the case. But Tandem required you to build their availability hooks into your app; it wasn't transparent.
OTOH, Stratus's approach is: a Stratus server is like having RAID-1 for every component of your server.
I gotta think this will cut into their business.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30084376</id>
	<title>Re:Already done by VMware</title>
	<author>Jacques Chester</author>
	<datestamp>1258145340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'd be surprised if the whole field isn't absolutely blanketed with patents by IBM. Mainframes have had this since the 80s or 90s, I think.</htmltext>
<tokentext>I 'd be surprised if the whole field is n't absolutely blanketed with patents by IBM .
Mainframes have had this since the 80s or 90s , I think .</tokentext>
<sentencetext>I'd be surprised if the whole field isn't absolutely blanketed with patents by IBM.
Mainframes have had this since the 80s or 90s, I think.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067316</id>
	<title>Re:Himalaya</title>
	<author>Anonymous</author>
	<datestamp>1257078420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I think Stratus still has some differentiation versus this approach since there's no hypervisor involved. However, this is very similar to what Marathon was already doing with Xen in their latest everRun products and this doesn't require the VM to be running windows.</p></htmltext>
<tokentext>I think Stratus still has some differentiation versus this approach since there 's no hypervisor involved .
However , this is very similar to what Marathon was already doing with Xen in their latest everRun products and this does n't require the VM to be running windows .</tokentext>
<sentencetext>I think Stratus still has some differentiation versus this approach since there's no hypervisor involved.
However, this is very similar to what Marathon was already doing with Xen in their latest everRun products and this doesn't require the VM to be running windows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067398</id>
	<title>How does it deal with replication latency?</title>
	<author>melted</author>
	<datestamp>1257078780000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I'm pretty sure that if I just yank the cable, not everything will be replicated.<nobr> <wbr></nobr>:-)</p></htmltext>
<tokentext>I 'm pretty sure that if I just yank the cable , not everything will be replicated .
: - )</tokentext>
<sentencetext>I'm pretty sure that if I just yank the cable, not everything will be replicated.
:-)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068618</id>
	<title>Re:Already done by VMware</title>
	<author>jipn4</author>
	<datestamp>1257088560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This sort of stuff is far older; it goes back to mainframe days and supercomputing.</p><p>Furthermore, the idea of running two machines in lockstep and failing over shouldn't be patentable at all.  Specific, particularly clever implementations of it might be, but those shouldn't preclude others from being able to create other implementations of the same functionality.</p></htmltext>
<tokentext>This sort of stuff is far older ; it goes back to mainframe days and supercomputing . Furthermore , the idea of running two machines in lockstep and failing over should n't be patentable at all .
Specific , particularly clever implementations of it might be , but those should n't preclude others from being able to create other implementations of the same functionality .</tokentext>
<sentencetext>This sort of stuff is far older; it goes back to mainframe days and supercomputing. Furthermore, the idea of running two machines in lockstep and failing over shouldn't be patentable at all.
Specific, particularly clever implementations of it might be, but those shouldn't preclude others from being able to create other implementations of the same functionality.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276</id>
	<title>Re:Already done by VMware</title>
	<author>Anonymous</author>
	<datestamp>1257085680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>beaten.  ESX 4.0 has vmware FT, and "lockstep" is patented I believe...</htmltext>
<tokentext>beaten .
ESX 4.0 has vmware FT , and " lockstep " is patented I believe.. .</tokentext>
<sentencetext>beaten.
ESX 4.0 has vmware FT, and "lockstep" is patented I believe...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30073250</id>
	<title>Re:Nope</title>
	<author>spotter</author>
	<datestamp>1258043760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>the remus paper references vmware's high availability.  (also was published in 2008 about 1.5 years ago, though don't know when it first started to be used, possibly before then)</p><p>however, incremental checkpoint precedes both.  See (pulling from my bibtex for paper I helped write)</p><p>author = "J. S. Plank and J. Xu and R. H. B. Netzer",<br>title = "{Compressed Differences: An Algorithm for Fast<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Incremental Checkpointing}",</p><p>author = {Roberto Gioiosa and Jose Carlos Sancho and Song Jiang and Fabrizio Petrini},<br>title = "{Transparent, Incremental Checkpointing at Kernel Level: a Foundation for Fault Tolerance for Parallel Computers}",</p><p>author = {Ashok Joshi and William Bridge and Juan Loaiza and Tirthankar Lahiri},<br>title = "{Checkpointing in Oracle}",</p><p>author = "Angkul Kongmunvattana and Santipong Tanchatchawal and Nian-Feng Tzeng",<br>title = "{Coherence-based Coordinated Checkpointing for Software Distributed Shared Memory Systems}",</p><p>as well as a paper I was a coauthor on where we continuously checkpointed a regular gnome desktop (along with its file system) and enabled you to restart it at any point in the past.</p><p>author = "Oren Laadan and Ricardo Baratto and Dan Phung and Shaya Potter and Jason Nieh",<br>title = {{DejaView: A Personal Virtual Computer Recorder}},</p></htmltext>
<tokentext>the remus paper references vmware 's high availability .
( also was published in 2008 about 1.5 years ago , though do n't know when it first started to be used , possibly before then ) however , incremental checkpoint precedes both .
See ( pulling from my bibtex for paper I helped write ) author = " J. S. Plank and J. Xu and R. H. B. Netzer " ,title = " { Compressed Differences : An Algorithm for Fast                                 Incremental Checkpointing } " ,author = { Roberto Gioiosa and Jose Carlos Sancho and Song Jiang and Fabrizio Petrini } ,title = " { Transparent , Incremental Checkpointing at Kernel Level : a Foundation for Fault Tolerance for Parallel Computers } " ,author = { Ashok Joshi and William Bridge and Juan Loaiza and Tirthankar Lahiri } ,title = " { Checkpointing in Oracle } " ,author = " Angkul Kongmunvattana and Santipong Tanchatchawal and Nian-Feng Tzeng " ,title = " { Coherence-based Coordinated Checkpointing for Software Distributed Shared Memory Systems } " ,as well as a paper I was a coauthor on where we continuously checkpointed a regular gnome desktop ( along with its file system ) and enabled you to restart it at any point in the past.author = " Oren Laadan and Ricardo Baratto and Dan Phung and Shaya Potter and Jason Nieh " ,title = { { DejaView : A Personal Virtual Computer Recorder } } ,</tokentext>
<sentencetext>the remus paper references vmware's high availability.
(also was published in 2008 about 1.5 years ago, though don't know when it first started to be used, possibly before then) however, incremental checkpoint precedes both.
See (pulling from my bibtex for paper I helped write):
author = "J. S. Plank and J. Xu and R. H. B. Netzer", title = "{Compressed Differences: An Algorithm for Fast Incremental Checkpointing}",
author = {Roberto Gioiosa and Jose Carlos Sancho and Song Jiang and Fabrizio Petrini}, title = "{Transparent, Incremental Checkpointing at Kernel Level: a Foundation for Fault Tolerance for Parallel Computers}",
author = {Ashok Joshi and William Bridge and Juan Loaiza and Tirthankar Lahiri}, title = "{Checkpointing in Oracle}",
author = "Angkul Kongmunvattana and Santipong Tanchatchawal and Nian-Feng Tzeng", title = "{Coherence-based Coordinated Checkpointing for Software Distributed Shared Memory Systems}",
as well as a paper I was a coauthor on where we continuously checkpointed a regular gnome desktop (along with its file system) and enabled you to restart it at any point in the past.
author = "Oren Laadan and Ricardo Baratto and Dan Phung and Shaya Potter and Jason Nieh", title = {{DejaView: A Personal Virtual Computer Recorder}},</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067410</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067622</id>
	<title>Re:Already done by VMware</title>
	<author>TheRaven64</author>
	<datestamp>1257080400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>I know that a company called Marathon Technologies owns a few patents in this area.  A few of their developers were at the XenSummit in 2007 where the project was originally presented.</htmltext>
<tokentext>I know that a company called Marathon Technologies owns a few patents in this area .
A few of their developers were at the XenSummit in 2007 where the project was originally presented .</tokentext>
<sentencetext>I know that a company called Marathon Technologies owns a few patents in this area.
A few of their developers were at the XenSummit in 2007 where the project was originally presented.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069682
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067442
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30073250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067410
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068232
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067248
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30090870
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067310
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067076
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067956
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067294
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067316
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067760
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068382
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30084376
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067622
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068386
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067078
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069350
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069510
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070910
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067410
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068618
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30084718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067424
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067756
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067078
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_11_2246226_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30072488
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067442
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067110
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067424
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067442
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069682
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30072488
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067042
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067194
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067316
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067672
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067248
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30072556
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067002
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069350
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069868
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30066950
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067622
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067718
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30084376
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070722
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30084718
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067760
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067410
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30070910
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30073250
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068276
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068382
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30069510
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068618
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067294
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067956
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067398
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068266
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068232
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067076
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067310
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30090870
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_11_2246226.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067078
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30068386
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_11_2246226.30067756
</commentlist>
</conversation>
