<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_02_0043258</id>
	<title>When VMware Performance Fails, Try BSD Jails</title>
	<author>kdawson</author>
	<datestamp>1243950360000</datestamp>
	<htmltext><a href="http://www.norwinter.com/" rel="nofollow">Siker</a> writes in to tell us about the experience of email transfer service YippieMove, which <a href="http://www.playingwithwire.com/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/">ditched VMware and switched to FreeBSD jails</a>. <i>"We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."</i></htmltext>
<tokenext>Siker writes in to tell us about the experience of email transfer service YippieMove , which ditched VMware and switched to FreeBSD jails .
" We doubled the amount of memory per server , we quadrupled SQLite 's internal buffers , we turned off SQLite auto-vacuuming , we turned off synchronization , we added more database indexes .
We were confused .
Certainly we had expected a performance difference between running our software in a VM compared to running on the metal , but that it could be as much as 10X was a wake-up call .
"</tokentext>
<sentencetext>Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails.
"We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes.
We were confused.
Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176817</id>
	<title>Not surprising</title>
	<author>Anonymous</author>
	<datestamp>1243869180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Virtual machines tend to be fast in theory, but slow in practice. Just look at Java.</p></htmltext>
<tokenext>Virtual machines tend to be fast in theory , but slow in practice .
Just look at Java .</tokentext>
<sentencetext>Virtual machines tend to be fast in theory, but slow in practice.
Just look at Java.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179383</id>
	<title>Re:Sounds about right</title>
	<author>busstop</author>
	<datestamp>1243939560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>imagine a beowolf cluster of these... ... ooops - wrong decade - sorry!</p></htmltext>
<tokenext>imagine a beowolf cluster of these... ... ooops - wrong decade - sorry !</tokentext>
<sentencetext>imagine a beowolf cluster of these... ... ooops - wrong decade - sorry!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181925</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>rbanffy</author>
	<datestamp>1243956480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It was OS7, IIRC, not OS9 ;-)</p></htmltext>
<tokenext>It was OS7 , IIRC , not OS9 ; - )</tokentext>
<sentencetext>It was OS7, IIRC, not OS9 ;-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179767</id>
	<title>Re:free beats fee most of the time</title>
	<author>pbhj</author>
	<datestamp>1243943820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's a barrier to entry .. if you know what LTSP is then the post might be relevant, if not then it certainly won't.</p><p>How had you not heard of LTSP?</p></htmltext>
<tokenext>It 's a barrier to entry .. if you know what LTSP is then the post might be relevant , if not then it certainly wo n't .
How had you not heard of LTSP ?</tokentext>
<sentencetext>It's a barrier to entry .. if you know what LTSP is then the post might be relevant, if not then it certainly won't.
How had you not heard of LTSP?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251</id>
	<title>Re:excellent sales story</title>
	<author>mysidia</author>
	<datestamp>1243872840000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>Totally unnecessary.   If you want a 'virtual SAN', you can of course create one using various techniques. The author's biggest problem is he's running VMware Server 1, probably on top of Windows, and then tried VMware Server 1 on top of Ubuntu.
</p><p>
Running one OS on top of another full-blown OS, with several layers of filesystem virtualization, no wonder it's slow; a hypervisor like ESX would be more appropriate.
</p><p>
VMware Server is great for small-scale implementation and testing.  VMware server is NOT suitable for  mid to large-scale production grade consolidation loads.
</p><p>
ESX or ESXi is VMware's  solution for such loads.
And by the way, a free standalone license for ESXi is available, just like a free license is available for running standalone VMware server.
</p><p>
And the I/O performance is near-native.    With  ESX4, on platforms that support I/O virtualization ,  Vt-d/IOMMU,   in  fact,  the  virtualization is hardware-assisted.
</p><p>
The VMware environment should be designed and configured by someone who is familiar with the technology.     A simple configuration error can totally screw your performance.     In  VMware Server, you really need to disable memory overcommit and shut off page trimming, or you'll be sorry  --  and there are definitely other aspects of VMware server that make it not suitable at all  (at least by default)  for anything large scale.
</p><p>
It's more than "how much memory and CPU" you have.   Other considerations also matter,  many of them are the same considerations for all server workloads...  e.g. how many drive spindles do you have at what access latency,  what's your total IOPs?
</p><p>
In my humble opinion, someone who would want to apply a production load on VMware server (instead of ESX) is not suitably briefed on the technology, doesn't understand how piss-poor VMware server's I/O performance is compared to ESXi, or just didn't bother to read all the documentation and other materials freely available.
</p><p>
Virtualization isn't a magic pill that lets you avoid properly understanding the technology you're deploying, make bad decisions, and still always get good results.
</p><p>
You get FreeBSD jails up and running, but you basically need to be skilled at FreeBSD, and understand how to properly deploy that OS in order to do it.
</p><p>
Otherwise, your jails might not work correctly, and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains instead.
</p></htmltext>
<tokenext>Totally unnecessary .
If you want a 'virtual SAN ' , you can of course create one using various techniques .
The author 's biggest problem is he 's running VMware Server 1 , probably on top of Windows , and then tried VMware Server 1 on top of Ubuntu .
Running one OS on top of another full-blown OS , with several layers of filesystem virtualization , no wonder it 's slow ; a hypervisor like ESX would be more appropriate .
VMware Server is great for small-scale implementation and testing .
VMware server is NOT suitable for mid to large-scale production grade consolidation loads .
ESX or ESXi is VMware 's solution for such loads .
And by the way , a free standalone license for ESXi is available , just like a free license is available for running standalone VMware server .
And the I/O performance is near-native .
With ESX4 , on platforms that support I/O virtualization , Vt-d/IOMMU , in fact , the virtualization is hardware-assisted .
The VMware environment should be designed and configured by someone who is familiar with the technology .
A simple configuration error can totally screw your performance .
In VMware Server , you really need to disable memory overcommit and shut off page trimming , or you 'll be sorry -- and there are definitely other aspects of VMware server that make it not suitable at all ( at least by default ) for anything large scale .
It 's more than " how much memory and CPU " you have .
Other considerations also matter , many of them are the same considerations for all server workloads... e.g. how many drive spindles do you have at what access latency , what 's your total IOPs ?
In my humble opinion , someone who would want to apply a production load on VMware server ( instead of ESX ) is not suitably briefed on the technology , does n't understand how piss-poor VMware server 's I/O performance is compared to ESXi , or just did n't bother to read all the documentation and other materials freely available .
Virtualization is n't a magic pill that lets you avoid properly understanding the technology you 're deploying , make bad decisions , and still always get good results .
You get FreeBSD jails up and running , but you basically need to be skilled at FreeBSD , and understand how to properly deploy that OS in order to do it .
Otherwise , your jails might not work correctly , and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains instead .
<sentencetext>Totally unnecessary.
If you want a 'virtual SAN', you can of course create one using various techniques.
The author's biggest problem is he's running VMware Server 1, probably on top of Windows, and then tried VMware Server 1 on top of Ubuntu.
Running one OS on top of another full-blown OS, with several layers of filesystem virtualization, no wonder it's slow; a hypervisor like ESX would be more appropriate.
VMware Server is great for small-scale implementation and testing.
VMware server is NOT suitable for  mid to large-scale production grade consolidation loads.
ESX or ESXi is VMware's  solution for such loads.
And by the way, a free standalone license for ESXi is available, just like a free license is available for running standalone VMware server.
And the I/O performance is near-native.
With  ESX4, on platforms that support I/O virtualization ,  Vt-d/IOMMU,   in  fact,  the  virtualization is hardware-assisted.
The VMware environment should be designed and configured by someone who is familiar with the technology.
A simple configuration error can totally screw your performance.
In  VMware Server, you really need to disable memory overcommit and shut off page trimming, or you'll be sorry  --  and there are definitely other aspects of VMware server that make it not suitable at all  (at least by default)  for anything large scale.
It's more than "how much memory and CPU" you have.
Other considerations also matter,  many of them are the same considerations for all server workloads...  e.g. how many drive spindles do you have at what access latency,  what's your total IOPs?
In my humble opinion, someone who would want to apply a production load on VMware server (instead of ESX) is not suitably briefed on the technology, doesn't understand how piss-poor VMware server's I/O performance is compared to ESXi, or just didn't bother to read all the documentation and other materials freely available.
Virtualization isn't a magic pill that lets you avoid properly understanding the technology you're deploying, make bad decisions, and still always get good results.
You get FreeBSD jails up and running, but you basically need to be skilled at FreeBSD, and understand how to properly deploy that OS in order to do it.
Otherwise, your jails might not work correctly, and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains instead.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177831</id>
	<title>Virtualization is a gift for Windows servers!</title>
	<author>JakFrost</author>
	<datestamp>1243878900000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>I've worked for many of the Fortune 10 (DB, GS, CS, JP, MS, etc.) banks on the Windows server side and they are all going full steam ahead for virtualization with VMWare or Xen exactly because they have been buying way too much hardware for their backend applications for the last decade.  The utilization on all of these servers hardly hits 5-10% and the vast majority of time these systems sit idle.  The standard has always been rackmount servers with multiple processor/core systems with gigs of memory all sitting around being unused, mostly Compaq/HP systems with IBM xSeries servers and some Dells thrown in for good measure.</p><p>The reason for this over-capitalization has been the requirement that the business line departments choose only from four or five server models for their backend application.  These standard configs are usually configured in rackmount spaces 1U, 2U, 3U, and 4U sizes and with nearly maxed out specs for each size and the size of the server determines the performance you get.  You have a light web server you get a blade or a pizza box, you have a light backend application you get a 2U server with two processors or four cores even though you might have a single threaded app that was ported from MS-DOS a few years ago, you want something beefier you get the 4U server with 4 processors, 8 cores and 16 GB of RAM even though your application only runs two threads and allocates 512MB of ram maximum.  I've monitored thousands of these servers through IBM Director, InsightManager, and NetIQ for performance and 99% of the time these servers are at 2% processor and memory utilization and only once in a while for a short amount of time one or two of the cores get hit with a low-mid work load for processing and then go back to doing nothing.  These were the Production servers.</p><p>Now consider the Development servers, where a bank has 500 servers dedicated for developer usage with the same specs as the production boxes and at any one time maybe a few of those servers get used for testing while the other few hundred sit around doing nothing while the developers get a new release ready for weeks at a time.  The first systems to get virtualized were the development servers because they were so underutilized that it was unthinkable.</p><p> <i>(Off topic: Funny and sad story from my days in 2007 at a top bank (CS) helping with VMWare virtualization onto HP Blades and 3Par SAN storage for ~500 development servers.  The 3Par hardware and firmware was in such a shitty state that it crashed the entire SAN frame multiple times crashing hundreds of development servers at the same time during heavy I/O load.  The 3Par would play the blame game against other vendors accusing Brocade for faulty SAN fibre switches, Emulex for faulty hardware and drivers, HP Blade and IBM Blade for faulty server, and the Windows admins for incompetence.  Only to find that it was their SAN interface firmware causing the crashes.)</i> </p><p>VMWare solves the problem of running commercial backend applications on Windows servers since each application is so specific due to the requirements of the OS version, service pack, hotfixes, patches, configurations that the standard is always one-server to one-application and nobody ever wanted to mix them because any issue would always be blamed on the other vendor's application on the server.  There were always talks from management about providing capacity to businesses that is scalable instead of providing them with single servers with a single OS.  
That was five years ago and people wanted to use Windows Capacity Management features but they were a joke since they were based on per-process usage quotas and of course nobody wanted to mix two different apps on the same box so those talks went nowhere.</p><p>That is until VMWare showed up and showed a real way to isolate each OS instance from another while it also allowed us to configure capacity requirements on each instance while letting us package all those shitty single threaded backend applications each running on a separate server onto on</p></htmltext>
<tokenext>I 've worked for many of the Fortune 10 ( DB , GS , CS , JP , MS , etc . ) banks on the Windows server side and they are all going full steam ahead for virtualization with VMWare or Xen exactly because they have been buying way too much hardware for their backend applications for the last decade .
The utilization on all of these servers hardly hits 5-10 % and the vast majority of time these systems sit idle .
The standard has always been rackmount servers with multiple processor/core systems with gigs of memory all sitting around being unused , mostly Compaq/HP systems with IBM xSeries servers and some Dells thrown in for good measure .
The reason for this over-capitalization has been the requirement that the business line departments choose only from four or five server models for their backend application .
These standard configs are usually configured in rackmount spaces 1U , 2U , 3U , and 4U sizes and with nearly maxed out specs for each size and the size of the server determines the performance you get .
You have a light web server you get a blade or a pizza box , you have a light backend application you get a 2U server with two processors or four cores even though you might have a single threaded app that was ported from MS-DOS a few years ago , you want something beefier you get the 4U server with 4 processors , 8 cores and 16 GB of RAM even though your application only runs two threads and allocates 512MB of ram maximum .
I 've monitored thousands of these servers through IBM Director , InsightManager , and NetIQ for performance and 99 % of the time these servers are at 2 % processor and memory utilization and only once in a while for a short amount of time one or two of the cores get hit with a low-mid work load for processing and then go back to doing nothing .
These were the Production servers .
Now consider the Development servers , where a bank has 500 servers dedicated for developer usage with the same specs as the production boxes and at any one time maybe a few of those servers get used for testing while the other few hundred sit around doing nothing while the developers get a new release ready for weeks at a time .
The first systems to get virtualized were the development servers because they were so underutilized that it was unthinkable .
( Off topic : Funny and sad story from my days in 2007 at a top bank ( CS ) helping with VMWare virtualization onto HP Blades and 3Par SAN storage for ~ 500 development servers .
The 3Par hardware and firmware was in such a shitty state that it crashed the entire SAN frame multiple times crashing hundreds of development servers at the same time during heavy I/O load .
The 3Par would play the blame game against other vendors accusing Brocade for faulty SAN fibre switches , Emulex for faulty hardware and drivers , HP Blade and IBM Blade for faulty server , and the Windows admins for incompetence .
Only to find that it was their SAN interface firmware causing the crashes . )
VMWare solves the problem of running commercial backend applications on Windows servers since each application is so specific due to the requirements of the OS version , service pack , hotfixes , patches , configurations that the standard is always one-server to one-application and nobody ever wanted to mix them because any issue would always be blamed on the other vendor 's application on the server .
There were always talks from management about providing capacity to businesses that is scalable instead of providing them with single servers with a single OS .
That was five years ago and people wanted to use Windows Capacity Management features but they were a joke since they were based on per-process usage quotas and of course nobody wanted to mix two different apps on the same box so those talks went nowhere .
That is until VMWare showed up and showed a real way to isolate each OS instance from another while it also allowed us to configure capacity requirements on each instance while letting us package all those shitty single threaded backend applications each running on a separate server onto on</tokentext>
<sentencetext>I've worked for many of the Fortune 10 (DB, GS, CS, JP, MS, etc.) banks on the Windows server side and they are all going full steam ahead for virtualization with VMWare or Xen exactly because they have been buying way too much hardware for their backend applications for the last decade.
The utilization on all of these servers hardly hits 5-10% and the vast majority of time these systems sit idle.
The standard has always been rackmount servers with multiple processor/core systems with gigs of memory all sitting around being unused, mostly Compaq/HP systems with IBM xSeries servers and some Dells thrown in for good measure.
The reason for this over-capitalization has been the requirement that the business line departments choose only from four or five server models for their backend application.
These standard configs are usually configured in rackmount spaces 1U, 2U, 3U, and 4U sizes and with nearly maxed out specs for each size and the size of the server determines the performance you get.
You have a light web server you get a blade or a pizza box, you have a light backend application you get a 2U server with two processors or four cores even though you might have a single threaded app that was ported from MS-DOS a few years ago, you want something beefier you get the 4U server with 4 processors, 8 cores and 16 GB of RAM even though your application only runs two threads and allocates 512MB of ram maximum.
I've monitored thousands of these servers through IBM Director, InsightManager, and NetIQ for performance and 99% of the time these servers are at 2% processor and memory utilization and only once in a while for a short amount of time one or two of the cores get hit with a low-mid work load for processing and then go back to doing nothing.
These were the Production servers.
Now consider the Development servers, where a bank has 500 servers dedicated for developer usage with the same specs as the production boxes and at any one time maybe a few of those servers get used for testing while the other few hundred sit around doing nothing while the developers get a new release ready for weeks at a time.
The first systems to get virtualized were the development servers because they were so underutilized that it was unthinkable.
(Off topic: Funny and sad story from my days in 2007 at a top bank (CS) helping with VMWare virtualization onto HP Blades and 3Par SAN storage for ~500 development servers.
The 3Par hardware and firmware was in such a shitty state that it crashed the entire SAN frame multiple times crashing hundreds of development servers at the same time during heavy I/O load.
The 3Par would play the blame game against other vendors accusing Brocade for faulty SAN fibre switches, Emulex for faulty hardware and drivers, HP Blade and IBM Blade for faulty server, and the Windows admins for incompetence.
Only to find that it was their SAN interface firmware causing the crashes.)
VMWare solves the problem of running commercial backend applications on Windows servers since each application is so specific due to the requirements of the OS version, service pack, hotfixes, patches, configurations that the standard is always one-server to one-application and nobody ever wanted to mix them because any issue would always be blamed on the other vendor's application on the server.
There were always talks from management about providing capacity to businesses that is scalable instead of providing them with single servers with a single OS.
That was five years ago and people wanted to use Windows Capacity Management features but they were a joke since they were based on per-process usage quotas and of course nobody wanted to mix two different apps on the same box so those talks went nowhere.
That is until VMWare showed up and showed a real way to isolate each OS instance from another while it also allowed us to configure capacity requirements on each instance while letting us package all those shitty single threaded backend applications each running on a separate server onto on</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176973</id>
	<title>Interesting...</title>
	<author>certain death</author>
	<datestamp>1243870140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I can see how running multiple processes would make Jail better for *BSD, but if you want to run an entirely different OS in a VM, it just isn't there.  That said, I don't think VMware is as awesome as Xen, but Xen has trouble running certain OSes that VMware can run without issue (within reason), so I think they all have their strong areas of coverage.</htmltext>
<tokenext>I can see how running multiple processes would make Jail better for * BSD , but if you want to run an entirely different OS in a VM , it just is n't there .
That said , I do n't think VMware is as awesome as Xen , but Xen has trouble running certain OSes that VMware can run without issue ( within reason ) , so I think they all have their strong areas of coverage .</tokentext>
<sentencetext>I can see how running multiple processes would make Jail better for *BSD, but if you want to run an entirely different OS in a VM, it just isn't there.
That said, I don't think VMware is as awesome as Xen, but Xen has trouble running certain OSes that VMware can run without issue (within reason), so I think they all have their strong areas of coverage.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28202077</id>
	<title>Re:Government IT is being poisoned by virtualizati</title>
	<author>Anonymous</author>
	<datestamp>1244026320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>The new buzzword of Virtualization has reached all corners of the US Government IT realm.  Blinded by the marketing hype of "consolidation" and "power savings" agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure.  Cross-domain security be damned, VMWare and Microsoft SoftGrid Hyper-v Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.</p></div><p>Both zones on Solaris 10 (+Trusted Extensions) and VMware are rated MLS-capable and can be used for cross-domain security. Solaris can also do CIPSO tagging on the network as well AFAIK.</p><p>If you want to take it for a spin, Solaris 10 (and TX) run under VMware just fine, and so you can play with MLS on your own system.</p>
	</htmltext>
<tokenext>The new buzzword of Virtualization has reached all corners of the US Government IT realm .
Blinded by the marketing hype of " consolidation " and " power savings " agencies of the three-letter variety are falling over themselves awarding contracts to " virtualize " the infrastructure .
Cross-domain security be damned , VMWare and Microsoft SoftGrid Hyper-v Softricity Whatevers will solve all their problems and help us go green at the very same time , for every application , in every environment , for no reason .
Both zones on Solaris 10 ( + Trusted Extensions ) and VMware are rated MLS-capable and can be used for cross-domain security .
Solaris can also do CIPSO tagging on the network as well AFAIK .
If you want to take it for a spin , Solaris 10 ( and TX ) run under VMware just fine , and so you can play with MLS on your own system .</tokentext>
<sentencetext>The new buzzword of Virtualization has reached all corners of the US Government IT realm.
Blinded by the marketing hype of "consolidation" and "power savings" agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure.
Cross-domain security be damned, VMWare and Microsoft SoftGrid Hyper-v Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.
Both zones on Solaris 10 (+Trusted Extensions) and VMware are rated MLS-capable and can be used for cross-domain security.
Solaris can also do CIPSO tagging on the network as well AFAIK.
If you want to take it for a spin, Solaris 10 (and TX) run under VMware just fine, and so you can play with MLS on your own system.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176825</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180987</id>
	<title>Server is a bad choice for a production server!</title>
	<author>DecepticonEazyE</author>
	<datestamp>1243952460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>They compared it to Server v1?  That's an unfair comparison.  When you stack a hypervisor on top of another OS, yeah, there's going to be overhead.  Probably too much overhead for a production server.  Compare it to ESX3.5 or even ESX3i, then we'll talk.</htmltext>
<tokenext>They compared it to Server v1 ?
That 's an unfair comparison .
When you stack a hypervisor on top of another OS , yeah , there 's going to be overhead .
Probably too much overhead for a production server .
Compare it to ESX3.5 or even ESX3i , then we 'll talk .</tokentext>
<sentencetext>They compared it to Server v1?
That's an unfair comparison.
When you stack a hypervisor on top of another OS, yeah, there's going to be overhead.
Probably too much overhead for a production server.
Compare it to ESX3.5 or even ESX3i, then we'll talk.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176841</id>
	<title>Virtualization != Performance</title>
	<author>gmuslera</author>
	<datestamp>1243869300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>If you really need all the performance you can get for a service, don't virtualize it, or at least check that what you can get is enough. Virtualization has a lot of advantages, but it doesn't give you the full resources of the real machine it is running on (and while how much you lose depends on the kind of virtualization you use, it still won't be everything). Maybe the 10x number could be VMWare's fault or just a reasonable consequence of how it does virtualization (maybe taking disk IO performance into account could explain a good percent of that number).</htmltext>
<tokenext>If you really need all the performance you can get for a service , do n't virtualize it , or at least check that what you can get is enough .
Virtualization has a lot of advantages , but it does n't give you the full resources of the real machine it is running on ( and while how much you lose depends on the kind of virtualization you use , it still wo n't be everything ) .
Maybe the 10x number could be VMWare 's fault or just a reasonable consequence of how it does virtualization ( maybe taking disk IO performance into account could explain a good percent of that number ) .</tokentext>
<sentencetext>If you really need all the performance you can get for a service, don't virtualize it, or at least check that what you can get is enough.
Virtualization has a lot of advantages, but it doesn't give you the full resources of the real machine it is running on (and while how much you lose depends on the kind of virtualization you use, it still won't be everything).
Maybe the 10x number could be VMWare's fault or just a reasonable consequence of how it does virtualization (maybe taking disk IO performance into account could explain a good percent of that number).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176709</id>
	<title>-1, Flamebait</title>
	<author>Anonymous</author>
	<datestamp>1243868340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>TFA: "Error establishing a database connection"</p><p>So much for that.  Also, am I correct in assuming BSD's jail is the equivalent of Linux's chroot?  Is this another case of "Didn't know I should have been limiting processes instead of virtualizing another OS for a single process" stories?  I mean .. isn't that, well, obvious?</p></htmltext>
<tokenext>TFA : " Error establishing a database connection " So much for that .
Also , am I correct in assuming BSD 's jail is the equivalent of Linux 's chroot ?
Is this another case of " Did n't know I should have been limiting processes instead of virtualizing another OS for a single process " stories ?
I mean .. is n't that , well , obvious ?</tokentext>
<sentencetext>TFA: "Error establishing a database connection"So much for that.
Also, am I correct in assuming BSD's jail is the equivalent of Linux's chroot?
Is this another case of "Didn't know I should have been limiting processes instead of virtualizing another OS for a single process" stories?
I mean .. isn't that, well, obvious?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177003</id>
	<title>Re:Is this a surprise?</title>
	<author>QuoteMstr</author>
	<datestamp>1243870380000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><p>Or <a href="http://www.ibm.com/developerworks/linux/library/l-lxc-containers/?ca=dgr-lnxw07Linux-Containers&amp;S_TACT=105AGX59&amp;S_CMP=grsitelnxw07" title="ibm.com">Linux containers</a> [ibm.com] for that matter.</p><p>(Or for something more mature today, but implemented as a large out-of-tree patch, <a href="http://wiki.openvz.org/Main_Page" title="openvz.org">OpenVZ</a> [openvz.org])</p></htmltext>
<tokenext>Or Linux containers [ ibm.com ] for that matter .
( Or for something more mature today , but implemented as a large out-of-tree patch , OpenVZ [ openvz.org ] )</tokentext>
<sentencetext>Or Linux containers [ibm.com] for that matter.
(Or for something more mature today, but implemented as a large out-of-tree patch, OpenVZ [openvz.org])</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176889</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28189249</id>
	<title>Re:excellent sales story</title>
	<author>geniusj</author>
	<datestamp>1243943700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm going to assume you work for a mid-large business where most of the servers are idling most of the time or where most processing is batch and not real-time.  Virtualization can be a benefit in these types of environments, but it doesn't fit everywhere.  I'd hope that you wouldn't go to google, for example, and suggest that they move their servers to virtualized infrastructure.</p><p>Google is actually a great counter-example to your arguments here.  If you have the right processes in place and put some actual thought into your infrastructure, the result can be very manageable.  But I understand that a lot of companies don't want to pay for the talent to put these kinds of things in place and find it much cheaper to buy a software package from VMware.  They'll just pay a performance cost, but for many of them, it doesn't really matter.</p><p>It kind of reminds me of the JVM/CLR vs native code arguments.</p></htmltext>
<tokenext>I 'm going to assume you work for a mid-large business where most of the servers are idling most of the time or where most processing is batch and not real-time .
Virtualization can be a benefit in these types of environments , but it does n't fit everywhere .
I 'd hope that you would n't go to google , for example , and suggest that they move their servers to virtualized infrastructure .
Google is actually a great counter-example to your arguments here .
If you have the right processes in place and put some actual thought into your infrastructure , the result can be very manageable .
But I understand that a lot of companies do n't want to pay for the talent to put these kinds of things in place and find it much cheaper to buy a software package from VMware .
They 'll just pay a performance cost , but for many of them , it does n't really matter .
It kind of reminds me of the JVM/CLR vs native code arguments .</tokentext>
<sentencetext>I'm going to assume you work for a mid-large business where most of the servers are idling most of the time or where most processing is batch and not real-time.
Virtualization can be a benefit in these types of environments, but it doesn't fit everywhere.
I'd hope that you wouldn't go to google, for example, and suggest that they move their servers to virtualized infrastructure.
Google is actually a great counter-example to your arguments here.
If you have the right processes in place and put some actual thought into your infrastructure, the result can be very manageable.
But I understand that a lot of companies don't want to pay for the talent to put these kinds of things in place and find it much cheaper to buy a software package from VMware.
They'll just pay a performance cost, but for many of them, it doesn't really matter.
It kind of reminds me of the JVM/CLR vs native code arguments.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180691</id>
	<title>Re:Different tools for different jobs</title>
	<author>Anonymous</author>
	<datestamp>1243950660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't..."</p><p>No chance that all the bandwidth they have is being sucked up?</p><p>"FreeBSD hasn't been a supported OS on ESX Server."</p><p>Not applicable since when I RTFA it said they were running VMware Server 1.</p></htmltext>
<tokenext>" So I would love to RTFA to make sure about this , but their high-performance web servers running on FreeBSD jails are down , so I ca n't... " No chance that all the bandwidth they have is being sucked up ?
" FreeBSD has n't been a supported OS on ESX Server .
" Not applicable since when I RTFA it said they were running VMware Server 1 .</tokentext>
<sentencetext>"So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't..."No chance that all the bandwidth they have is being sucked up?
"FreeBSD hasn't been a supported OS on ESX Server.
"Not applicable since when I RTFA it said they were running VMware Server 1.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180821</id>
	<title>Re:Virtualization is good enough</title>
	<author>Anonymous</author>
	<datestamp>1243951440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This isn't an issue of 10-20% overhead, this is an issue of the application running ten times slower (RTFA). If you think a VM only adds 10-20% overhead, you are deceiving yourself and have not actually measured. It's like the difference between a compiled and an interpreted language all over again.</p></htmltext>
<tokenext>This is n't an issue of 10-20 % overhead , this is an issue of the application running ten times slower ( RTFA ) .
If you think a VM only adds 10-20 % overhead , you are deceiving yourself and have not actually measured .
It 's like the difference between a compiled and an interpreted language all over again .</tokentext>
<sentencetext>This isn't an issue of 10-20% overhead, this is an issue of the application running ten times slower (RTFA).
If you think a VM only adds 10-20% overhead, you are deceiving yourself and have not actually measured.
It's like the difference between a compiled and an interpreted language all over again.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179301</id>
	<title>Re:free beats fee most of the time</title>
	<author>Anonymous</author>
	<datestamp>1243938660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't know, you should go set your CSS and SMTP preferences and the SLA servers should give you the correct BDSM.</htmltext>
<tokenext>I do n't know , you should go set your CSS and SMTP preferences and the SLA servers should give you the correct BDSM .</tokentext>
<sentencetext>I don't know, you should go set your CSS and SMTP preferences and the SLA servers should give you the correct BDSM.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179821</id>
	<title>Re:excellent sales story</title>
	<author>AigariusDebian</author>
	<datestamp>1243944300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Or they could use an actual database. One that is designed for performance. Like MySQL or even Oracle if they are so big. SQLite is not suited for any production deployment. It is a good database for development and, possibly, embedded work, but for anything bigger than an iPhone app you should use a real database in production.</p></htmltext>
<tokenext>Or they could use an actual database .
One that is designed for performance .
Like MySQL or even Oracle if they are so big .
SQLite is not suited for any production deployment .
It is a good database for development and , possibly , embedded work , but for anything bigger than an iPhone app you should use a real database in production .</tokentext>
<sentencetext>Or they could use an actual database.
One that is designed for performance.
Like MySQL or even Oracle if they are so big.
SQLite is not suited for any production deployment.
It is a good database for development and, possibly, embedded work, but for anything bigger than an iPhone app you should use a real database in production.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177157</id>
	<title>Well, duh!</title>
	<author>www.sorehands.com</author>
	<datestamp>1243872000000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>1</modscore>
	<htmltext><p>You ask that the OS be put into a virtual machine, would you not expect a big performance hit???  It is only common sense to anyone with any basic computer knowledge. You are adding another layer between the hardware and the program, what do you think would happen?</p></htmltext>
<tokenext>You ask that the OS be put into a virtual machine , would you not expect a big performance hit ? ? ?
It is only common sense to anyone with any basic computer knowledge .
You are adding another layer between the hardware and the program , what do you think would happen ?</tokentext>
<sentencetext>You ask that the OS be put into a virtual machine, would you not expect a big performance hit???
It is only common sense to anyone with any basic computer knowledge.
You are adding another layer between the hardware and the program, what do you think would happen?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176825</id>
	<title>Government IT is being poisoned by virtualization</title>
	<author>kriston</author>
	<datestamp>1243869240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>The new buzzword of Virtualization has reached all corners of the US Government IT realm.  Blinded by the marketing hype of "consolidation" and "power savings" agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure.  Cross-domain security be damned, VMWare and Microsoft SoftGrid Hyper-v Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.</p><p>This is the recovery from the client-server binge-and-purge of the 1990s.</p><p>Here we go again.</p></htmltext>
<tokenext>The new buzzword of Virtualization has reached all corners of the US Government IT realm .
Blinded by the marketing hype of " consolidation " and " power savings " agencies of the three-letter variety are falling over themselves awarding contracts to " virtualize " the infrastructure .
Cross-domain security be damned , VMWare and Microsoft SoftGrid Hyper-v Softricity Whatevers will solve all their problems and help us go green at the very same time , for every application , in every environment , for no reason .
This is the recovery from the client-server binge-and-purge of the 1990s .
Here we go again .</tokentext>
<sentencetext>The new buzzword of Virtualization has reached all corners of the US Government IT realm.
Blinded by the marketing hype of "consolidation" and "power savings" agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure.
Cross-domain security be damned, VMWare and Microsoft SoftGrid Hyper-v Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.
This is the recovery from the client-server binge-and-purge of the 1990s.
Here we go again.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178271</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>Anonymous</author>
	<datestamp>1243884780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>FreeBSD's jails make sense on paper, but make little sense -- especially from an administration point of view -- when implemented.  FreeBSD jails are nothing more than an overglorified chroot.</p><p>There are many userland utilities which break horribly with jails.</p><p>And the whole "copy /dev and random bullshit from /bin, /usr/libexec, /usr/lib, etc." concept is an absolute disgrace.  Good luck keeping all of that managed/updated the next time you build/install world.</p><p>Avoid FreeBSD jails.  Surely Linux has something like jails which doesn't involve this kind of ancient idiocy.</p></htmltext>
<tokenext>FreeBSD 's jails make sense on paper , but make little sense -- especially from an administration point of view -- when implemented .
FreeBSD jails are nothing more than an overglorified chroot .
There are many userland utilities which break horribly with jails .
And the whole " copy /dev and random bullshit from /bin , /usr/libexec , /usr/lib , etc . " concept is an absolute disgrace .
Good luck keeping all of that managed/updated the next time you build/install world .
Avoid FreeBSD jails .
Surely Linux has something like jails which does n't involve this kind of ancient idiocy .</tokentext>
<sentencetext>FreeBSD's jails make sense on paper, but make little sense -- especially from an administration point of view -- when implemented.
FreeBSD jails are nothing more than an overglorified chroot.
There are many userland utilities which break horribly with jails.
And the whole "copy /dev and random bullshit from /bin, /usr/libexec, /usr/lib, etc." concept is an absolute disgrace.
Good luck keeping all of that managed/updated the next time you build/install world.
Avoid FreeBSD jails.
Surely Linux has something like jails which doesn't involve this kind of ancient idiocy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</id>
	<title>Re:excellent sales story</title>
	<author>gfody</author>
	<datestamp>1243869720000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Most of the performance issues and I think also the issue faced in TFA have to do with IO performance when using virtual hard drives especially of the sparse-file, auto-growing variety. If they would configure their VMs to have direct access to a dedicated volume they would probably get their 10x performance back in DB applications.</p><p>It would be nice to see some sort of virtual SAN integrated into the VMs</p></htmltext>
<tokenext>Most of the performance issues and I think also the issue faced in TFA have to do with IO performance when using virtual hard drives especially of the sparse-file , auto-growing variety .
If they would configure their VMs to have direct access to a dedicated volume they would probably get their 10x performance back in DB applications .
It would be nice to see some sort of virtual SAN integrated into the VMs</tokentext>
<sentencetext>Most of the performance issues and I think also the issue faced in TFA have to do with IO performance when using virtual hard drives especially of the sparse-file, auto-growing variety.
If they would configure their VMs to have direct access to a dedicated volume they would probably get their 10x performance back in DB applications.
It would be nice to see some sort of virtual SAN integrated into the VMs</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178103</id>
	<title>Re:excellent sales story</title>
	<author>Spad</author>
	<datestamp>1243882320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://communities.vmware.com/community/beta/vcserver_linux" title="vmware.com">http://communities.vmware.com/community/beta/vcserver_linux</a> [vmware.com]</p><p>Beta, but available and you can run it in a VM on top of ESX so you don't have any additional costs other than the hit of running one extra (fairly low impact) guest OS in your ESX environment.</p></htmltext>
<tokenext>http : //communities.vmware.com/community/beta/vcserver_linux [ vmware.com ] Beta , but available and you can run it in a VM on top of ESX so you do n't have any additional costs other than the hit of running one extra ( fairly low impact ) guest OS in your ESX environment .</tokentext>
<sentencetext>http://communities.vmware.com/community/beta/vcserver_linux [vmware.com] Beta, but available and you can run it in a VM on top of ESX so you don't have any additional costs other than the hit of running one extra (fairly low impact) guest OS in your ESX environment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176997</id>
	<title>Re:XenServer worked for us</title>
	<author>00dave99</author>
	<datestamp>1243870380000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>XenServer has some good features, but you really can't compare VMware Server with XenServer.  I have many customers that were impressed to be able to run 4 or 5 VMs on VMware Server.  Once we got them moved to ESX on the same hardware they couldn't believe that they were running 20 to 25 VMs on the same hardware.  That being said back end disk configuration is the most important design consideration on any virtualization product.</htmltext>
<tokenext>XenServer has some good features , but you really ca n't compare VMware Server with XenServer .
I have many customers that were impressed to be able to run 4 or 5 VMs on VMware Server .
Once we got them moved to ESX on the same hardware they could n't believe that they were running 20 to 25 VMs on the same hardware .
That being said back end disk configuration is the most important design consideration on any virtualization product .</tokentext>
<sentencetext>XenServer has some good features, but you really can't compare VMware Server with XenServer.
I have many customers that were impressed to be able to run 4 or 5 VMs on VMware Server.
Once we got them moved to ESX on the same hardware they couldn't believe that they were running 20 to 25 VMs on the same hardware.
That being said back end disk configuration is the most important design consideration on any virutalization product.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28183115</id>
	<title>Re:excellent sales story</title>
	<author>jra</author>
	<datestamp>1243960500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"The X Window System".</p><p>"XWindows" is explicitly deprecated in all the documentation, presumably to make Microsoft happy, though I don't think anyone ever admitted to it in public.</p></htmltext>
<tokenext>" The X Window System " .
" XWindows " is explicitly deprecated in all the documentation , presumably to make Microsoft happy , though I do n't think anyone ever admitted to it in public .</tokentext>
<sentencetext>"The X Window System".
"XWindows" is explicitly deprecated in all the documentation, presumably to make Microsoft happy, though I don't think anyone ever admitted to it in public.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177485</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181413</id>
	<title>Re:excellent sales story</title>
	<author>DuckDodgers</author>
	<datestamp>1243954620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You're making an extremely solid argument for good virtualization, backups, and failover with redeployment.  <br> <br>
You're making a far weaker argument for a proprietary solution that does those things.  If some or all of your environments require Windows, then obviously you need a proprietary solution.   But properly configured, backups, VMs, and fast switching between different setups in BSD jails, Xen, KVM (Linux Kernel-based Virtual Machine, not Keyboard-Video-Mouse switching), and other options give you the same flexibility.   And a competent developer/admin can set them up for less than $15-$20k.</htmltext>
<tokenext>You 're making an extremely solid argument for good virtualization , backups , and failover with redeployment .
You 're making a far weaker argument for a proprietary solution that does those things .
If some or all of your environments require Windows , then obviously you need a proprietary solution .
But properly configured , backups , VMs , and fast switching between different setups in BSD jails , Xen , KVM ( Linux Kernel-based Virtual Machine , not Keyboard-Video-Mouse switching ) , and other options give you the same flexibility .
And a competent developer/admin can set them up for less than $ 15- $ 20k .</tokentext>
<sentencetext>You're making an extremely solid argument for good virtualization, backups, and failover with redeployment.
You're making a far weaker argument for a proprietary solution that does those things.
If some or all of your environments require Windows, then obviously you need a proprietary solution.
But properly configured, backups, VMs, and fast switching between different setups in BSD jails, Xen, KVM (Linux Kernel-based Virtual Machine, not Keyboard-Video-Mouse switching), and other options give you the same flexibility.
And a competent developer/admin can set them up for less than $15-$20k.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177223</id>
	<title>Re:I/O on the free "VMWare Server" sucks</title>
	<author>snookums</author>
	<datestamp>1243872600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There's overhead, but not 10x worse performance unless you're hitting the disk far more in the VM than you were in the native deployment.</p><p>The "gotcha" is that VMWare Server will, by default, use file-backed memory for your VMs, so you can end up in a situation where the VM is "thrashing" but neither the host nor the guest operating system shows any swap activity. The tell-tale sign is that vmstat on the host OS will show massive numbers of buffered input and output blocks (i.e. disk activity) when you're doing things in the VM that should not require that amount of disk throughput.</p><p>A possible solution is:</p><p>1. Move the backing file to tmpfs*<br>2. Increase your mounted tmpfs to cover most of the host machine RAM (I'd say total RAM - 1 GB).<br>3. Allocate RAM to your VMs in such a way that you are not over-committed (total of all VMs not more than the tmpfs size set at step 2).</p><p>*Take a look at the option mainMem.useNamedFile = "FALSE"</p></htmltext>
<tokenext>There 's overhead , but not 10x worse performance unless you 're hitting the disk far more in the VM than you were in the native deployment.The " gotcha " is that VMWare Server will , by default , use file-backed memory for your VMs so that you can get in a situation where the VM is " thrashing " , but neither the host nor guest operating system shows any swap activity .
The tell-tale sign is that a vmstat on the host OS will show massive numbers of buffered input and output blocks ( i.e .
disk activity ) when you 're doing things in the VM which should not require this amount of disk troughput.A possible solution is : 1 .
Move the backing file to tmpfs * 2 .
Increase your mounted tmpfsto cover most of the host machine RAM ( I 'd say total RAM - 1 GB ) .3 .
Allocate RAM to your VMs in such a way that you are not over-committed ( total of all VMs not more than tmpfs size set at step 2 ) .
* Take a look at the option mainMem.useNamedFile = " FALSE "</tokentext>
<sentencetext>There's overhead, but not 10x worse performance unless you're hitting the disk far more in the VM than you were in the native deployment.The "gotcha" is that VMWare Server will, by default, use file-backed memory for your VMs so that you can get in a situation where the VM is "thrashing", but neither the host nor guest operating system shows any swap activity.
The tell-tale sign is that a vmstat on the host OS will show massive numbers of buffered input and output blocks (i.e.
disk activity) when you're doing things in the VM which should not require this amount of disk troughput.A possible solution is:1.
Move the backing file to tmpfs*2.
Increase your mounted tmpfsto cover most of the host machine RAM (I'd say total RAM - 1 GB).3.
Allocate RAM to your VMs in such a way that you are not over-committed (total of all VMs not more than tmpfs size set at step 2).
*Take a look at the option mainMem.useNamedFile = "FALSE"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176937</parent>
</comment>
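A minimal sketch of the tmpfs workaround described in the comment above, for a Linux host. The mount size, VM path, and .vmx filename are illustrative placeholders, not values taken from the comment; check your own RAM budget before copying them.

# Mount /tmp as tmpfs, sized to hold the memory backing files of all running VMs
# (assuming roughly 7 GB can be spared on an 8 GB host).
mount -t tmpfs -o size=7g tmpfs /tmp

# With the VM powered off, tell VMware Server not to use a named .vmem backing file;
# the guest RAM backing then lands in the temp directory (i.e. the tmpfs above).
echo 'mainMem.useNamedFile = "FALSE"' >> "/var/lib/vmware/Virtual Machines/guest/guest.vmx"

# Verify: vmstat on the host should no longer show heavy block I/O while the guest
# is merely touching its own memory.
vmstat 5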
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176919</id>
	<title>OpenVZ &amp; Virtuozzo are my favorite way to go</title>
	<author>pyite69</author>
	<datestamp>1243869840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I would expect that the BSD product is similar in design - basically chroot on steroids.</p></htmltext>
<tokenext>I would expect that the BSD product is similar in design - basically chroot on steroids .</tokentext>
<sentencetext>I would expect that the BSD product is similar in design - basically chroot on steroids.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176971</id>
	<title>I don't think you did your research.</title>
	<author>BagOBones</author>
	<datestamp>1243870140000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>If you are separating similar workloads like web apps and databases, you are probably better off running them within the same OS and database server and separating them via security, as the poster realized.</p><p>However, if you have a variety of services that do not do the same thing, you can really benefit from separating them into virtual machines and having them share common hardware.</p><p>Virtualization also gives you some amazing fault-tolerance options that are consistent across different OSes and services, and that are much easier to manage than individual OS and service clustering options.</p></htmltext>
<tokenext>If you are separating similar work loads like web apps and databases you are probably better off running them within the same os and database server and separating them via security as the poster realized.However if you have a variety of services that do not do the same thing you can really benefit from separating them in virtual machines and have them share common hardware.Virtualization also gives you some amazing fault tolerance options that are consistent across different OS and services that are much easier to manage than individual OS and service clustering options .</tokentext>
<sentencetext>If you are separating similar work loads like web apps and databases you are probably better off running them within the same os and database server and separating them via security as the poster realized.However if you have a variety of services that do not do the same thing you can really benefit from separating them in virtual machines and have them share common hardware.Virtualization also gives you some amazing fault tolerance options that are consistent across different OS and services that are much easier to manage than individual OS and service clustering options.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177543</id>
	<title>Re:excellent sales story</title>
	<author>Night64</author>
	<datestamp>1243875420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Actually, Xen does paravirtualization very well. We use a flavor of it (a major enterprise one, but I'm not paid to name names) with great success in a production environment. In our environment at least, it performs better than ESX on the same hardware.
We don't use Windows servers very much, but this flavor (hint, hint) works very well with Windows in a paravirtualized setup. A little better than ESX, but, as always, your mileage may vary.</htmltext>
<tokenext>Actually , Xen does paravirtualization very well .
We use a flavor of it ( a major enterprise one , but I 'm not paid to tell names ) with great success in a production environment .
On our environment at least , it has a better performance than ESX in the same hardware .
We do n't use Windows servers very much , but this flavor ( hint , hint ) works very well with Windows in a paravirtualized setup .
A little better than ESX , but , as always , your mileage may vary .</tokentext>
<sentencetext>Actually, Xen does paravirtualization very well.
We use a flavor of it (a major enterprise one, but I'm not paid to tell names) with great success in a production environment.
On our environment at least, it has a better performance than ESX in the same hardware.
We don't use Windows servers very much, but this flavor (hint, hint) works very well with Windows in a paravirtualized setup.
A little better than ESX, but, as always, your mileage may vary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179183</id>
	<title>Re:excellent sales story</title>
	<author>leuk\_he</author>
	<datestamp>1243937340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>But you should not overlook the fact that running with no virtualisation at all beats a virtualisation solution on performance 9 times out of 10. If your servers are already loaded to 100\%, virtualisation will only add load.</p><p>If you have multiple lightly loaded servers, you can consolidate those in a virtualised solution and save money.</p><p>If you have servers that are under full load, you do not want to add anything that adds load.</p><p>A jail is one solution, but given that they made major application changes anyway, it could also have been done in the application instead, giving each run a separate configuration set.</p></htmltext>
<tokenext>But you should not overlook the fact that NO virtulisation beats the performance of a virtulisation solution in 9 out of 10 times .
If you load your servers already to 100 \ % virtulisation will only add load.If you have multiple lightly loaded servers you can consolidate those in a virtulized solution and safe money.IF you have some server that are under full load you do not want to add anything that adds load.jail is a solution , but the fact that they did major apllicaiotn changes it could also be done in the application instead , ginving each run a seperate configuration set .</tokentext>
<sentencetext>But you should not overlook the fact that NO virtulisation beats the performance of a virtulisation solution in 9 out of 10 times.
If you load your servers already to 100\% virtulisation will only add load.If you have multiple lightly loaded servers you can consolidate those in a virtulized solution and safe money.IF you have some server that are under full load you do not want to add anything that adds load.jail is a solution, but the fact that they did major apllicaiotn changes it could also be done in the application instead, ginving each run a seperate configuration set.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177001</id>
	<title>Re:Sounds about right</title>
	<author>Anonymous</author>
	<datestamp>1243870380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Agree 100\%. We are a BSD shop, and we have been enjoying jails for quite a while. Sandboxing, virtualizing, security advantages. It's great!</p></htmltext>
<tokenext>Agree 100 \ % .
We are a BSD shop , and we have been enjoying jails for quite a while .
Sandboxing , virtualizing , security advantages .
It 's great !</tokentext>
<sentencetext>Agree 100\%.
We are a BSD shop, and we have been enjoying jails for quite a while.
Sandboxing, virtualizing, security advantages.
It's great!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178173</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>Macka</author>
	<datestamp>1243883520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p># Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical</p><blockquote><div><p>Actually this depends on your virtualization solution</p></div></blockquote></div> </blockquote><p>No it doesn't.  The parent is clearly talking/complaining about VMware, Xen, KVM-type virtualization, and guest OS instances for all of those require their own kernel.  He isn't talking about jail/container solutions (FreeBSD Jails, OpenVZ, Solaris Containers, etc.) or none of his points would make any sense.</p><blockquote><div><p># A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest</p><blockquote><div><p>You can often mount the virtual disks in a HOST OS. No different to needing software to access multiple partitions. As long as the software is available, it's not as big an issue.</p></div></blockquote></div> </blockquote><p>Not without shutting the guest down first.  If you mount a filesystem on a disk/partition twice, and that filesystem is not a specially designed cluster filesystem, and the two OS instances are not part of the same cluster, then you WILL get data corruption.  The parent's point is valid!</p><p>You should have stopped at your list of what virtualization is good and not good for.  You let yourself down after that.</p>
	</htmltext>
<tokenext># Each guest needs its own kernel , so you need to allocate memory and disk space for all these kernels that are in fact identicalActually this depends on your virtualization solution No it does n't .
The parent is clearly talking/complaining about VMware , Xen , Kvm type virtualization , and guest OS instances for all those require their own kernel .
He is n't talking about jails/container solutions ( FreeBSD Jails , OpenVZ , Solaris Containers , etc ) or none of his points would make any sense. # A guest 's filesystem is on a virtual block device , so it 's hard to get at it without running some kind of fileserver on the guestYou can often mount the virtual disks in a HOST OS .
No different to needing software to access multiple partitions .
As long as the software is available , it 's not as big an issue .
Not without shutting the guest down first .
If you mount a filesystem on a disk/partition twice and that filesystem is not a specially designed cluster filesystem , and the two OS instances are not part of the same cluster , then you WILL get data corruption .
The parent 's point is valid ! You should have stopped at your list of what virtualization is good and not good for .
You let yourself down after that .</tokentext>
<sentencetext># Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identicalActually this depends on your virtualization solution No it doesn't.
The parent is clearly talking/complaining about VMware, Xen, Kvm type virtualization, and guest OS instances for all those require their own kernel.
He isn't talking about jails/container solutions (FreeBSD Jails, OpenVZ, Solaris Containers, etc) or none of his points would make any sense.# A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guestYou can often mount the virtual disks in a HOST OS.
No different to needing software to access multiple partitions.
As long as the software is available, it's not as big an issue.
Not without shutting the guest down first.
If you mount a filesystem on a disk/partition twice and that filesystem is not a specially designed cluster filesystem, and the two OS instances are not part of the same cluster, then you WILL get data corruption.
The parent's point is valid !You should have stopped at your list of what virtualization is good and not good for.
You let yourself down after that.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177203</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177679</id>
	<title>Re:Sounds about right</title>
	<author>Just Some Guy</author>
	<datestamp>1243876860000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Oh, I forgot to mention another much-loved jail use: giving applications their own customized execution environment.  Suppose you have some legacy app that requires, say, some ancient version of Perl and a database connector from 1998.  Jails are a great way to sandbox that crufty old environment without forcing those limitations onto the rest of your apps.</htmltext>
<tokenext>Oh , I forgot to mention another much-loved jail use : giving applications their own customized execution environment .
Suppose you have some legacy app that requires , say , some ancient version of Perl and a database connector from 1998 .
Jails are a great way to sandbox that crufty old environment without forcing those limitations onto the rest of your apps .</tokentext>
<sentencetext>Oh, I forgot to mention another much-loved jail use: giving applications their own customized execution environment.
Suppose you have some legacy app that requires, say, some ancient version of Perl and a database connector from 1998.
Jails are a great way to sandbox that crufty old environment without forcing those limitations onto the rest of your apps.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</parent>
</comment>
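A minimal sketch of setting up such a sandboxed environment with a FreeBSD jail, along the lines of the comment above. The jail root, hostname, and IP address are placeholders, and the exact jail(8) invocation differs between FreeBSD releases, so treat this as an outline rather than a recipe.

# Create the jail's directory tree and populate it with a base system
# (e.g. "make installworld DESTDIR=/jails/legacyapp" or extracting release sets),
# then install the ancient Perl and the 1998-era database connector inside it.
mkdir -p /jails/legacyapp

# Classic jail(8) syntax: jail <path> <hostname> <ip-number> <command>
# Everything started this way sees only /jails/legacyapp and the given address.
jail /jails/legacyapp legacyapp.example.org 192.0.2.10 /bin/sh /etc/rc

# The crufty environment stays confined to its jail and never leaks its
# interpreter or library versions onto the host or the other applications.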
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28192993</id>
	<title>Er, nope, it's the way VMware does RAM in a Linux host</title>
	<author>IBitOBear</author>
	<datestamp>1244020620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I covered this in more detail in a top-level post below. The actual performance problem for most VMware machines is typically not the virtual disk. The way VM memory is backed by a mapped regular file, combined with the way large-memory VMs interact with the overcommit\_ratio on a Linux host OS (i.e. when running VMware under Linux, regardless of what the VM is running internally), produces almost all of the slowdowns.</p><p>The way VMWare does NAT through a userspace daemon isn't the best thing on the planet either.</p><p>(read the how-to post below for the remedies instead of looking here for re-pasted text, I'm not \_that\_ much of a karma whore 8-)</p></htmltext>
<tokenext>I covered this in more detail in a top level post below .
The actual performance problems for most vmWare machines ( typically ) is not the virtual disk .
The way vm machine memory is backed by a mapped regular file , combined with the way large-memory VMs interact with the overcommit \ _ratio on a Linux host OS ( e.g .
running vmware under linux , regardless of what the vm is running internally ) produce almost all of the slowdowns.The way VMWare does NAT through a userspace daemon is n't the best thing on the planet either .
( read the how-to post below for the remedies instead of looking here for re-pasted text , I 'm not \ _that \ _ much of a karma whore 8- )</tokentext>
<sentencetext>I covered this in more detail in a top level post below.
The actual performance problems for most vmWare machines (typically) is not the virtual disk.
The way vm machine memory is backed by a mapped regular file, combined with the way large-memory VMs interact with the overcommit\_ratio on a Linux host OS (e.g.
running vmware under linux, regardless of what the vm is running internally) produce almost all of the slowdowns.The way VMWare does NAT through a userspace daemon isn't the best thing on the planet either.
(read the how-to post below for the remedies instead of looking here for re-pasted text, I'm not \_that\_ much of a karma whore 8-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799</id>
	<title>free beats fee most of the time</title>
	<author>Anonymous</author>
	<datestamp>1243869060000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>This is slightly off the server virtualization topic, but I had a similar experience with LTSP and some costly competitors.  Using LTSP we were able to put up 5X the number of stable Linux desktops on the same hardware.  I'd tell every organization out there to do a pilot bake-off as often as possible.  It won't happen all the time, but I suspect that more often than not, the free, open solution, properly set up, will beat the slickly marketed, closed proprietary solution.</htmltext>
<tokenext>This is slightly off the server virtualization topic , but I had a similar experience with LTSP and some costly competitors .
Using LTSP we were able to put up 5X the number of stable Linux desktops on the same hardware .
I 'd tell every organization out there to do a pilot bake-off as often as possible .
It wo n't happen all the time , but I suspect that more often than not , the free open solution , properly setup will beat the slickly marketed , closed proprietary solution .</tokentext>
<sentencetext>This is slightly off the server virtualization topic, but I had a similar experience with LTSP and some costly competitors.
Using LTSP we were able to put up 5X the number of stable Linux desktops on the same hardware.
I'd tell every organization out there to do a pilot bake-off as often as possible.
It won't happen all the time, but I suspect that more often than not, the free open solution, properly setup will beat the slickly marketed, closed proprietary solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177915</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>AcidPenguin9873</author>
	<datestamp>1243879620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Your points are all valid, but they are some of the areas that virtualization systems have addressed in the past 10 years (or longer if you were running an IBM system).<p><div class="quote"><p>Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical</p></div><p>I'm pretty sure VMWare can detect when the same block of the same file is mapped into multiple guests, and share the physical page.  Plus, the kernel's memory image is small compared to, say, the database server you're running on it.  I guess there's overhead like an extra set of page tables (either nested page tables managed by the guest, or shadow page tables managed by the host).  Overall a small effect I think.</p><p><div class="quote"><p>TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.</p></div><p>Any context switch between two userspace programs in a non-virtualized system needs a TLB flush too (BSD jails included).  Or, if you're using a processor that has a tagged TLB, you don't need to flush it, but your virtualized guest gets the no-TLB-flush benefit too.</p><p><div class="quote"><p>A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest</p></div><p>Again I don't think this is a huge deal.  Aren't there drivers to allow a host to see inside a guest's block device and/or filesystem?</p><p><div class="quote"><p>Memory management is an absolute clusterfuck.</p></div><p>In a naive hypervisor, yes.  In more mature hypervisors, not really.  See the following articles for solutions on fully virtualized and paravirtualized guests, respectively:
<a href="http://www.usenix.org/events/osdi02/tech/waldspurger/waldspurger\_html/node6.html" title="usenix.org">http://www.usenix.org/events/osdi02/tech/waldspurger/waldspurger\_html/node6.html</a> [usenix.org] <br>
<a href="http://lwn.net/Articles/198380/" title="lwn.net">http://lwn.net/Articles/198380/</a> [lwn.net]</p></div>
	</htmltext>
<tokenext>Your points are all valid , but they are some of the areas that virtualization systems have addressed in the past 10 years ( or longer if you were running an IBM system ) .Each guest needs its own kernel , so you need to allocate memory and disk space for all these kernels that are in fact identicalI 'm pretty sure VMWare can detect when the same block of the same file is mapped into multiple guests , and share the physical page .
Plus , the kernel 's memory image is small compared to , say , the database server you 're running on it .
I guess there 's overhead like an extra set of page tables ( either nested page tables managed by the guest , or shadow page tables managed by the host ) .
Overall a small effect I think.TLB flushes kill performance .
Recent x86 CPUs address the problem to some degree , but it 's still a problem.Any context switch between two userspace programs in a non-virtualized system needs a TLB flush too ( BSD jails included ) .
Or , if you 're using a processor that has a tagged TLB , you do n't need to flush it , but your virtualized guest gets the no-TLB-flush benefit too.A guest 's filesystem is on a virtual block device , so it 's hard to get at it without running some kind of fileserver on the guestAgain I do n't think this is a huge deal .
Are n't there drivers to allow a host to see inside a guest 's block device and/or filesystem ? Memory management is an absolute clusterfuck.In a naive hypervisor , yes .
In more mature hypervisors , not really .
See the following articles for solutions on fully virtualized and paravirtualized guests , respectively : http : //www.usenix.org/events/osdi02/tech/waldspurger/waldspurger \ _html/node6.html [ usenix.org ] http : //lwn.net/Articles/198380/ [ lwn.net ]</tokentext>
<sentencetext>Your points are all valid, but they are some of the areas that virtualization systems have addressed in the past 10 years (or longer if you were running an IBM system).Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identicalI'm pretty sure VMWare can detect when the same block of the same file is mapped into multiple guests, and share the physical page.
Plus, the kernel's memory image is small compared to, say, the database server you're running on it.
I guess there's overhead like an extra set of page tables (either nested page tables managed by the guest, or shadow page tables managed by the host).
Overall a small effect I think.TLB flushes kill performance.
Recent x86 CPUs address the problem to some degree, but it's still a problem.Any context switch between two userspace programs in a non-virtualized system needs a TLB flush too (BSD jails included).
Or, if you're using a processor that has a tagged TLB, you don't need to flush it, but your virtualized guest gets the no-TLB-flush benefit too.A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guestAgain I don't think this is a huge deal.
Aren't there drivers to allow a host to see inside a guest's block device and/or filesystem?Memory management is an absolute clusterfuck.In a naive hypervisor, yes.
In more mature hypervisors, not really.
See the following articles for solutions on fully virtualized and paravirtualized guests, respectively:
http://www.usenix.org/events/osdi02/tech/waldspurger/waldspurger\_html/node6.html [usenix.org] 
http://lwn.net/Articles/198380/ [lwn.net]
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28184285</id>
	<title>Re:excellent sales story</title>
	<author>Bourbonium</author>
	<datestamp>1243965300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is something that is hammered at over and over in the comments at the end of the original article, as well as here on /.  They apparently did no research into virtualization before launching into this ill-advised kludge.  It took them so long to figure out that they were using the wrong technology, they could have saved themselves a ton of work just by doing some rudimentary investigation/evaluation of multiple virtualization methods before going down the VMWare Server road.  There are better "free" options than the one they chose, and probably some more appropriate options than the BSD Jails solution they eventually used.  Or they could have paid a consultant to advise them in the first place if they weren't such cheapskates.  I'm a notorious skinflint myself, but I know that doing your homework in advance is a better use of resources than the trial-and-error fiasco they endured.  And they did this in a <i>production environment</i> with their customers' live data!  Something tells me this story will <i>not</i> drive new business to their door.</p></htmltext>
<tokenext>This is something that is hammered at over and over in the comments at the end of the original article , as well as here on / .
They apparently did no research into virtualization before launching into this ill-advised kludge .
It took them so long to figure out that they were using the wrong technology , they could have saved themselves a ton of work just by doing some rudimentary investigation/evaluation of multiple virtualization methods before going down the VMWare Server road .
There are better " free " options than the one they chose , and probably some more appropriate options than the BSD Jails solution they eventually used .
Or they could have paid a consultant to advise them in the first place if they were n't such cheapskates .
I 'm a notorious skinflint myself , but I know that doing your homework in advance is a better use of resources than the trial-and-error fiasco they endured .
And they did this in a production environment with their customers ' live data !
Something tells me this story will not drive new business to their door .</tokentext>
<sentencetext>This is something that is hammered at over and over in the comments at the end of the original article, as well as here on /.
They apparently did no research into virtualization before launching into this ill-advised kludge.
It took them so long to figure out that they were using the wrong technology, they could have saved themselves a ton of work just by doing some rudimentary investigation/evaluation of multiple virtualization methods before going down the VMWare Server road.
There are better "free" options than the one they chose, and probably some more appropriate options than the BSD Jails solution they eventually used.
Or they could have paid a consultant to advise them in the first place if they weren't such cheapskates.
I'm a notorious skinflint myself, but I know that doing your homework in advance is a better use of resources than the trial-and-error fiasco they endured.
And they did this in a production environment with their customers' live data!
Something tells me this story will not drive new business to their door.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182341</id>
	<title>Re:Terrible name</title>
	<author>Just Some Guy</author>
	<datestamp>1243957860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>'Jail' is such a terrible metaphor to choose for a product. I want a happy metaphor like 'sandbox', not something redolent of brutality, despair and iron sorrows.</p></div><p>I want our applications to be too freaking terrified to even consider trying to escape.</p></div>
	</htmltext>
<tokenext>'Jail ' is such a terrible metaphor to choose for a product .
I want a happy metaphor like 'sandbox ' , not something redolent of brutality , despair and iron sorrows.I want our applications to be too freaking terrified to even consider trying to escape .</tokentext>
<sentencetext>'Jail' is such a terrible metaphor to choose for a product.
I want a happy metaphor like 'sandbox', not something redolent of brutality, despair and iron sorrows.I want our applications to be too freaking terrified to even consider trying to escape.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179249</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28200643</id>
	<title>Re:Sounds about right</title>
	<author>sanqui</author>
	<datestamp>1244021340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In what sense would "quite a few Linux distros [...] run perfectly" in this context?</p></htmltext>
<tokenext>In what sense would " quite a few Linux distros [ ... ] run perfectly " in this context ?</tokentext>
<sentencetext>In what sense would "quite a few Linux distros [...] run perfectly" in this context?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177169</id>
	<title>Re:Sounds about right</title>
	<author>d3matt</author>
	<datestamp>1243872060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Just as a curiosity...  Have you guys ever used jails for cross-compiles similar to <a href="http://www.scratchbox.org/" title="scratchbox.org" rel="nofollow">scratchbox</a> [scratchbox.org]?</htmltext>
<tokenext>Just as a curiosity... Have you guys ever used jails for cross-compiles similar to scratchbox [ scratchbox.org ] ?</tokentext>
<sentencetext>Just as a curiosity...  Have you guys ever used jails for cross-compiles similar to scratchbox [scratchbox.org]?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677</id>
	<title>excellent sales story</title>
	<author>OrangeTide</author>
	<datestamp>1243868040000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.</p><p>When applied to a problem it seems to create more performance issues than it solves. But it can make managing lots of services easier. I think that's the primary goal of these VMware-like products.</p><p>Things like Xen take a different approach and seem to have better performance for I/O-intensive applications. But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.</p><p>VMware is more like how the mainframe world has been slicing up mainframes into little bits to provide highly isolated applications for various services. VMware has not caught up to the capabilities and scalability of what IBM has been offering for decades, even though the raw CPU performance of a PC is better than that of a mid-range mainframe at 1\% of the cost (or less). But scalability and performance are two separate things, even though we would like both.</p></htmltext>
<tokenext>Virtualization is an excellent story to sell .
It is a process that can be applied to a wide range of problems.When applied to a problem it seems to create more performance issues than it solves .
But it can make managing lots of services easier .
I think that 's the primary goal to these VMware-like products.Things like Xen take a different approach and seem to have better performance for I/O intensive applications .
But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware 's monitor.VMware is more like how the Mainframe world has been slicing up mainframes into little bits to provide highly isolated applications for various services .
VMware has not caught up to the capabilities and scalability to things IBM has been offering for decades though .
Even though the raw CPU performance of a PC is better than a mid-range mainframe at 1 \ % of the cost ( or less ) .
But scalability and performance are two separate things , even though we would like both .</tokentext>
<sentencetext>Virtualization is an excellent story to sell.
It is a process that can be applied to a wide range of problems.When applied to a problem it seems to create more performance issues than it solves.
But it can make managing lots of services easier.
I think that's the primary goal to these VMware-like products.Things like Xen take a different approach and seem to have better performance for I/O intensive applications.
But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.VMware is more like how the Mainframe world has been slicing up mainframes into little bits to provide highly isolated applications for various services.
VMware has not caught up to the capabilities and scalability to things IBM has been offering for decades though.
Even though the raw CPU performance of a PC is better than a mid-range mainframe at 1\% of the cost (or less).
But scalability and performance are two separate things, even though we would like both.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715</id>
	<title>Re:excellent sales story</title>
	<author>Anonymous</author>
	<datestamp>1243877340000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext>The OP is probably (still can't get to it...) from someone who uses FreeBSD, which means they <b>WILL NOT</b> be impressed with ESX/ESXi, which <b>REQUIRES WINDOWS</b> to run and admin. VMware has ignored the FreeBSD and Linux user base for its "enterprise products".</htmltext>
<tokenext>The OP is probably ( Still ca n't get to it... ) from someone who uses FreeBSD , which means they WILL NOT be impressed with ESX/ESXi which REQUIRES WINDOWS to run and admin .
VMware has ignored the FreeBSD and Linux user base for it 's " enterprise products " .</tokentext>
<sentencetext>The OP is probably (Still can't get to it...) from someone who uses FreeBSD, which means they WILL NOT be impressed with ESX/ESXi which REQUIRES WINDOWS to run and admin.
VMware has ignored the FreeBSD and Linux user base for it's "enterprise products".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178169</id>
	<title>Re:excellent sales story</title>
	<author>aarggh</author>
	<datestamp>1243883400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I would actually say that the day ESXi became free, it made VMware Server completely obsolete for ANYTHING other than initial testing or building.</p><p>As you stated, what this article describes is, on every level, a ridiculously poorly designed implementation. I don't get into flame wars as to what's the better OS, etc.; as far as I'm concerned, whatever is best at doing what I need it to do is the solution I aim for, and with ESX I must admit I have been extremely happy with the time and resource savings, as well as the GREATLY reduced management overhead. Throw in the HA, DRS, vMotion, and disaster recovery, and I now sleep a lot better at night, and get far fewer calls!</p></htmltext>
<tokenext>I would actually say that the day ESXi became free , it made server completely obsolete for ANYTHING other than initial testing or building.As you stated , this article really on every level is a ridicuously poorly designed implimentation , I do n't get into flame wars as to what 's the better OS , etc , etc , so far as I 'm concerned whatever is best at doing what I need it to is the solution I aim for , and with ESX I must admit I have been extremely happy with the time and resource savings , as well as the GREATLY reduced management overhead .
Throw in the HA , DRS , vMotion , and disaster recovery , and I now sleep a lot better at night , and get far fewer calls !</tokentext>
<sentencetext>I would actually say that the day ESXi became free, it made server completely obsolete for ANYTHING other than initial testing or building.As you stated, this article really on every level is a ridicuously poorly designed implimentation, I don't get into flame wars as to what's the better OS, etc, etc, so far as I'm concerned whatever is best at doing what I need it to is the solution I aim for, and with ESX I must admit I have been extremely happy with the time and resource savings, as well as the GREATLY reduced management overhead.
Throw in the HA, DRS, vMotion, and disaster recovery, and I now sleep a lot better at night, and get far fewer calls!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28192917</id>
	<title>howto: Rev-UP VMWare Server/Wkstn in Linux Host OS</title>
	<author>IBitOBear</author>
	<datestamp>1244019780000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>Okay, I have been through this at work several times recently. There are two major slow-downs in the default (but reasonably bullet-proof) VMWare machines running on a Linux \_host\_.</p><p>1) If you are doing NAT attachment, \_don't\_ use the vmware NAT daemon. It pulls each packet into userspace before deciding how to forward/NAT it. So don't use the default NAT device (e.g. vmnet8). Add a new custom "host only" adapter (e.g. vmnet9 or higher), and then use regular firewalling (ip\_forward = 1 and iptables rules) so that the packets just pass through the Linux kernel and netfilter once. (you can use vmnet1 in a pinch but blarg! 8-)</p><p>1a) If you want/need to use the default NAT engine (e.g. vmnet8), then put the NAT daemon into a real-time scheduling group with "chrt --rr --pid 22 $(pgrep vmnet-natd)". Not quite as good as staying in the kernel all the way to your physical media.</p><p>1b) If you do item one, don't use vmware-dhcpd; configure your regular dhcpd/dhcpd3 etc. daemon, because it will more easily integrate with your system as a whole.</p><p>(in other words, vmware-dhcpd is not magic, and vmware-natd is \_super\_ expensive)</p><p>2) VMWare makes a /path/to/your/machine/machine\_name.vmem file, which is a memory-mapped file that represents the RAM in the guest. This is like having the whole VM living forever in your swap space. It's great there if you want to take a lot of snapshots and want to be more restart/crash/sleep safe. It \_sucks\_ for performance. Set "mainMem.useNamedFile = FALSE" in your .vmx files (you have to edit the files by hand). This will move the .vmem file into your temp directory and unlink it so it's anonymous and self-deleting. It slows down snapshots but...</p><p>2a) Make \_SURE\_ your /tmp file system is a mounted tmpfs with a size=whatever mount option that will let the tmpfs grow to at least 10\% larger than the (sum of the) memory size of (all of the) virtual machine(s) you are going to run at once. This will cause the "backing" of the virtual machine RAM to be actual RAM and you will get rational machine RAM speed.</p><p>2b) If you want/need to, there is a tmpDirectory=/wherever directive to say where those files go. It gangs with useNamedFile=FALSE, and you can set up dedicated tmpfs files to back the machines specially/separately.</p><p>2c) If you want/need the backing, or have a "better" drive you want to use with real backing, you can use the above in variations to move this performance limiter onto a different spindle than your .vmdk (virtual disk) files.</p><p>3) No matter what, your virtual memory file counts against your overcommit\_ratio (/proc/sys/vm/overcommit\_ratio) compared to your RAM. It defaults to 50\% for \_all\_ the accounted facilities system-wide. If you have 4 GB of RAM and try to run a 3 GB VM while leaving your overcommit\_ratio at 50, you will suffer some unintended consequences in terms of paging/swapping pressure. Adjust your ratio to something like 75 or 80 percent if your total VM memory size is 60 to 65 percent of real RAM.  \_DON'T\_ set this number to more than 85\% unless you have experimented with system stability at higher numbers. It can be \_quite\_ surprising.</p><p>Anyway, those are the three things (in many parts) you need to know to make VMWare work its best on your Linux host OS. 
It doesn't matter what the guest OS is, always consider the above.</p><p>Disclaimer: I don't work for VMWare etc.; this is all practical knowledge gained by trial and error. I offer no warranty, but it will work...</p></htmltext>
<tokenext>Okay , I have been through this at work several times recently .
There are two major slow-downs in the default ( but reasonably bullet-proof ) VMWare machines running on a Linux \ _host \ _.1 ) If you are doing NAT attachment \ _do n't \ _ use the vmware NAT daemon .
It pulls each packet into userspace before deciding how to forward/nat it .
So do n't use the default nat device ( e.g .
vmnet8 ) . Add a new custom made " host only " adapter ( e.g .
vmnet9-or-more ) by adding another adapter , and then use regular firewalling ( ip \ _forward = 1 and iptables rules ) so that the packets just pass through the Linux kernel and netfilters once .
( you can use vmnet1 in a pinch but blarg !
8- ) 1a ) If you want/need to use the default nat engine ( e.g .
vmnet8 ) then put the nat daemon into a real-time scheduling group with " chrt --rr --pid 22 $ ( pgrep vmnet-natd ) " .
Not quite a good as staying in the kernel all the way to your physical media.1b ) if you do item one , do n't use the vmware-dhcpd , configure your regular dhcpd/dhcpd3 etc daemon because it will more easily integrate with your system as a whole .
( in other words , vmware-dhcpd is not magic , and vmware-natd is \ _super \ _ expensive ) 2 ) VMWare makes a /path/to/your/machine/machine \ _name.vmem file , which is a memory mapped file that represents the RAM in the guest .
This is like having the whole vm living forever in your swap space .
It 's great there if you want to take a lot of snapshots and want to be more restart/crash/sleep safe .
It \ _sucks \ _ for performance .
If you use " mainmem.usenamedfile = FALSE " in your .vmx files .
( you have to edit the files by hand ) .
This will move the .vmem file into your temp directory and unlink it so it 's anonymous and self-deleting .
It slows down snapshots but...2a ) Make \ _SURE \ _ your /tmp file system is a mounted tmpfs with a size = whatever mount option that will let the tmpfs grow to at least 10 \ % larger than the ( sum of the ) memory size of ( all of the ) vritual machine ( s ) you are going to run at once .
This will cause the " backing " of the virtual machine RAM to be actual RAM and you will get rational machine RAM speed.2b ) If you want/need to , there is a tmpDirectory = /wherever diretive to say where those files go .
It gangswith the usenamedfile = FLASE and you can set up dedicated tmpfs files to back the machines specially/separately.2c ) If you want/need the backing or have a " better " drive you want to use with real backing , you can use the above in variations to move this performance limiter onto different spindle than your .vmd ( virtual disk files ) .3 ) No matter what , your virtual memory file counts against your overcommit \ _ratio ( /proc/sys/vm/overcommit \ _ratio ) compared to your ram .
It defaults at 50 \ % for \ _all \ _ the accounted facilities system-wide .
If you have 4Gig RAM and try to run a 3G vm while leaving your overcommit \ _ratio at 50 , you will suffer some unintended consequences in terms of paging/swapping pressure .
Ajust your ratio to like 75 or 80 percent if your total VM memory size is 60 to 65 percent of real ram .
\ _DONT \ _ set this nubmer to more than 85 \ % unless you have experimented with the system stability at higher numbers .
It can be \ _quite \ _ surprising.Anyway , that 's the three things ( in many parts ) you need to know to make VMWare work its best on your linux host OS .
It does n't matter what the Guest OS is , always consider the above.Disclaimer : I do n't work for VMWare etc , this is all practical knowledge and trial-n-error gained knowledge .
I offer no warranty , but it will work.. .</tokentext>
<sentencetext>Okay, I have been through this at work several times recently.
There are two major slow-downs in the default (but reasonably bullet-proof) VMWare machines running on a Linux \_host\_.1) If you are doing NAT attachment \_don't\_ use the vmware NAT daemon.
It pulls each packet into userspace before deciding how to forward/nat it.
So don't use the default nat device (e.g.
vmnet8). Add a new custom made "host only" adapter (e.g.
vmnet9-or-more) by adding another adapter, and then use regular firewalling (ip\_forward = 1 and iptables rules) so that the packets just pass through the Linux kernel and netfilters once.
(you can use vmnet1 in a pinch but blarg!
8-)1a) If you want/need to use the default nat engine (e.g.
vmnet8) then put the nat daemon into a real-time scheduling group with "chrt --rr --pid 22 $(pgrep vmnet-natd)".
Not quite a good as staying in the kernel all the way to your physical media.1b) if you do item one, don't use the vmware-dhcpd, configure your regular dhcpd/dhcpd3 etc daemon because it will more easily integrate with your system as a whole.
(in other words, vmware-dhcpd is not magic, and vmware-natd is \_super\_ expensive)2) VMWare makes a /path/to/your/machine/machine\_name.vmem file, which is a memory mapped file that represents the RAM in the guest.
This is like having the whole vm living forever in your swap space.
It's great there if you want to take a lot of snapshots and want to be more restart/crash/sleep safe.
It \_sucks\_ for performance.
If you use "mainmem.usenamedfile=FALSE" in your .vmx files.
(you have to edit the files by hand).
This will move the .vmem file into your temp directory and unlink it so it's anonymous and self-deleting.
It slows down snapshots but...2a) Make \_SURE\_ your /tmp file system is a mounted tmpfs with a size=whatever mount option that will let the tmpfs grow to at least 10\% larger than the (sum of the) memory size of (all of the) vritual machine(s) you are going to run at once.
This will cause the "backing" of the virtual machine RAM to be actual RAM and you will get rational machine RAM speed.2b) If you want/need to, there is a tmpDirectory=/wherever diretive to say where those files go.
It gangswith the usenamedfile=FLASE and you can set up dedicated tmpfs files to back the machines specially/separately.2c) If you want/need the backing or have a "better" drive you want to use with real backing, you can use the above in variations to move this performance limiter onto different spindle than your .vmd (virtual disk files).3) No matter what, your virtual memory file counts against your overcommit\_ratio (/proc/sys/vm/overcommit\_ratio) compared to your ram.
It defaults at 50\% for \_all\_ the accounted facilities system-wide.
If you have 4Gig RAM and try to run a 3G vm while leaving your overcommit\_ratio at 50, you will suffer some unintended consequences in terms of paging/swapping pressure.
Ajust your ratio to like 75 or 80 percent if your total VM memory size is 60 to 65 percent of real ram.
\_DONT\_ set this nubmer to more than 85\% unless you have experimented with the system stability at higher numbers.
It can be \_quite\_ surprising.Anyway, that's the three things (in many parts) you need to know to make VMWare work its best on your linux host OS.
It doesn't matter what the Guest OS is, always consider the above.Disclaimer: I don't work for VMWare etc, this is all practical knowledge and trial-n-error gained knowledge.
I offer no warranty, but it will work...</sentencetext>
</comment>
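A condensed, minimal sketch of the NAT and overcommit steps from the how-to comment above, assuming a Linux host running VMware Server or Workstation (the memory-backing step is sketched after the earlier tmpfs comment). The interface name, subnet, priority, and ratio below are illustrative placeholders or examples from the comment, not prescriptions; adapt them to your own setup.

# 1) Host-only adapter plus kernel NAT instead of the userspace vmware-natd:
#    enable forwarding and masquerade the guest subnet (addresses are placeholders).
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.99.0/24 -o eth0 -j MASQUERADE

# 1a) If you must keep vmware-natd, move it to real-time round-robin scheduling
#     (priority 22 as in the comment; assumes pgrep returns a single PID).
chrt --rr --pid 22 $(pgrep vmnet-natd)

# 3) Relax the overcommit ratio when total guest RAM is a large share of real RAM;
#    80 is just an example within the 75-85% band the comment suggests.
echo 80 > /proc/sys/vm/overcommit_ratio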
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909</id>
	<title>Re:free beats fee most of the time</title>
	<author>Anonymous</author>
	<datestamp>1243933860000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Great... but what's LTSP?</p><p>Why do sysadmins assume that everyone else is also a sysadmin who bothers to memorize all these stupid acronyms?</p><p>Sure, I googled it, and I hope you meant "Linux Terminal Server Project". But why not just say so immediately?! Most people won't bother listening to what you have to say if they need to use a search engine to figure out key pieces of information just to understand the context of your words!</p></htmltext>
<tokenext>Great... but what 's LTSP ? Why do sysadmins assume that everyone else is also a sysadmin who bothers to memorize all these stupid acronyms ? Sure , I googled it , and I hope you meant " Linux Terminal Server Project " .
But Why not just say so immediately ? !
Most people wo n't bother listening to what you have to say if they need too use a search engine to figure out key pieces of information just to understand the context of your words !</tokentext>
<sentencetext>Great... but what's LTSP?Why do sysadmins assume that everyone else is also a sysadmin who bothers to memorize all these stupid acronyms?Sure, I googled it, and I hope you meant "Linux Terminal Server Project".
But Why not just say so immediately?!
Most people won't bother listening to what you have to say if they need too use a search engine to figure out key pieces of information just to understand the context of your words!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177375</id>
	<title>I've seen this before</title>
	<author>bertok</author>
	<datestamp>1243873980000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>I've seen similar hideous slowdowns on ESX before for database workloads, and it's not VMware's fault.</p><p>This kind of slowdown is almost always because of badly written, chatty applications that use the database one row at a time, instead of simply executing a query.</p><p>I once benchmarked a Microsoft reporting tool on bare metal compared to ESX, and it ran 3x slower on ESX. The fault was that it was reading a 10M-row database one row at a time, and performing a table join in the client VB code instead of on the server. I tried running the exact same query as a pure T-SQL join, and it was something like 1000x faster - except now the ESX box was only 5\% slower instead of 3x slower.</p><p>The issue is that ESX has a small overhead for switching between VMs, and also a small overhead for establishing a TCP connection. The throughput is good, but it does add a few hundred microseconds of latency, all up. You get similar latency if your physical servers are in a datacenter environment and are separated by a couple of switches or a firewall. If you can't handle sub-millisecond latencies, it's time to revisit your application architecture!</p></htmltext>
<tokenext>I 've seen similar hideous slowdowns on ESX before for database workloads , and it 's not VMware 's fault.This kind of slowdown is almost always because of badly written chatty applications that use the database one-row-at-a-time , instead of simply executing a query.I once benchmarked a Microsoft reporting tool on bare metal compared to ESX , and it ran 3x slower on ESX .
The fault was that it was reading a 10M row database one row at a time , and performing a table join in the client VB code instead of the server .
I tried running the exact same query as a pure T-SQL join , and it was something like 1000x faster - except now the ESX box was only 5 % slower instead of 3x slower.The issue is that ESX has a small overhead for switching between VMs , and also a small overhead for establishing a TCP connection .
The throughput is good , but it does add a few hundred microseconds of latency , all up .
You get similar latency if your physical servers are in a datacenter environment and are separated by a couple of switches or a firewall .
If you ca n't handle sub-millisecond latencies , it 's time to revisit your application architecture !</tokentext>
<sentencetext>I've seen similar hideous slowdowns on ESX before for database workloads, and it's not VMware's fault.This kind of slowdown is almost always because of badly written chatty applications that use the database one-row-at-a-time, instead of simply executing a query.I once benchmarked a Microsoft reporting tool on bare metal compared to ESX, and it ran 3x slower on ESX.
The fault was that it was reading a 10M row database one row at a time, and performing a table join in the client VB code instead of the server.
I tried running the exact same query as a pure T-SQL join, and it was something like 1000x faster - except now the ESX box was only 5% slower instead of 3x slower.The issue is that ESX has a small overhead for switching between VMs, and also a small overhead for establishing a TCP connection.
The throughput is good, but it does add a few hundred microseconds of latency, all up.
You get similar latency if your physical servers are in a datacenter environment and are separated by a couple of switches or a firewall.
If you can't handle sub-millisecond latencies, it's time to revisit your application architecture!</sentencetext>
</comment>
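A minimal sketch (not from bertok's comment; the tables, columns, and data are hypothetical) of the two access patterns he contrasts, using Python's standard sqlite3 module: the chatty version pays a round trip per row and joins in client code, so a few hundred microseconds of extra latency per call multiplies by the row count, while the single-query version lets the database engine do the join in one call.

```python
# Illustrative only: schema and names are made up for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")

def report_chatty():
    """One query per row; the join happens in client code."""
    rows = []
    for order_id, customer_id, total in conn.execute(
            "SELECT id, customer_id, total FROM orders"):
        # Every iteration is another round trip, so per-call latency adds up.
        match = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
        rows.append((order_id, match[0] if match else None, total))
    return rows

def report_joined():
    """A single query; the join runs inside the database engine."""
    return conn.execute("""
        SELECT o.id, c.name, o.total
        FROM orders AS o LEFT JOIN customers AS c ON c.id = o.customer_id
    """).fetchall()
```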
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178619</id>
	<title>There can be huge differences in performance</title>
	<author>nickh01uk</author>
	<datestamp>1243974060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>There's a nice little article <a href="http://360is.com/cgi-bin/download.pl?doc=360is-CS-Multimedia.pdf" title="360is.com" rel="nofollow">here</a> [360is.com] (basic reg. required) contrasting VMware and Citrix XenServer, where the end user was forced to abandon VMware (their default choice) after suffering performance problems and after 6 months of back and forth with tech support and engineering at the vendor. In the end XenServer delivered 2x the real-world performance on identical hardware with a default install. Not all workloads are equally well virtualized!


N.</htmltext>
<tokenext>There 's a nice little article here [ 360is.com ] ( basic reg .
required ) contrasting VMware and Citrix XenServer , where the end user was forced to abandon VMware ( their default choice ) after suffering performance problems and after 6 months of back and forth with tech support and engineering at the vendor .
In the end XenServer delivered 2x the real world performance on identical hardware with a default install .
Not all workloads are equally well virtualized !
N .</tokentext>
<sentencetext>There's a nice little article here [360is.com] (basic reg.
required) contrasting VMware and Citrix XenServer, where the end user was forced to abandon VMware (their default choice) after suffering performance problems and after 6 months of back and forth with tech support and engineering at the vendor.
In the end XenServer delivered 2x the real world performance on identical hardware with a default install.
Not all workloads are equally well virtualized!
N.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182503</id>
	<title>Wait, What?</title>
	<author>ConallB</author>
	<datestamp>1243958400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>They were running their outfit on VMWare Server - as in the free-runs-on/in-a-desktop version.

So basically they benchmarked a virtual machine running on top of a full-blown OS running on physical hardware.

Then they switched to running a virtualised environment on physical hardware.

I am amazed that they only got a 10-fold increase!

Seriously, try ESXi, the free hypervisor from VMWare, and run the benchmarks again.

The author really should do more research before slating vmware.</htmltext>
<tokenext>They were running their outfit on VMWare Server - as in the free-runs-on/in-a-desktop version .
So basically they benchmarked a virtual machine running on top of a full blown OS running on physical hardware .
Then they switched to running a virtualised environment on physical hardware .
I am amazed that they only got a 10 fold increase !
Seriously , try ESXi , the free hypervisor from VMWare and run the benchmarks again .
The author really should do more research before slating vmware .</tokentext>
<sentencetext>They were running their outfit on VMWare Server - as in the free-runs-on/in-a-desktop version.
So basically they benchmarked a virtual machine running on top of a full blown OS running on physical hardware.
Then they switched to running a virtualised environment on physical hardware.
I am amazed that they only got a 10 fold increase!
Seriously, try ESXi, the free hypervisor from VMWare and run the benchmarks again.
The author really should do more research before slating vmware.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176969</id>
	<title>Re:XenServer worked for us</title>
	<author>Anonymous</author>
	<datestamp>1243870140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Shouldn't you be comparing XenServer to ESX instead of VMware's free hosted virtualization product?  I don't see how the comparison here is fair.  It's like saying Mercedes' Smart Car is too slow so you went to a BMW M3.</p></htmltext>
<tokenext>Should n't you be comparing XenServer to ESX instead of VMware 's free hosted virtualization product ?
I do n't see how the comparison here is fair .
It 's like saying Mercedes ' Smart Car is too slow so you went to a BMW M3 .</tokentext>
<sentencetext>Shouldn't you be comparing XenServer to ESX instead of VMware's free hosted virtualization product?
I don't see how the comparison here is fair.
It's like saying Mercedes' Smart Car is too slow so you went to a BMW M3.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177705</id>
	<title>Re:I don't think you did your research.</title>
	<author>Anonymous</author>
	<datestamp>1243877100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I've used UML, qemu, etc., and eventually I ditched Linux for server-level stuff for the sole reason that Linux doesn't have BSD jails the way FreeBSD does. (I tried the vserver project, but it was overly convoluted and not supported. Still have a legacy Linux box that is "stuck" with vserver code.)</p><p>I'm glad, as it turns out, that FreeBSD has better performance for server-level tasks anyway. I still use virtualization when I need to run alternate OSes, of course. (I found that when using virtualization, you should give it a virtual disk on tmpfs for virtual swap.)</p><p>Jails are FAR better in most cases: you can share memory, you don't need to run another instance of the kernel, and unlike chroot, processes are hidden from each other. (And most FreeBSD stuff doesn't need the /proc filesystem, so you don't have the escape-chroot-via-proc issue.) You can use unionfs to mount filesystems on and off a running jail (unlike the vserver project).</p><p>Plain old chroot doesn't quite compare; jails are an excellent way to isolate logical machines, services and processes with roughly the same overhead.</p><p>I'm glad Slashdot is finally mentioning them!</p></htmltext>
<tokenext>I 've used UML , qemu , etc.. and eventually I ditched linux for server level stuff for the sole reason that linux does n't have BSD jails the way FreeBSD does .
( I tried the vserver project , but it was overly convoluted and not supported .
Still have a legacy linux box that is " stuck " with vserver code ) I 'm glad , as it turns out , FreeBSD has better performance for server level tasks anyway I still use virtualization when I need to run alternate OS 's , of course .
( I found that when using virtualization , you should give it a virtual disk on tmpfs for virtual swap ) Jails are FAR better in most cases , you can share memory , you do n't need to run another instance of the kernel , unlike chroot , processes are hidden from each other .
( and most freebsd stuff does n't need the /proc filesystem so you do n't have the escape chroot via proc issue ) you can use the unionfs to mount filesystems on and off a running jail ( unlike the vserver project ) Plain old chroot does n't quite compare , jails are an excellent way to isolate logical machines , services and processes with roughly the same overhead.I 'm glad slashdot is finally mentioning them !</tokentext>
<sentencetext>I've used UML, qemu, etc.. and eventually I ditched linux for server level stuff for the sole reason that linux doesn't have BSD jails the way FreeBSD does.
(I tried the vserver project, but it was overly convoluted and not supported.
Still have a legacy linux box that is "stuck" with vserver code)I'm glad, as it turns out, FreeBSD has better performance for server level tasks anyway I still use virtualization when I need to run alternate OS's, of course.
(I found that when using virtualization, you should give it a virtual disk on tmpfs for virtual swap)Jails are FAR better in most cases, you can share memory, you don't need to run another instance of the kernel, unlike chroot, processes are hidden from each other.
(and most freebsd stuff doesn't need the /proc filesystem so you don't have the escape chroot via proc issue)  you can use the unionfs to mount filesystems on and off a running jail (unlike the vserver project)Plain old chroot doesn't quite compare, jails are an excellent way to isolate logical machines, services and processes with roughly the same overhead.I'm glad slashdot is finally mentioning them!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176971</parent>
</comment>
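For contrast with the comment above, here is a minimal sketch in Python of the plain chroot it compares jails against: filesystem isolation only, with processes, users, and the network still shared with the host. It assumes root privileges and a hypothetical directory tree at /jails/demo that already contains the binaries to be run; it illustrates chroot, not how FreeBSD jails themselves are created.

```python
# Plain chroot, the mechanism the comment says jails extend.
# Assumes root and a prepared tree at NEWROOT (hypothetical path).
import os

NEWROOT = "/jails/demo"  # must already contain bin/, lib/, etc.

def run_in_chroot(argv):
    pid = os.fork()
    if pid == 0:
        try:
            os.chroot(NEWROOT)       # restrict the filesystem view...
            os.chdir("/")
            os.execv(argv[0], argv)  # ...but processes and sockets stay visible host-wide
        finally:
            os._exit(127)            # reached only if chroot/exec failed
    os.waitpid(pid, 0)

# Example: run_in_chroot(["/bin/sh", "-c", "ls /"])
```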
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</id>
	<title>Re:excellent sales story</title>
	<author>Eil</author>
	<datestamp>1243870740000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p>But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.</p></div></blockquote><p>Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones, they just have some restrictions on their system calls, that's all.</p><p>I guess the thing that bugged me the most about TFA was the fact that they were using VMWare Server and actually expecting to get decent performance out of it. Somebody should have gotten fired for that. VMWare Server is great for a number of things, but performance certainly isn't one of them. If they wanted to go with VMWare, they should have shelled out for ESX in the beginning instead of continually trying to go the cheap route.</p>
	</htmltext>
<tokenext>But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware 's monitor.Actually , Xen is not at all similar to a BSD jail , no matter how you look at it .
Xen does full OS virtualization from the kernel and drivers on down to userland .
A FreeBSD jail is basically chroot on steroids .
The " virtualized " processes run exactly the same as " native " ones , they just have some restrictions on their system calls , that 's all.I guess the thing that bugged me about the most about TFA was the fact that they were using VMWare Server and actually expecting to get decent performance out of it .
Somebody should have gotten fired for that .
VMWare server is great for a number of things , but performance certainly is n't one of them .
If they wanted to go with VMWare , they should have shelled out for ESX in the beginning instead of continually trying to go the cheap route .</tokentext>
<sentencetext>But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.Actually, Xen is not at all similar to a BSD jail, no matter how you look at it.
Xen does full OS virtualization from the kernel and drivers on down to userland.
A FreeBSD jail is basically chroot on steroids.
The "virtualized" processes run exactly the same as "native" ones, they just have some restrictions on their system calls, that's all.I guess the thing that bugged me about the most about TFA was the fact that they were using VMWare Server and actually expecting to get decent performance out of it.
Somebody should have gotten fired for that.
VMWare server is great for a number of things, but performance certainly isn't one of them.
If they wanted to go with VMWare, they should have shelled out for ESX in the beginning instead of continually trying to go the cheap route.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177351</id>
	<title>Re:Well, duh!</title>
	<author>Anonymous</author>
	<datestamp>1243873800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not necessarily.  No expert here, but the means by which you are virtualizing has an effect.  The hardware on which you are virtualizing makes a tremendous difference.  Visit Sun's site, and pull up everything on VirtualBox.  There is a downloadable PDF - "Virtualization for Dummies" - which I read through last night.  Other documents are available, just browse around, and grab them to read.  Feel free to search VMWare's site for similar documents, but read.</p><p>Yes, almost all VMs today take a performance hit.  But, I have two VMs running on my desktop right now at the same time.  I still have 20% real physical memory available, and the CPU jumps from 60% to 80%, depending on what I'm actually doing.</p><p>The machine is working much closer to capacity than it ever does with only the host machine running, and performance is "good" on all three.  Not "excellent", but "good".  I won't tolerate thrashing to virtual memory, so the trick is to have enough memory.</p><p>Adding one more VM would almost certainly overload my system, causing continuous thrashing, and I would simply give up by closing one of them.</p><p>On a server, you don't go cheap on memory - you load the thing up.  It makes sense to virtualize a machine that sees little traffic, rather than buying all new hardware for it.</p><p>With VMWare infra, scripts can keep up with memory and CPU utilization, and actually start up an additional physical machine for the purpose of offloading one or more VMs when the load gets heavy.</p><p>As CPUs continue to be developed, and as the software evolves, you can expect virtualization to make more and more sense.</p></htmltext>
<tokenext>Not necessarily .
No expert here , but the means by which you are virtualizing has an effect .
The hardware on which you are virtualizing makes a tremendous difference .
Visit Sun 's site , and pull up everything on VirtualBox .
There is a downloadable PDF - " Virtualization for dummies " which I read through last night .
Other documents are available , just browse around , and grab them to read .
Feel free to search VMWare 's site for similar documents , but read.Yes , almost all VM 's today take a performance hit .
But , I have two VM 's running on my desktop right now at the same time .
I still have 20 % real physical memory available , and the CPU jumps from 60 % to 80 % , depending on what I 'm actually doing.The machine is working much closer to capacity than it ever does with only the host machine running , and performance is " good " on all three .
Not " excellent " , but " good " .
I wo n't tolerate thrashing to virtual memory , so the trick is to have enough memory.Adding one more VM would almost certainly overload my system , causing continuous thrashing , and I would simply give up by closing one of them.On a server , you do n't go cheap on memory - you load the thing up .
It makes sense to virtualize a machine that sees little traffic , rather than buying all new hardware for it.With VMWare infra , scripts can keep up with memory and CPU utilization , and actually start up an additional physical machine for the purpose of offloading one or more VM 's when the load gets heavy.As CPU 's continue to be developed , and as the software evolves , you can expect virtualization to make more and more sense .</tokentext>
<sentencetext>Not necessarily.
No expert here, but the means by which you are virtualizing has an effect.
The hardware on which you are virtualizing makes a tremendous difference.
Visit Sun's site, and pull up everything on VirtualBox.
There is a downloadable PDF - "Virtualization for dummies" which I read through last night.
Other documents are available, just browse around, and grab them to read.
Feel free to search VMWare's site for similar documents, but read.Yes, almost all VM's today take a performance hit.
But, I have two VM's running on my desktop right now at the same time.
I still have 20% real physical memory available, and the CPU jumps from 60% to 80%, depending on what I'm actually doing.The machine is working much closer to capacity than it ever does with only the host machine running, and performance is "good" on all three.
Not "excellent", but "good".
I won't tolerate thrashing to virtual memory, so the trick is to have enough memory.Adding one more VM would almost certainly overload my system, causing continuous thrashing, and I would simply give up by closing one of them.On a server, you don't go cheap on memory - you load the thing up.
It makes sense to virtualize a machine that sees little traffic, rather than buying all new hardware for it.With VMWare infra, scripts can keep up with memory and CPU utilization, and actually start up an additional physical machine for the purpose of offloading one or more VM's when the load gets heavy.As CPU's continue to be developed, and as the software evolves, you can expect virtualization to make more and more sense.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177157</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28183613</id>
	<title>VMWare Server 1.x is a dog</title>
	<author>shogarth</author>
	<datestamp>1243962300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Ever since the launch over a year ago we used VMware Server 1 for instantiating the YippieMove back-end software.</p></div></blockquote><p>
This says it all in one sentence.  VMWare Server (as opposed to ESX or ESXi) is a dog.  It barely ran with two WinXP installs and one RHEL5 on a 4-core server with 8 GB of RAM.  Life was a little better after upgrading to VMWare Server 2, but running it on top of an OS instead of using a hypervisor kills performance.  I switched the same box over to ESXi 3.5 and all three installs scream.  Additionally, the memory page deduplication driver means that I have capacity for probably another five to seven lightly loaded systems without worrying about the occasional load spike.

<br> <br>
As far as some jobs just not being well suited to virtualization, that's an obvious truth.  However, most work in that class is CPU-bound compute work.  If you are not buying storage on a shoestring budget (i.e. you can run VMFS3 on a FC or trunked Gb SAN rather than fiddle with NFS), then you should have reasonable IO performance.  The OP doesn't give any detail on storage performance (either in bandwidth or IOPS), so there's no way to tell what it requires.  Having looked at his YippieMove service web page, there doesn't seem to be a lot that is required.  It seems like they picked the low-performance, free VMWare tool and, when it didn't work, did something completely different.  This says less about VMWare than it does about the OP's design/testing process.</p>
	</htmltext>
<tokenext>Ever since the launch over a year ago we used VMware Server 1 for instantiating the YippieMove back-end software .
This says it all in one sentence .
VMWare Server ( as opposed to ESX or ESXi ) is a dog .
It barely ran with two WinXP installs and one RHEL5 on a 4-core server with 8 GB of RAM .
Life was a little better after upgrading to VMWare Server 2 , but running it on top of an OS instead of using a hypervisor kills performance .
I switched the same box over to ESXi 3.5 and all three installs scream .
Additionally , the memory page deduplication driver means that I have capacity for probably another five to seven lightly loaded systems without worrying about the occasional load spike .
As far as some jobs just not being well suited to virtualization , that 's an obvious truth .
However , most work in that class is CPU bound compute work .
If you are not buying storage on a shoestring budget ( i.e .
you can run VMFS3 on a FC or trunked Gb SAN rather than fiddle with NFS ) then you should have reasonable IO performance .
The OP does n't give any detail on storage performance ( either in bandwidth or IOPS ) so there 's no way to tell what it requires .
Having looked at his YippieMove service web page there does n't seem to be a lot that is required .
It seems like they picked the low performance , free VMWare tool and when it did n't work did something completely different .
This says less about VMWare than it does about the OP 's design/testing process .</tokentext>
<sentencetext>Ever since the launch over a year ago we used VMware Server 1 for instantiating the YippieMove back-end software.
This says it all in one sentence.
VMWare Server (as opposed to ESX or ESXi) is a dog.
It barely ran with two WinXP installs and one RHEL5 on a 4-core server with 8 GB of RAM.
Life was a little better after upgrading to VMWare Server 2, but running it on top of an OS instead of using a hypervisor kills performance.
I switched the same box over to ESXi 3.5 and all three installs scream.
Additionally, the memory page deduplication driver means that I have capacity for probably another five to seven lightly loaded systems without worrying about the occasional load spike.
As far as some jobs just not being well suited to virtualization, that's an obvious truth.
However, most work in that class is CPU bound compute work.
If you are not buying storage on a shoestring budget (i.e.
you can run VMFS3 on a FC or trunked Gb SAN rather than fiddle with NFS) then you should have reasonable IO performance.
The OP doesn't give any detail on storage performance (either in bandwidth or IOPS) so there's no way to tell what it requires.
Having looked at his YippieMove service web page there doesn't seem to be a lot that is required.
It seems like they picked the low performance, free VMWare tool and when it didn't work did something completely different.
This says less about VMWare than it does about the OP's design/testing process.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</id>
	<title>Virtualization doesn't make sense</title>
	<author>Anonymous</author>
	<datestamp>1243870020000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext><p>Well, in one case it does: when you're trying to run a <i>different</i> operating system simultaneously on the same machine. But in most "enterprise" scenarios, you just want to set up several isolated environments on the same machine, all running the same operating system. In that case, virtualization is absofuckinglutely insane.</p><p>Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the Unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?</p><p>Hypervisors have become more and more complex, and a plethora of APIs for virtualization-aware guests has appeared. <b>We're reinventing the kernel-userland split, and for no good reason</b>.</p><p>Technically, virtualization is insane for a number of reasons:</p><ul> <li>Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical</li><li>TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.</li><li>A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest</li><li> <b>Memory management is an absolute clusterfuck.</b> From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own, resulting in <b>severely</b> suboptimal decisions being made.<p>In having to set aside memory for each guest, we're returning to the OS9 memory management model. Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.</p></li></ul><p>FreeBSD's jails make a whole lot of sense. They allow several users to have their own userland while running under the same kernel --- which vastly improves, well, pretty much everything. Linux's containers will eventually provide even better support.</p></htmltext>
<tokenext>Well , in one case it does : when you 're trying to run a different operating system simultaneously on the same machine .
But in most " enterprise " scenarios , you just want to set up several isolated environments on the same machine , all running the same operating system .
In that case , virtualization is absofuckinglutely insane.Operating systems have been multi-user for a long , long time now .
The original use case for Unix involved several users sharing a large box .
Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway ? Hypervisors have become more and more complex , and a plethora of APIs for virtualization-aware guests has appeared .
We 're reinventing the kernel-userland split , and for no good reason.Technically , virtualization is insane for a number of reasons : Each guest needs its own kernel , so you need to allocate memory and disk space for all these kernels that are in fact identicalTLB flushes kill performance .
Recent x86 CPUs address the problem to some degree , but it 's still a problem.A guest 's filesystem is on a virtual block device , so it 's hard to get at it without running some kind of fileserver on the guest Memory management is an absolute clusterfuck .
From the point of view of the host , each guest 's memory is an opaque blob , and from the point of view of the guest , it has the machine to itself .
This mutual myopia renders the usual page-cache algorithms absolutely useless .
Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.In having to set aside memory for each guest , we 're returning to the OS9 memory management model .
Not only are we reinventing the wheel , but we 're reinventing a square one covered in jelly.FreeBSD 's jails make a whole lot of sense .
They allow several users to have their own userland while running under the same kernel --- which vastly improves , well , pretty much everything .
Linux 's containers will eventually provide even better support .</tokentext>
<sentencetext>Well, in one case it does: when you're trying to run a different operating system simultaneously on the same machine.
But in most "enterprise" scenarios, you just want to set up several isolated environments on the same machine, all running the same operating system.
In that case, virtualization is absofuckinglutely insane.Operating systems have been multi-user for a long, long time now.
The original use case for Unix involved several users sharing a large box.
Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?Hypervisors have become more and more complex, and a plethora of APIs for virtualization-aware guests has appeared.
We're reinventing the kernel-userland split, and for no good reason.Technically, virtualization is insane for a number of reasons: Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identicalTLB flushes kill performance.
Recent x86 CPUs address the problem to some degree, but it's still a problem.A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest Memory management is an absolute clusterfuck.
From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself.
This mutual myopia renders the usual page-cache algorithms absolutely useless.
Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.In having to set aside memory for each guest, we're returning to the OS9 memory management model.
Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.FreeBSD's jails make a whole lot of sense.
They allow several users to have their own userland while running under the same kernel --- which vastly improves, well, pretty much everything.
Linux's containers will eventually provide even better support.</sentencetext>
</comment>
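A back-of-the-envelope illustration of the duplicated-kernel point made in the comment above; the per-guest figures below are made-up assumptions, not measurements, and real numbers vary widely with the workload and with any page sharing the hypervisor does.

```python
# Illustrative arithmetic only; both per-guest figures are assumptions.
GUESTS = 10
VM_KERNEL_AND_BASE_MB = 256   # assumed kernel + base system resident in each full VM
JAIL_EXTRA_MB = 32            # assumed extra userland state per jail (kernel is shared)

vm_duplication = GUESTS * VM_KERNEL_AND_BASE_MB   # ten separate kernels and page caches
jail_duplication = GUESTS * JAIL_EXTRA_MB         # one kernel, one page cache

print(f"full VMs: ~{vm_duplication} MB of duplicated kernel/base memory")
print(f"jails:    ~{jail_duplication} MB of additional userland state")
```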
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179249</id>
	<title>Terrible name</title>
	<author>dugeen</author>
	<datestamp>1243938180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>'Jail' is such a terrible metaphor to choose for a product. I want a happy metaphor like 'sandbox', not something redolent of brutality, despair and iron sorrows.</htmltext>
<tokenext>'Jail ' is such a terrible metaphor to choose for a product .
I want a happy metaphor like 'sandbox ' , not something redolent of brutality , despair and iron sorrows .</tokentext>
<sentencetext>'Jail' is such a terrible metaphor to choose for a product.
I want a happy metaphor like 'sandbox', not something redolent of brutality, despair and iron sorrows.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178143</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>afidel</author>
	<datestamp>1243883040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Guess you don't live in the real world of IT, where DLL/library/RPM/Deb hell is a way of life and the effort in testing changes on a box containing N services is ~(N!). Also, many vendors won't support multiple pieces of their solution running on the same OS install. Not to mention the fact that security is never absolute and having a harder boundary between services can be a good thing. Oh, and VMWare solves the multiple-kernel problem by using page sharing, but it does come at a small cost in CPU power. There are many thousands of companies saving real money and real watts by using virtualization; it might not be the ultimate solution for every problem, but it sure is a good way to EASILY take better advantage of modern hardware.</htmltext>
<tokenext>Guess you do n't live in the real world of IT where DLL/library/RPM/Deb hell is a way of life and the effort in testing changes on a box containing N services is ~ ( N !
) Also many vendor 's wo n't support multiple pieces of their solution running on the same OS install .
Not to mention the fact that security is never absolute and having a harder boundary between services can be a good thing .
Oh and VMWare solves the multiple kernel problem by using page sharing , but it does come at a small cost in CPU power .
There are many thousands of companies saving real money and real watts by using virtualization , it might not be the ultimate solution for every problem but they sure are a good way to EASILY take better advantage of modern hardware .</tokentext>
<sentencetext>Guess you don't live in the real world of IT where DLL/library/RPM/Deb hell is a way of life and the effort in testing changes on a box containing N services is ~(N!
) Also many vendor's won't support multiple pieces of their solution running on the same OS install.
Not to mention the fact that security is never absolute and having a harder boundary between services can be a good thing.
Oh and VMWare solves the multiple kernel problem by using page sharing, but it does come at a small cost in CPU power.
There are many thousands of companies saving real money and real watts by using virtualization, it might not be the ultimate solution for every problem but they sure are a good way to EASILY take better advantage of modern hardware.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176797</id>
	<title>What's the diff between jail and zone?</title>
	<author>Vip</author>
	<datestamp>1243869060000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext><p>FTA, "Jails are a sort of lightweight virtualization technique available on the FreeBSD platform. They are like a chroot environment on steroids where not only the file system is isolated out but individual processes are confined to a virtual environment - like a virtual machine without the machine part."</p><p>Not knowing much about FreeBSD and it's complementary software, what is the difference between FreeBSD Jail and Solaris Zones?<br>A Solaris Zone could also be described the same way.</p><p>Vip</p></htmltext>
<tokenext>FTA , " Jails are a sort of lightweight virtualization technique available on the FreeBSD platform .
They are like a chroot environment on steroids where not only the file system is isolated out but individual processes are confined to a virtual environment - like a virtual machine without the machine part .
" Not knowing much about FreeBSD and it 's complementary software , what is the difference between FreeBSD Jail and Solaris Zones ? A Solaris Zone could also be described the same way.Vip</tokentext>
<sentencetext>FTA, "Jails are a sort of lightweight virtualization technique available on the FreeBSD platform.
They are like a chroot environment on steroids where not only the file system is isolated out but individual processes are confined to a virtual environment - like a virtual machine without the machine part.
"Not knowing much about FreeBSD and it's complementary software, what is the difference between FreeBSD Jail and Solaris Zones?A Solaris Zone could also be described the same way.Vip</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177931</id>
	<title>Re:XenServer worked for us</title>
	<author>BitZtream</author>
	<datestamp>1243879800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So you converted from Citrix to Citrix?  Or was that XenServer to XenServer?</p></htmltext>
<tokenext>So you converted from Citrix to Citrix ?
Or was that XenServer to XenServer ?</tokentext>
<sentencetext>So you converted from Citrix to Citrix?
Or was that XenServer to XenServer?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176949</id>
	<title>As for the database...</title>
	<author>orngjce223</author>
	<datestamp>1243870020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think they got /.'d.</p></htmltext>
<tokenext>I think they got / .
'd .</tokentext>
<sentencetext>I think they got /.
'd.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176709</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28183439</id>
	<title>this just in</title>
	<author>mistahkurtz</author>
	<datestamp>1243961580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>some things may not be good candidates for virtualization. ESPECIALLY in a virtualization client that loads ON TOP OF AN OS.<br>
<br>
i'm wondering where they got their performance expectations from. FTFA, they didn't buy the vmware tools that would give them the performance and capabilities they desired.<br>
<br>
so, to sum it up, they virtualized an application or applications they possibly shouldn't have. they didn't pay for the set of tools that would have given them the best shot of achieving their virtualization goals, and instead used what was quite possibly the worst tool for the job. now they're complaining about it, and pointing out how (apparently) just running a single OS on the servers, with multiple application instances, works better.<br>
<br>
what a waste of time. this is not news. don't use wireload or yippiemove. they, according to their own words in TFA, don't have a clue what they're doing.</htmltext>
<tokenext>some things may not be a good candidate for virtualization .
ESPECIALLY in a virtualization client that loads ON TOP OF AN OS .
i 'm wondering where they got their performance expectations from .
FTFA , they did n't buy the vmware tools that would give them the performance and capabilities they desired .
so , to sum it up , they virtualized an application or applications they possibly should n't have .
they did n't pay for the set of tools that would have given them the best shot of achieving their virtualization goals , and instead used what was quite possibly the worst tool for the job .
now they 're complaining about it , and pointing out how ( apparently ) just running a single OS on the servers , with multiple application instances .
what a waste of time .
this is not news .
do n't use wireload or yippiemove .
they , according to their own words in TFA , do n't have a clue what they 're doing .</tokentext>
<sentencetext>some things may not be a good candidate for virtualization.
ESPECIALLY in a virtualization client that loads ON TOP OF AN OS.
i'm wondering where they got their performance expectations from.
FTFA, they didn't buy the vmware tools that would give them the performance and capabilities they desired.
so, to sum it up, they virtualized an application or applications they possibly shouldn't have.
they didn't pay for the set of tools that would have given them the best shot of achieving their virtualization goals, and instead used what was quite possibly the worst tool for the job.
now they're complaining about it, and pointing out how (apparently) just running a single OS on the servers, with multiple application instances.
what a waste of time.
this is not news.
don't use wireload or yippiemove.
they, according to their own words in TFA, don't have a clue what they're doing.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178843</id>
	<title>Re:Different Operating Systems</title>
	<author>DavidRawling</author>
	<datestamp>1243933260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I've run both on comparable hardware. Hyper-V was way, way better, performance-wise if you had &gt;3 or 4 running VMs simply because it didn't have the extra scheduling overhead.

In either case though, if your disks are slow the VMs will be too.</htmltext>
<tokenext>I 've run both on comparable hardware .
Hyper-V was way , way better , performance-wise if you had &gt; 3 or 4 running VMs simply because it did n't have the extra scheduling overhead .
In either case though , if your disks are slow the VMs will be too .</tokentext>
<sentencetext>I've run both on comparable hardware.
Hyper-V was way, way better, performance-wise if you had &gt;3 or 4 running VMs simply because it didn't have the extra scheduling overhead.
In either case though, if your disks are slow the VMs will be too.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176957</parent>
</comment>
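As a rough illustration of the parent's point that slow disks dominate, here is a small probe one could run inside a guest and on the host to compare synchronous write latency; the file name and iteration count are arbitrary choices, and this is a crude sketch rather than a proper benchmark tool such as iozone or fio.

```python
# Crude latency probe: time small writes forced to stable storage with fsync.
import os, time

PATH = "latency_probe.bin"   # arbitrary scratch file
ITERATIONS = 200

def avg_sync_write_ms():
    with open(PATH, "wb") as f:
        start = time.perf_counter()
        for _ in range(ITERATIONS):
            f.write(b"x" * 4096)       # one 4 KiB write...
            f.flush()
            os.fsync(f.fileno())       # ...pushed through to the disk each time
        elapsed = time.perf_counter() - start
    os.remove(PATH)
    return elapsed / ITERATIONS * 1000.0

print(f"average synchronous 4 KiB write: {avg_sync_write_ms():.2f} ms")
```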
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179663</id>
	<title>VMWARE server is bad choice for this app</title>
	<author>Anonymous</author>
	<datestamp>1243942680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why would anyone use VMware Server in a production environment? It is not meant to be a heavy-duty hypervisor as it runs on top of another OS.</p><p>ESX 3.5 and 4.0 are meant to run mission-critical applications.</p><p>Anandtech ran up 8 VMs on top of several server CPUs, including the new six-core. Those 8 VMs include a heavy Oracle OLTP database (2x) and a 100 GB SQL Server database:<br><a href="http://it.anandtech.com/IT/showdoc.aspx?i=3571" title="anandtech.com" rel="nofollow">ESX 3.5 and 4.0 benchmarks</a> [anandtech.com]</p><p>
&nbsp; VMware ESX is about running several instances of Windows and Linux on top of the same machine and managing them easily. FreeBSD Jails are of course a better solution if you just want lots and lots of small machines and the management overhead is zero. Jails are what is called "Container based virtualization".<br><a href="http://it.anandtech.com/IT/showdoc.aspx?i=3349" title="anandtech.com" rel="nofollow">Container based virtualization</a> [anandtech.com]</p></htmltext>
<tokenext>Why would anyone use VMware Server in a production environment ?
It is not meant to be a heavy-duty hypervisor as it runs on top of another OS.ESX 3.5 and 4.0 are meant to run mission critical applications.Anandtech ran up 8 VMs on top several server CPUs , including the new six-core .
Those 8 VMs include a heavy Oracle OLTP database ( 2x ) and 100 GB large SQL Server database : ESX 3.5 and 4.0 benchmarks [ anandtech.com ]   VMware ESX is about running several instances of Windows and Linux on top of the same machine and manage them easily .
FreeBSD Jails are of course a better solution if you just want lots and lots of small machines and the management overhead is zero .
Jails are what is called " Container based virtualization " .Container based virtualization [ hhttp ]</tokentext>
<sentencetext>Why would anyone use VMware Server in a production environment?
It is not meant to be a heavy-duty hypervisor as it runs on top of another OS.ESX 3.5 and 4.0 are meant to run mission critical applications.Anandtech ran up 8 VMs on top several server CPUs, including the new six-core.
Those 8 VMs include a heavy Oracle OLTP database (2x) and 100 GB large SQL Server database:ESX 3.5 and 4.0 benchmarks [anandtech.com]
  VMware ESX is about running several instances of Windows and Linux on top of the same machine and manage them easily.
FreeBSD Jails are of course a better solution if you just want lots and lots of small machines and the management overhead is zero.
Jails are what is called "Container based virtualization".Container based virtualization [hhttp]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178123</id>
	<title>Linux-Vserver</title>
	<author>Daniel15</author>
	<datestamp>1243882740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>For something similar for Linux, take a look at <a href="http://linux-vserver.org/" title="linux-vserver.org" rel="nofollow">Linux-Vserver</a> [linux-vserver.org]. I've been using it for a while; it's pretty good. A while ago, I wrote a howto showing <a href="http://d15.biz/blog/2006/11/linux-vserver-debian-etch/" title="d15.biz" rel="nofollow">how to install Linux-Vserver on Debian Etch</a> [d15.biz]; most of it would still apply today :)</htmltext>
<tokenext>For something similar for Linux , take a look at Linux-Vserver [ linux-vserver.org ] .
I 've been using it for a while , it 's pretty good .
A while ago , I wrote a howto showing how to install Linux-Vserver on Debian Etch [ d15.biz ] , most of it would still apply today : )</tokentext>
<sentencetext>For something similar for Linux, take a look at Linux-Vserver [linux-vserver.org].
I've been using it for a while, it's pretty good.
A while ago, I wrote a howto showing how to install Linux-Vserver on Debian Etch [d15.biz], most of it would still apply today :)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177393</id>
	<title>Re:Virtualization is good enough</title>
	<author>Kjella</author>
	<datestamp>1243874100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>It's CYA in practise. Here's the usual chain of events:</p><p>1. Business makes requirements to vendor: We want X capacity/response time/whatever<br>2. Vendor to business side: Well, what will you do with it?<br>3. Business makes requirements to vendor: Maybe A, maybe B with maybe N or N^2 users<br>4. Vendor to business side: That was a lot of maybes. But with $CONFIG you'll be sure</p><p>Particularly if the required hardware upgrades aren't part of the negotiations with the vendor, then it's almost a certainty.</p></htmltext>
<tokenext>It 's CYA in practise .
Here 's the usual chain of events : 1 .
Business makes requirements to vendor : We want X capacity/response time/whatever2 .
Vendor to business side : Well , what will you do with it ? 3 .
Business makes requirements to vendor : Maybe A , maybe B with maybe N or N ^ 2 users4 .
Vendor to business side : That was a lot of maybes .
But with $ CONFIG you 'll be sureParticularly if the required hardware upgrades are n't part of the negotiations with the vendor , then it 's almost a certainty .</tokentext>
<sentencetext>It's CYA in practise.
Here's the usual chain of events:1.
Business makes requirements to vendor: We want X capacity/response time/whatever2.
Vendor to business side: Well, what will you do with it?3.
Business makes requirements to vendor: Maybe A, maybe B with maybe N or N^2 users4.
Vendor to business side: That was a lot of maybes.
But with $CONFIG you'll be sureParticularly if the required hardware upgrades aren't part of the negotiations with the vendor, then it's almost a certainty.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176675</id>
	<title>This is Ironic, right?</title>
	<author>Anonymous</author>
	<datestamp>1243868040000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>Oh the irony</p><blockquote><div><p>Safari can't open the page "http://www.playingwithwire.com/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/" because the server where this page is located isn't responding.</p></div></blockquote>
	</htmltext>
<tokenext>Oh the ironySafari ca n't open the page " http://www.playingwithwire.com/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/ " because the server where this page is located is n't responding .</tokentext>
<sentencetext>Oh the ironySafari can't open the page "http://www.playingwithwire.com/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/" because the server where this page is located isn't responding.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179073</id>
	<title>Re:free beats fee most of the time</title>
	<author>Colin Smith</author>
	<datestamp>1243936020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is a feature of Unix/Linux memory management. Now... If you were to separate out your applications and run each on its own server (particularly the big bloated apps), you would be able to load the servers even more heavily still, and the apps would run faster because more of their code would be shared between users and more would be resident in the CPU caches. E.g. have an OpenOffice server or cluster, have a Firefox server or cluster. Use something like gridengine to run jobs on the cluster you want.</p></htmltext>
<tokenext>This is a feature of Unix/linux memory management .
Now... If you were to separate out your applications and run each on it 's own server ( particularly the big bloated apps ) , you would be able to load the servers even more highly still , and the apps will run faster because more of their code will be shared between users and more will be resident in the cpu caches .
e.g. Have an openoffice server or cluster , have a firefox server or cluster .
Use something like gridengine to run jobs on the cluster you want .</tokentext>
<sentencetext>This is a feature of Unix/linux memory management.
Now... If you were to separate out your applications and run each on it's own server (particularly the big bloated apps), you would be able to load the servers even more highly still, and the apps will run faster because more of their code will be shared between users and more will be resident in the cpu caches.
e.g. Have an openoffice server or cluster, have a firefox server or cluster.
Use something like gridengine to run jobs on the cluster you want.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181079</id>
	<title>Re:excellent sales story</title>
	<author>SanityInAnarchy</author>
	<datestamp>1243952880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>According to TFA, which finally seems to be up, they actually considered this, and ran the entire thing on an NFS mount from a non-virtualized host.</p><p>What's particularly odd is that they don't actually mention trying a local, virtualized disk first -- they just rejected it out of hand. I would imagine NFS carries its own overhead...</p><p>BSD jails are actually sounding kind of cool, though.</p></htmltext>
<tokenext>According to TFA , which finally seems to be up , they actually considered this , and ran the entire thing on an NFS mount from a non-virtualized host.What 's particularly odd is that they do n't actually mention trying a local , virtualized disk first -- they just rejected it out of hand .
I would imagine NFS carries its own overhead...BSD jails are actually sounding kind of cool , though .</tokentext>
<sentencetext>According to TFA, which finally seems to be up, they actually considered this, and ran the entire thing on an NFS mount from a non-virtualized host.What's particularly odd is that they don't actually mention trying a local, virtualized disk first -- they just rejected it out of hand.
I would imagine NFS carries its own overhead...BSD jails are actually sounding kind of cool, though.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180215</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>Salamander</author>
	<datestamp>1243947660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p> <i>We're reinventing the kernel-userland split, and for no good reason.</i></p></div> </blockquote><p>Thank you for saying that.  The purpose of a multi-tasking multi-user OS is to allow running multiple applications with full isolation from one another.  If we need some other piece of software - like a VM hypervisor - to do that, then the OS has failed in its duty.  But wait, some people say, it's not just about multiplexing hardware, it's about migration and HA and deploying system images easily.  These are <b>also</b> facilities the OS should be providing.  Again, if we need some other piece of software then the OS has failed.</p><p>One could argue that we've evolved to a point where the functions of an OS have been separated into two layers.  One layer takes care of multiplexing the hardware; the other takes care of providing an API and an environment for things like filesystems.  Better still, you get to mix and match instances of each layer.  OK, fine.  Given the Linux community's traditional attitude toward layering (and microkernels, which this approach also resembles) it's a bit inconsistent, but fine.  That interpretation does raise some interesting questions, though.  People are putting a lot of thought into where drivers should live, and since some drivers are part of "multiplexing the hardware" then it would make sense for them to live in the hypervisor with a stub in the guest OS - just as is being done, I know.  But what about the virtual-memory system?  That's also a form of hardware multiplexing, arguably the most important.  If virtualization is your primary means of isolating users and applications from one another, why not put practically all of the virtual-memory functionality into the hypervisor and run a faster, simpler single-address-space OS on top of that?</p><p>If we're going to delegate operating-system functionality to hypervisors, let's at least think about the implications and develop a coherent model of how the two interact instead of the disorganized and piecemeal approaches we see now.</p></div>
	</htmltext>
<tokenext>We 're reinventing the kernel-userland split , and for no good reason .
Thank you for saying that .
The purpose of a multi-tasking multi-user OS is to allow running multiple applications with full isolation from one another .
If we need some other piece of software - like a VM hypervisor - to do that , then the OS has failed in its duty .
But wait , some people say , it 's not just about multiplexing hardware , it 's about migration and HA and deploying system images easily .
These are also facilities the OS should be providing .
Again , if we need some other piece of software then the OS has failed.One could argue that we 've evolved to a point where the functions of an OS have been separated into two layers .
One layer takes care of multiplexing the hardware ; the other takes care of providing an API and an environment for things like filesystems .
Better still , you get to mix and match instances of each layer .
OK , fine .
Given the Linux community 's traditional attitude toward layering ( and microkernels , which this approach also resembles ) it 's a bit inconsistent , but fine .
That interpretation does raise some interesting questions , though .
People are putting a lot of thought into where drivers should live , and since some drivers are part of " multiplexing the hardware " then it would make sense for them to live in the hypervisor with a stub in the guest OS - just as is being done , I know .
But what about the virtual-memory system ?
That 's also a form of hardware multiplexing , arguably the most important .
If virtualization is your primary means of isolating users and applications from one another , why not put practically all of the virtual-memory functionality into the hypervisor and run a faster , simpler single-address-space OS on top of that ? If we 're going to delegate operating-system functionality to hypervisors , let 's at least think about the implications and develop a coherent model of how the two interact instead of the disorganized and piecemeal approaches we see now .</tokentext>
<sentencetext> We're reinventing the kernel-userland split, and for no good reason.
Thank you for saying that.
The purpose of a multi-tasking multi-user OS is to allow running multiple applications with full isolation from one another.
If we need some other piece of software - like a VM hypervisor - to do that, then the OS has failed in its duty.
But wait, some people say, it's not just about multiplexing hardware, it's about migration and HA and deploying system images easily.
These are also facilities the OS should be providing.
Again, if we need some other piece of software then the OS has failed.One could argue that we've evolved to a point where the functions of an OS have been separated into two layers.
One layer takes care of multiplexing the hardware; the other takes care of providing an API and an environment for things like filesystems.
Better still, you get to mix and match instances of each layer.
OK, fine.
Given the Linux community's traditional attitude toward layering (and microkernels, which this approach also resembles) it's a bit inconsistent, but fine.
That interpretation does raise some interesting questions, though.
People are putting a lot of thought into where drivers should live, and since some drivers are part of "multiplexing the hardware" then it would make sense for them to live in the hypervisor with a stub in the guest OS - just as is being done, I know.
But what about the virtual-memory system?
That's also a form of hardware multiplexing, arguably the most important.
If virtualization is your primary means of isolating users and applications from one another, why not put practically all of the virtual-memory functionality into the hypervisor and run a faster, simpler single-address-space OS on top of that?If we're going to delegate operating-system functionality to hypervisors, let's at least think about the implications and develop a coherent model of how the two interact instead of the disorganized and piecemeal approaches we see now.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177513</id>
	<title>Re:excellent sales story</title>
	<author>DaemonDazz</author>
	<datestamp>1243875060000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p><div class="quote"><p>Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones; they just have some restrictions on their system calls, that's all.</p></div><p>Precisely.</p><p>Similar products in the Linux space are <a href="http://www.linux-vserver.org/" title="linux-vserver.org" rel="nofollow">Linux Vserver</a> [linux-vserver.org] (which I use) and <a href="http://www.openvz.org/" title="openvz.org" rel="nofollow">OpenVZ</a> [openvz.org].</p></div>
	</htmltext>
<tokenext>Actually , Xen is not at all similar to a BSD jail , no matter how you look at it .
Xen does full OS virtualization from the kernel and drivers on down to userland .
A FreeBSD jail is basically chroot on steroids .
The " virtualized " processes run exactly the same as " native " ones , they just have some restrictions on their system calls , that 's all.Precisely.Similar products in the Linux space are Linux Vserver [ linux-vserver.org ] ( which I use ) and OpenVZ [ openvz.org ] .</tokentext>
<sentencetext>Actually, Xen is not at all similar to a BSD jail, no matter how you look at it.
Xen does full OS virtualization from the kernel and drivers on down to userland.
A FreeBSD jail is basically chroot on steroids.
The "virtualized" processes run exactly the same as "native" ones, they just have some restrictions on their system calls, that's all.Precisely.Similar products in the Linux space are Linux Vserver [linux-vserver.org] (which I use) and OpenVZ [openvz.org].
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</parent>
</comment>
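The jail-versus-chroot point above is the key one. As a rough illustration of the base mechanism that jail(8) extends, here is a minimal Python sketch of confining a process with plain chroot; the jail root path and the service binary are made-up examples, and the script has to be started as root. A real FreeBSD jail layers on what chroot alone does not give you: per-jail process visibility, its own IP address, and restrictions on system calls such as mounts and raw sockets.

```python
import os
import pwd

# Minimal sketch of the chroot mechanism that jails build on.
# Assumes a populated root filesystem at /jails/www (hypothetical path)
# and a hypothetical service binary inside it; run as root.
JAIL_ROOT = "/jails/www"

unpriv = pwd.getpwnam("nobody")   # look up the unprivileged user first

os.chroot(JAIL_ROOT)              # confine the filesystem view to JAIL_ROOT
os.chdir("/")                     # make sure the cwd is inside the new root

# Drop root so the confined process cannot trivially escape the chroot.
os.setgid(unpriv.pw_gid)
os.setuid(unpriv.pw_uid)

# From here on, exec the confined service; paths resolve inside the jail root.
os.execv("/usr/local/sbin/httpd", ["httpd", "-DFOREGROUND"])
```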
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178763</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>julesh</author>
	<datestamp>1243975500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Technically, virtualization is insane for a number of reasons:</i></p><p><i>
&nbsp; &nbsp; &nbsp; &nbsp; * Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical</i></p><p>1. Who says the kernels are identical?  I can run each application on an environment tailored specifically for it.  I have a single server running applications that require W2K3, Linux 2.4 and Linux 2.6 here.  I also have WinXP, Solaris and OSX images for our developers' use in testing.  I don't think this scenario is as rare as you think; there are a hell of a lot of legacy applications out there that are extremely picky about OS versions they run on.  I've come across people still running netware because they need an app that depends on it; I'm pretty sure that could be put on a VM too.</p><p>2. AIUI, the hypervisor is able to detect identical memory pages and merge them.  Disk space is cheap, so I don't care that I need more of it.</p><p><i>TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.</i></p><p>Yes, but this is as much a problem with running multiple processes on the same OS, as you have a TLB flush on each user/kernel transition anyway.  Kernel/kernel transitions are rare so the additional performance overhead of virtualisation here is minimal.</p><p><i>A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest</i></p><p>Yes, this is a slight downside, but the administrative overhead involved here is small.  If the guest isn't running, I can easily mount its partition in another VM; if it is running, the cost of configuring a fileserver with admin-only access is minimal.</p><p><i>Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.</i></p><p>Yes &amp; no.  The VM is quite easily able to detect pages that have been mapped from disk (i.e. cache pages) and handle them in a sensible fashion, including merging the caches between different systems.  One problem with jails and similar techs is that an I/O intensive process running on one jail can effectively seize control of the entire cache, severely degrading performance of other jails.  This doesn't happen with VMs, as their cache is effectively partitioned.  But, on a system that's behaving properly this does result in needing more memory to achieve the same performance level, yes.</p></htmltext>
<tokenext>Technically , virtualizaiton is insane for a number of reasons :         * Each guest needs its own kernel , so you need to allocate memory and disk space for all these kernels that are in fact identical1 .
Who says the kernels are identical ?
I can run each application on an environment tailored specifically for it .
I have a single server running applications that require W2K3 , Linux 2.4 and Linux 2.6 here .
I also have WinXP , Solaris and OSX images for our developers ' use in testing .
I do n't think this scenario is as rare as you think ; there are a hell of a lot of legacy applications out there that are extremely picky about OS versions they run on .
I 've come across people still running netware because they need an app that depends on it ; I 'm pretty sure that could be put on a VM too.2 .
AIUI , the hypervisor is able to detect identical memory pages and merge them .
Disk space is cheap , so I do n't care that I need more of it.TLB flushes kill performance .
Recent x86 CPUs address the problem to some degree , but it 's still a problem.Yes , but this is as much a problem with running multiple processes on the same OS , as you have a TLB flush on each user/kernel transition anyway .
Kernel/kernel transitions are rare so the additional performance overhead of virtualisation here is minimal.A guest 's filesystem is on a virtual block device , so it 's hard to get at it without running some kind of fileserver on the guestYes , this is a slight downside , but the administrative overhead involved here is small .
If the guest is n't running , I can easily mount its partition in another VM ; if it is running , the cost of configuring a fileserver with admin-only access is minimal.Memory management is an absolute clusterfuck .
From the point of view of the host , each guest 's memory is an opaque blob , and from the point of view of the guest , it has the machine to itself .
This mutual myopia renders the usual page-cache algorithms absolutely useless .
Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.Yes &amp; no .
The VM is quite easily able to detect pages that have been mapped from disk ( i.e .
cache pages ) and handle them in a sensible fashion , including merging the caches between different systems .
One problem with jails and similar techs is that an I/O intensive process running on one jail can effectively seize control of the entire cache , severely degrading performance of other jails .
This does n't happen with VMs , as their cache is effectively partitioned .
But , on a system that 's behaving properly this does result in needing more memory to achieve the same performance level , yes .</tokentext>
<sentencetext>Technically, virtualization is insane for a number of reasons:
        * Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical1.
Who says the kernels are identical?
I can run each application on an environment tailored specifically for it.
I have a single server running applications that require W2K3, Linux 2.4 and Linux 2.6 here.
I also have WinXP, Solaris and OSX images for our developers' use in testing.
I don't think this scenario is as rare as you think; there are a hell of a lot of legacy applications out there that are extremely picky about OS versions they run on.
I've come across people still running netware because they need an app that depends on it; I'm pretty sure that could be put on a VM too.2.
AIUI, the hypervisor is able to detect identical memory pages and merge them.
Disk space is cheap, so I don't care that I need more of it.TLB flushes kill performance.
Recent x86 CPUs address the problem to some degree, but it's still a problem.Yes, but this is as much a problem with running multiple processes on the same OS, as you have a TLB flush on each user/kernel transition anyway.
Kernel/kernel transitions are rare so the additional performance overhead of virtualisation here is minimal.A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guestYes, this is a slight downside, but the administrative overhead involved here is small.
If the guest isn't running, I can easily mount its partition in another VM; if it is running, the cost of configuring a fileserver with admin-only access is minimal.Memory management is an absolute clusterfuck.
From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself.
This mutual myopia renders the usual page-cache algorithms absolutely useless.
Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.Yes &amp; no.
The VM is quite easily able to detect pages that have been mapped from disk (i.e.
cache pages) and handle them in a sensible fashion, including merging the caches between different systems.
One problem with jails and similar techs is that an I/O intensive process running on one jail can effectively seize control of the entire cache, severely degrading performance of other jails.
This doesn't happen with VMs, as their cache is effectively partitioned.
But, on a system that's behaving properly this does result in needing more memory to achieve the same performance level, yes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
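On the point above about the hypervisor detecting identical memory pages and merging them: the sketch below illustrates the general idea behind that kind of content-based page sharing, purely as a toy model and not any particular product's implementation. Real implementations work on physical pages, verify candidate pages byte for byte rather than trusting a hash, and break shared pages copy-on-write when a guest writes to them.

```python
import hashlib

PAGE_SIZE = 4096

def share_pages(guests):
    """Toy model of content-based page sharing across guest memory images.

    `guests` maps a guest name to its memory contents as bytes. Returns the
    pool of unique pages plus, per guest, the indices into that pool.
    """
    pool = {}            # content hash -> index into unique_pages
    unique_pages = []
    page_map = {}

    for name, memory in guests.items():
        indices = []
        for off in range(0, len(memory), PAGE_SIZE):
            page = memory[off:off + PAGE_SIZE]
            digest = hashlib.sha256(page).digest()
            if digest not in pool:          # first time we see this content
                pool[digest] = len(unique_pages)
                unique_pages.append(page)
            indices.append(pool[digest])    # every guest points at one copy
        page_map[name] = indices
    return unique_pages, page_map

# Two guests booted from the same template share most of their pages.
guest_a = b"\x00" * PAGE_SIZE * 4
guest_b = b"\x00" * PAGE_SIZE * 3 + b"\x01" * PAGE_SIZE
pages, _ = share_pages({"guest-a": guest_a, "guest-b": guest_b})
print(len(pages))  # 2 unique pages backing 8 guest pages
```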
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177727</id>
	<title>Re:excellent sales story</title>
	<author>BaldingByMicrosoft</author>
	<datestamp>1243877460000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>TFA wasn't running ESXi?  Thanks, now I can skip the read entirely.  Silly TFA.</p><p>Anyway, isn't "virtualization" so last year?  "Local cloud" is the groove.</p></htmltext>
<tokenext>TFA was n't running ESXi ?
Thanks , now I can skip the read entirely .
Silly TFA.Anyway , is n't " virtualization " so last year ?
" Local cloud " is the groove .</tokentext>
<sentencetext>TFA wasn't running ESXi?
Thanks, now I can skip the read entirely.
Silly TFA.Anyway, isn't "virtualization" so last year?
"Local cloud" is the groove.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177895</id>
	<title>Re:excellent sales story</title>
	<author>rachit</author>
	<datestamp>1243879560000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>The only limitation I can think of is the 4 virtual NICs; it would be good for some of our products to be able to provide a much higher number.</p></div><p>ESX 4 (very recently released) supports 10 NICs.</p></div>
	</htmltext>
<tokenext>The only limitation I can think of is the 4 virtual NIC 's , it would be good for some of our products to be able to provide a much higher number.ESX 4 ( very recently released ) supports 10 NICs .</tokentext>
<sentencetext>The only limitation I can think of is the 4 virtual NIC's, it would be good for some of our products to be able to provide a much higher number.ESX 4 (very recently released) supports 10 NICs.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177373</id>
	<title>Re:Virtualization is good enough</title>
	<author>Mr. Flibble</author>
	<datestamp>1243873980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have done consulting for a number of $LARGE_BANKS and seen exactly what you describe. I am dealing with one large company now that has 3 servers allocated to run a SINGLE piece of software. They don't need 3 servers to run it, and I suggested that they incorporate these 3 machines into their ESX network, but like you just mentioned, they want "failover" and "enough resources". Never mind that I have the same software running now on a single lower-end server that is running ESXi 3.5 with 9 other VMs on the same machine.</p></htmltext>
<tokenext>I have done consulting for a number of $ LARGE \ _BANKS and seen exactly what you describe .
I am dealing with one large company now that has 3 servers allocated to run a SINGLE piece of software .
They do n't need 3 servers to run it , and I suggested that they incorporate these 3 machines into their ESX network , but like you just mentioned , they want " failover " and " enough resources " .
Never mind that I have the same software running now on a single lower server that is running ESXi 3.5 with 9 other VMs on the same machine .</tokentext>
<sentencetext>I have done consulting for a number of $LARGE\_BANKS and seen exactly what you describe.
I am dealing with one large company now that has 3 servers allocated to run a SINGLE piece of software.
They don't need 3 servers to run it, and I suggested that they incorporate these 3 machines into their ESX network, but like you just mentioned, they want "failover" and "enough resources".
Never mind that I have the same software running now on a single lower server that is running ESXi 3.5 with 9 other VMs on the same machine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177485</id>
	<title>Re:excellent sales story</title>
	<author>Anonymous</author>
	<datestamp>1243874820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>
ESXi is free, and they could have used that.   The overhead for most I/O is a fraction of VMware Server's.
</p><p>
If they did this so long ago that ESXi wasn't available for free, then their basis for discussing problems with VMware is way outdated too; a lot changes in 14 months....
</p><p>
VMware Server simply has many issues: layering the VM filesystem on top of a bulky host filesystem; relying on a general-purpose OS to schedule VM execution; memory fragmentation; slow memory ops; contention for memory and disk (vs. inappropriate host OS caching/swapping); etc.
</p><p>
And it bears repeating:   Virtualization is not a magic pill.</p><p>
You can't deploy the technology and have it just work.     You have to understand the technology, make good design decisions starting at the lowest level (your hardware, your network, storage design, etc), configure, and deploy it properly.
</p><p>
It's not incredibly hard to deploy virtualization properly,  but it still takes expertise, and it's not going to work correctly if you don't do it right.
</p><p>
Your FreeBSD jail mail server might not work that well either, if you chose a notoriously inefficient MTA written in Java that only runs on top of X Windows.
</p></htmltext>
<tokenext>ESXi is free , and they could have used that .
The overhead for most I/O is a fraction that of VMware server 's .
If they did this so long ago that ESXi was n't available for free , then their basis for discussing problems with VMware is way outdated too , a lot changes in 14 months... . VMware Server simply has many issues : layering the VM filesystem on top of a bulky host filesystem .
Relying on a general purpose os to schedule VM execution , memory fragmentation , slow memory ops , contention for memory and disk ( VS inappropriate host OS caching/swapping ) , etc .
And it bears repeating : Virtualization is not a magic pill .
You ca n't deploy the technology and have it just work .
You have to understand the technology , make good design decisions starting at the lowest level ( your hardware , your network , storage design , etc ) , configure , and deploy it properly .
It 's not incredibly hard to deploy virtualization properly , but it still takes expertise , and it 's not going to work correctly if you do n't do it right .
Your FreeBSD jail mail server might not work that well either , if you chose a notoriously-inefficient MTA written in Java that only runs on top of XWindows .</tokentext>
<sentencetext>
ESXi is free, and they could have used that.
The overhead for most I/O is a fraction that of VMware server's.
If they did this so long ago that ESXi wasn't available for free, then their basis for discussing problems with VMware is way outdated too, a lot changes in 14 months....

VMware Server simply has many issues: layering the VM filesystem on top of a bulky host filesystem.
Relying on a general purpose os to schedule VM execution, memory fragmentation, slow memory ops, contention for memory and disk  (VS  inappropriate host OS caching/swapping), etc.
And it bears repeating:   Virtualization is not a magic pill.
You can't deploy the technology and have it just work.
You have to understand the technology, make good design decisions starting at the lowest level (your hardware, your network, storage design, etc), configure, and deploy it properly.
It's not incredibly hard to deploy virtualization properly,  but it still takes expertise, and it's not going to work correctly if you don't do it right.
Your FreeBSD jail mail server might not work that well either, if you chose a notoriously-inefficient MTA written in Java  that only runs on top of XWindows.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28198741</id>
	<title>Re:excellent sales story</title>
	<author>thanasakis</author>
	<datestamp>1244055840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You mean Solaris <b>Zones</b>. LDoms is a different thing.</p></htmltext>
<tokenext>You mean Solaris Zones .
LDoms is a different thing .</tokentext>
<sentencetext>You mean Solaris Zones.
LDoms is a different thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177927</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>chez69</author>
	<datestamp>1243879740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Right now you're looking at it from a completely x86 view.  Look at it from the point of view of a hardware-based system that's been doing it for years.  A lot of these problems have been solved already.</p><p>Let's say you have a lot of servers at big corp.  Each runs a specialized application, and each application is required to be isolated from the rest.  A good VM system like zVM can help you a ton. You get a hardware platform that has tons of mature disaster recovery solutions, and a hypervisor that can dynamically allocate resources between different VMs to the point where you don't even see it.</p><p>I mention zVM a lot because I know a lot of folks that are involved with large-scale rollouts of it, in production, with great results.</p><p>The downside is that you need people who know what they're doing, and the hardware is expensive as hell.</p></htmltext>
<tokenext>Right now your looking at it from a completely x86 view .
Look at it from teh point of view from a hardware based system that 's been doing it for years .
A lot of these problems have been solved already.Let 's say you have a lot of servers at big corp. Each runs a specialized application , and each application is required to be isolated from the rest .
A good VM system like zVM can help you a ton .
You get a hardware platform that has tons of mature disaster recovery solutions , and a hypervisor that can dynamically allocate resorces between different VMs to the point where you do n't even see it.I mention zVM a lot because I know a lot of folks that are involved with large scale rollouts of it , in production , with great results.The downside is that you need people who know what they 're doing , and the hardware is expensive as hell .</tokentext>
<sentencetext>Right now your looking at it from a completely x86 view.
Look at it from teh point of view from a hardware based system that's been doing it for years.
A lot of these problems have been solved already.Let's say you have a lot of servers at big corp.  Each runs a specialized application, and each application is required to be isolated from the rest.
A good VM system like zVM can help you a ton.
You get a hardware platform that has tons of mature disaster recovery solutions, and a hypervisor that can dynamically allocate resorces between different VMs to the point where you don't even see it.I mention zVM a lot because I know a lot of folks that are involved with large scale rollouts of it, in production, with great results.The downside is that you need people who know what they're doing, and the hardware is expensive as hell.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179069</id>
	<title>Oh my God</title>
	<author>Slashcrap</author>
	<datestamp>1243936020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They actually used VMware Server 1 for a production site. No, seriously they really did. That's what's being compared here. Is FreeBSD even a supported guest?</p><p>FreeBSD users are either a well organised trolling group or ridiculously bad at advocacy. Surely this was designed to cause outrage amongst those who know what they're doing? I just can't believe it's accidental.</p><p>How does your OS do when it isn't racing a crippled child?</p></htmltext>
<tokenext>They actually used VMware Server 1 for a production site .
No , seriously they really did .
That 's what 's being compared here .
Is FreeBSD even a supported guest ? FreeBSD users are either a well organised trolling group or ridiculously bad at advocacy .
Surely this was designed to cause outrage amongst those who know what they 're doing ?
I just ca n't believe it 's accidental.How does your OS do when it is n't racing a crippled child ?</tokentext>
<sentencetext>They actually used VMware Server 1 for a production site.
No, seriously they really did.
That's what's being compared here.
Is FreeBSD even a supported guest?FreeBSD users are either a well organised trolling group or ridiculously bad at advocacy.
Surely this was designed to cause outrage amongst those who know what they're doing?
I just can't believe it's accidental.How does your OS do when it isn't racing a crippled child?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177857</id>
	<title>Re:Sounds about right</title>
	<author>Anonymous</author>
	<datestamp>1243879200000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>look how cool i am i posted command line examples!11!!!</p></htmltext>
<tokenext>look how cool i am i posted command line examples ! 11 ! !
!</tokentext>
<sentencetext>look how cool i am i posted command line examples!11!!
!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176957</id>
	<title>Different Operating Systems</title>
	<author>Anonymous</author>
	<datestamp>1243870080000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext>In my experience, Chroots and jails are pretty crap at running different operating systems on the one box... last time I checked you couldn't run up a Windows 2008 server in a BSD jail.
<br> <br>
I think perhaps that might be something that still goes in VMWare's favour?
<br> <br>
Anyway I have a (semi) related VMWare performance question and I am going to try my luck asking it here. I have some consumer (i.e. desktop grade) hardware. ESXi is not supported on it (i.e. it can't recognise the SATA controller or the network card). So my choices for virtualisation (I want to run multiple OSs for development purposes) are VMWare Server 2.0 running on a Linux host or Microsoft Hyper-V 2008 (which qualifies as a true bare-metal hypervisor). I cannot find any reasonable comparisons of performance between these two options online. Does anyone know which is likely to get better performance, in terms of Disk I/O, network, CPU, memory, etc?</htmltext>
<tokenext>In my experience , Chroots and jails are pretty crap at running different operating systems on the one box... last time I checked you could n't run up a Windows 2008 server in a BSD jail .
I think perhaps that might be something that still goes in VMWare 's favour ?
Anyway I have a ( semi ) related VMWare performance question and I am going to try my luck asking it here .
I have some consumer ( i.e .
desktop grade ) hardware .
ESXi is not supported on it ( i.e .
it ca n't recognise the SATA controller or the network card ) .
So my choices for virtualisation ( I want to run multiple OSs for development purposes ) are VMWare Server 2.0 running on a Linux host or Microsoft Hyper-V 2008 ( which qualifies as a true bare-metal hypervisor ) .
I can not find any reasonable comparisons of performance between these two options online .
Does anyone know which is likely to get better performance , in terms of Disk I/O , network , CPU , memory , etc ?</tokentext>
<sentencetext>In my experience, Chroots and jails are pretty crap at running different operating systems on the one box... last time I checked you couldn't run up a Windows 2008 server in a BSD jail.
I think perhaps that might be something that still goes in VMWare's favour?
Anyway I have a (semi) related VMWare performance question and I am going to try my luck asking it here.
I have some consumer (i.e.
desktop grade) hardware.
ESXi is not supported on it (i.e.
it can't recognise the SATA controller or the network card).
So my choices for virtualisation (I want to run multiple OSs for development purposes) are VMWare Server 2.0 running on a Linux host or Microsoft Hyper-V 2008 (which qualifies as a true bare-metal hypervisor).
I cannot find any reasonable comparisons of performance between these two options online.
Does anyone know which is likely to get better performance, in terms of Disk I/O, network, CPU, memory, etc?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177339</id>
	<title>It's the Apps more than the OS!</title>
	<author>Anonymous</author>
	<datestamp>1243873680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Disclaimer: TFA was down (slashdotted) so I didn't bother to read it.</p><p>You're thinking too much about the OS and not enough about the apps, which are the entire reason why we have computers.</p><p>If applications were written well, and played nice with others, and had realistic sizing requirements/guidelines, then I could see your point.  However, a lot of apps frankly are poorly written with this idea of 'I can do anything on my OS that I want to.'  That leads to having to silo applications as well as oversized servers (just throw lots of hardware at my inefficient program).</p><p>Not to mention that in general there are certain workloads that are better suited to certain OSes, which can also vary depending on the IT staff supporting said applications.  I don't want to have separate hardware for each different OS.</p><p>The underlying OS architecture can matter somewhat, but until developers write better apps it's not as big a deal as one might think, would be my 2 cents.</p><p>Crappy programming beats virtualization overhead cost.  (Though I'd love it if that weren't the case!)</p></htmltext>
<tokenext>Disclaimer : TFA was down ( slashdotted ) so I did n't bother to read it.You 're thinking too much about the OS and not enough about the apps , which are the entire reason why we have computers.If applications were written well , and played nice with others , and had realistic sizing requirements/guidelines then I could see your point .
However a lot of apps frankly are poorly written with this idea of 'I can do anything on my OS that I want to .
' That leads to having to silo applications as well as oversized servers ( just throw lots of hardware at my inefficient program ) .Not to mention that in general there are certain workloads that are more appropriate for certain varying OSes , which can also vary depending on the IT staff supporting said applications .
I do n't want to have separate hardware for each different OS.The underlying OS architecture can matter somewhat , but until developers write better apps it 's not as big a deal as one might think would be my 2 cents.Crappy programing beats virtualization overhead cost .
( though I 'd love if that was n't the case !
)</tokentext>
<sentencetext>Disclaimer: TFA was down (slashdotted) so I didn't bother to read it.You're thinking too much about the OS and not enough about the apps, which are the entire reason why we have computers.If applications were written well, and played nice with others, and had realistic sizing requirements/guidelines then I could see your point.
However a lot of apps frankly are poorly written with this idea of 'I can do anything on my OS that I want to.
'  That leads to having to silo applications as well as oversized servers (just throw lots of hardware at my inefficient program).Not to mention that in general there are certain workloads that are more appropriate for certain varying OSes, which can also vary depending on the IT staff supporting said applications.
I don't want to have separate hardware for each different OS.The underlying OS architecture can matter somewhat, but until developers write better apps it's not as big a deal as one might think would be my 2 cents.Crappy programing beats virtualization overhead cost.
(though I'd love if that wasn't the case!
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178059</id>
	<title>Re:excellent sales story</title>
	<author>OrangeTide</author>
	<datestamp>1243881420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>I disagree. I consider Xen to be a kernel which other kernels are modified to run inside of; it is just a guest kernel making requests (read: system calls) to a hypervisor (a special sort of kernel) that then translates them into requests to the host kernel. But mostly I feel this way because the way I/O is handled in Xen is very much unlike the way VMware does it (go find my resume, I used to be an ESX developer at VMware).</p><p>Because Xen was originally designed to function without special hardware extensions to support virtualization, it is a virtual machine in the same sense that Unix is a virtual machine (processes were literally virtual machines from day 1 in Unix). Xen just jams one more layer above processes.</p><p>BSD Jails are just a more Unix way of virtualizing a set of processes than Xen is. Xen requires an entire kernel to encapsulate the virtualization; BSD jails do not. In my opinion that is where they differ the most, but that difference is almost unimportant.</p></htmltext>
<tokenext>I disagree .
I consider Xen to be a kernel which other kernels are modified to run inside of , it is just a guest kernel making requests ( read system calls ) to a hypervisor ( a special sort of kernel ) that then translates it into requests to the host kernel .
But mostly I feel this way because of the way I/O is handled in Xen is very much unlike the way VMware does it ( go find my resume , I used to be an ESX developer at VMware ) .Because Xen was originally designed to function without special hardware extensions to support virtualization it is a virtual machine in the same sense that Unix is a virtual machine ( processes were literally virtual machines from day 1 in Unix ) .
Xen just jams one more layer above processes.BSD Jails are just a more Unix way of virtualizing a set of processes than Xen is .
Xen requires an entire kernel to encapsulate the virtualization , BSD jails do not .
In my opinion that is where they differ the most , but that difference is almost unimportant .</tokentext>
<sentencetext>I disagree.
I consider Xen to be a kernel which other kernels are modified to run inside of, it is just a guest kernel making requests(read system calls) to a hypervisor(a special sort of kernel) that then translates it into requests to the host kernel.
But mostly I feel this way because of the way I/O is handled in Xen is very much unlike the way VMware does it (go find my resume, I used to be an ESX developer at VMware).Because Xen was originally designed to function without special hardware extensions to support virtualization it is a virtual machine in the same sense that Unix is a virtual machine(processes were literally virtual machines from day 1 in Unix).
Xen just jams one more layer above processes.BSD Jails are just a more Unix way of virtualizing a set of processes than Xen is.
Xen requires an entire kernel to encapsulate the virtualization, BSD jails do not.
In my opinion that is where they differ the most, but that difference is almost unimportant.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180563</id>
	<title>DBAs and Virtualization</title>
	<author>Bigmilt8</author>
	<datestamp>1243949880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>You wasted your time.  I'm a DBA with a programming background.  Virtualization is not suitable for mid-size to large database environments.  Database software is designed to handle all I/O and memory issues internally.  The virtualization software just gets in the way.</htmltext>
<tokenext>You wasted your time .
I 'm a DBA with a programming background .
Virtualization is not suitable for mid- to large- database environments .
Database software is designed to handle all IO and memory issues internally .
The virtualization software just gets in the way .</tokentext>
<sentencetext>You wasted your time.
I'm a DBA with a programming background.
Virtualization is not suitable for mid- to large- database environments.
Database software is designed to handle all IO and memory issues internally.
The virtualization software just gets in the way.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177283</id>
	<title>Re:excellent sales story</title>
	<author>Thundersnatch</author>
	<datestamp>1243873260000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>It would be nice to see some sort of virtual SAN integrated into the VMs</p></div><p>Something <a href="http://h18006.www1.hp.com/products/storage/software/vsa/index.html" title="hp.com">like this</a> [hp.com] you mean? Turns the local storage on any VMware host into part of a full-featured, clustered, iSCSI SAN. Not cheap though (about $2500 per TB)</p></div>
	</htmltext>
<tokenext>It would be nice to see some sort of virtual SAN integrated into the VMsSomething like this [ hp.com ] you mean ?
Turns the local storage on any VMware host into part of a full-featured , clustered , iSCSI SAN .
Not cheap though ( about $ 2500 per TB )</tokentext>
<sentencetext>It would be nice to see some sort of virtual SAN integrated into the VMsSomething like this [hp.com] you mean?
Turns the local storage on any VMware host into part of a full-featured, clustered, iSCSI SAN.
Not cheap though (about $2500 per TB)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178719</id>
	<title>NFS</title>
	<author>kasperd</author>
	<datestamp>1243975020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>We had heard before that I/O performance and disk performance are the weaknesses of virtualization but we thought we could work around that by putting the job databases on an NFS export from a non virtualized server.</p></div></blockquote><p>Sounds to me like they heard about some potential performance problem, and without understanding that problem or trying to compare performance of various solutions, they decided NFS was the solution for that performance problem.<br> <br>
Did they ever try using the virtual block devices provided by the virtualization rather than the NFS solution? My guess is that NFS was actually the reason for their performance problems.</p></div>
	</htmltext>
<tokenext>We had heard before that I/O performance and disk performance are the weaknesses of virtualization but we thought we could work around that by putting the job databases on an NFS export from a non virtualized server.Sounds to me like they heard about some potential performance problem , and without understanding that problem or trying to compare performance of various solutions , they decided NFS was the solution for that performance problem .
Did they ever try using the virtual block devices provided by the virtualization rather than the NFS solution ?
My guess is that NFS was actually the reason for their performance problems .</tokentext>
<sentencetext>We had heard before that I/O performance and disk performance are the weaknesses of virtualization but we thought we could work around that by putting the job databases on an NFS export from a non virtualized server.Sounds to me like they heard about some potential performance problem, and without understanding that problem or trying to compare performance of various solutions, they decided NFS was the solution for that performance problem.
Did they ever try using the virtual block devices provided by the virtualization rather than the NFS solution?
My guess is that NFS was actually the reason for their performance problems.
	</sentencetext>
</comment>
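If the suspicion above is right, it is easy to test: the commit path of a synchronously-written database is dominated by small write-plus-fsync cycles, and NFS tends to be at its worst exactly there. Below is a small probe one could run against both a directory on the NFS export and one on a virtual block device; the mount points are hypothetical placeholders.

```python
import os
import time

def fsync_latency(directory, writes=200, size=4096):
    """Rough average latency of small write+fsync cycles in `directory`.

    This mimics what a synchronous SQLite commit does per transaction,
    which is where NFS-backed databases usually fall over.
    """
    path = os.path.join(directory, "fsync-probe.tmp")
    payload = b"x" * size
    start = time.perf_counter()
    with open(path, "wb") as fh:
        for _ in range(writes):
            fh.write(payload)
            fh.flush()
            os.fsync(fh.fileno())     # force the data out, like a DB commit
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return elapsed / writes

# Hypothetical mount points; substitute the real NFS export and local/virtual disk.
for target in ("/mnt/nfs-jobdb", "/var/db/jobdb"):
    print(target, "%.2f ms per commit" % (fsync_latency(target) * 1000))
```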
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176923</id>
	<title>Re:-1, Flamebait</title>
	<author>larry bagina</author>
	<datestamp>1243869840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Linux's chroot is actually BSD's chroot.  Bill Joy invented it.</htmltext>
<tokenext>Linux 's chroot is actually BSD 's chroot .
Bill Joy invented it .</tokentext>
<sentencetext>Linux's chroot is actually BSD's chroot.
Bill Joy invented it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176709</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176625</id>
	<title>Silly faggots</title>
	<author>Anonymous</author>
	<datestamp>1243867620000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Don't you know?  BSD is dead.  Lick my nigger-flavoured ballsack you dirty kikes.</p></htmltext>
<tokenext>Do n't you know ?
BSD is dead .
Lick my nigger-flavoured ballsack you dirty kikes .</tokentext>
<sentencetext>Don't you know?
BSD is dead.
Lick my nigger-flavoured ballsack you dirty kikes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176937</id>
	<title>I/O on the free "VMWare Server" sucks</title>
	<author>mrbill</author>
	<datestamp>1243869960000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>The I/O performance on the free "VMWare Server" product *sucks* - because it's running on top of a host OS, and not on the bare metal.<br>I'm not surprised that FreeBSD Jails had better performance.  VMWare Server is great for test environments and such, but I wouldn't ever use it in production.<br>It's not at all near the same class of product as the VMWare Infrastructure stuff (ESX, ESXi, etc.)<br><br>VMWare offers VMWare ESXi as a free download, and I/O performance under it would have been orders of magnitude better.<br>However, it does have the drawback of requiring a Windows machine (or a Windows VM) to run the VMWare Infrastructure management client.</htmltext>
<tokenext>The I/O performance on the free " VMWare Server " product * sucks * - because it 's running on top of a host OS , and not on the bare metal.I 'm not surprised that FreeBSD Jails had better performance .
VMWare Server is great for test environments and such , but I would n't ever use it in production.It 's not at all near the same class of product as the VMWare Infrastructure stuff ( ESX , ESXi , etc .
) VMWare offers VMWare ESXi as a free download , and I/O performance under it would have been orders of magnitude better.However , it does have the drawback of requiring a Windows machine ( or a Windows VM ) to run the VMWare Infrastructure management client .</tokentext>
<sentencetext>The I/O performance on the free "VMWare Server" product *sucks* - because it's running on top of a host OS, and not on the bare metal.I'm not surprised that FreeBSD Jails had better performance.
VMWare Server is great for test environments and such, but I wouldn't ever use it in production.It's not at all near the same class of product as the VMWare Infrastructure stuff (ESX, ESXi, etc.
)VMWare offers VMWare ESXi as a free download, and I/O performance under it would have been orders of magnitude better.However, it does have the drawback of requiring a Windows machine (or a Windows VM) to run the VMWare Infrastructure management client.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178901</id>
	<title>OpenVZ</title>
	<author>billysara</author>
	<datestamp>1243933800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>OpenVZ is often overlooked for this kind of workload.  _Kind_ of similar to a jail environment.  We use it for a lot of "light" servers - project websites, that kind of thing - but it will handle a lot more than that.  <a href="http://wiki.openvz.org/Main_Page" title="openvz.org" rel="nofollow">http://wiki.openvz.org/Main_Page</a> [openvz.org].  Easy to install, really easy to configure &amp; manage.</p></htmltext>
<tokenext>OpenVZ is often overlooked for this kind of workload .
\ _Kind \ _ of similar to a jail environment .
We use it for a lot of " light " servers - project websites , that kind of thing but it will handle a lot more than that .
http : //wiki.openvz.org/Main \ _Page [ openvz.org ] .
Easy to install , really easy to configure &amp; manage .</tokentext>
<sentencetext>OpenVZ is often overlooked for this kind of workload.
\_Kind\_ of similar to a jail environment.
We use it for a lot of "light" servers - project websites, that kind of thing but it will handle a lot more than that.
http://wiki.openvz.org/Main\_Page [openvz.org] .
Easy to install, really easy to configure &amp; manage.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913</id>
	<title>Different tools for different jobs</title>
	<author>ErMaC</author>
	<datestamp>1243869780000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext><p>So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't...</p><p>But here's what I do know. FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago. That means that either:<br>A) They were running on the Hosted VMware Server product, whose performance is NOT that impressive (it is a Hosted Virtualization product, not a true Hypervisor)<br>or B) They were running the unsupported OS on ESX Server, which means there was no VMware Tools available. The drivers included in the Tools package vastly improve things like storage and network performance, which means no wonder their performance stunk.</p><p>But moreover, Jails (and other OS-virtualization schemes) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison. Parallels Virtuozzo would be a much more apt comparison.</p><p>OS-Virtualization has some performance advantages, for sure. But do you want to run Windows and Linux on the same physical server? Sorry, no luck there, you're virtualizing the OS, not virtual machines. Do you want some of the features like live migration, high availability, and now features like Fault Tolerance? Those don't exist yet. I'm sure they will one day, but today they don't, or at least not with the same level of support that VMware has (or Citrix, Oracle or MS).</p><p>If you're a company that's trying to do web hosting, or run lots of very very similar systems that do the same, performance-centric task, then yes! OS Virtualization is for you! If you're like 95\% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided from a real system-level virtualization platform, use those.</p><p>Disclosure: I work for a VMware and Microsoft reseller, but I also run Parallels Virtuozzo in our lab, where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers...</p></htmltext>
<tokenext>So I would love to RTFA to make sure about this , but their high-performance web servers running on FreeBSD jails are down , so I ca n't...But here 's what I do know .
FreeBSD has n't been a supported OS on ESX Server until vSphere came out less than two weeks ago .
That means that either : A ) They were running on the Hosted VMware Server product , whose performance is NOT that impressive ( it is a Hosted Virtualization product , not a true Hypervisor ) or B ) They were running the unsupported OS on ESX Server , which means there was no VMware Tools available .
The drivers included in the Tools package vastly improve things like storage and network performance , which means no wonder their performance stunk.But moreover , Jails ( and other OS-virtualization schemes ) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison .
Parallels Virtuozzo would be a much more apt comparison.OS-Virtualization has some performance advantages , for sure .
But do you want to run Windows and Linux on the same physical server ?
Sorry , no luck there , you 're virtualizing the OS , not virtual machines .
Do you want some of the features like live migration , high availability , and now features like Fault Tolerance ?
Those do n't exist yet .
I 'm sure they will one day , but today they do n't , or at least not with the same level of support that VMware has ( or Citrix , Oracle or MS ) .If you 're a company that 's trying to do web hosting , or run lots of very very similar systems that do the same , performance-centric task , then yes !
OS Virtualization is for you !
If you 're like 95 \ % of datacenters out there that have mixed workloads , mixed OS versions , and require deep features that are provided from a real system-level virtualization platform , use those.Disclosure : I work for a VMware and Microsoft reseller , but I also run Parallels Virtuozzo in our lab , where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers.. .</tokentext>
<sentencetext>So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't...But here's what I do know.
FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago.
That means that either:A) They were running on the Hosted VMware Server product, whose performance is NOT that impressive (it is a Hosted Virtualization product, not a true Hypervisor)or B) They were running the unsupported OS on ESX Server, which means there was no VMware Tools available.
The drivers included in the Tools package vastly improve things like storage and network performance, which means no wonder their performance stunk.But moreover, Jails (and other OS-virtualization schemes) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison.
Parallels Virtuozzo would be a much more apt comparison.OS-Virtualization has some performance advantages, for sure.
But do you want to run Windows and Linux on the same physical server?
Sorry, no luck there, you're virtualizing the OS, not virtual machines.
Do you want some of the features like live migration, high availability, and now features like Fault Tolerance?
Those don't exist yet.
I'm sure they will one day, but today they don't, or at least not with the same level of support that VMware has (or Citrix, Oracle or MS).If you're a company that's trying to do web hosting, or run lots of very very similar systems that do the same, performance-centric task, then yes!
OS Virtualization is for you!
If you're like 95\% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided from a real system-level virtualization platform, use those.Disclosure: I work for a VMware and Microsoft reseller, but I also run Parallels Virtuozzo in our lab, where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177381</id>
	<title>Not saying anything bad about BSD Jails....</title>
	<author>Anonymous</author>
	<datestamp>1243874040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"we added more database indexes..."</p><p>I have no experience with BSD Jails so I can't comment, but...</p><p>YES IF YOU ADD NEEDED INDEXES TO A SIGNIFICANTLY SIZED DATABASE A 10X PERFORMANCE INCREASE (OR EVEN FAR GREATER) IS NOT UNHEARD OF</p></htmltext>
<tokenext>" we added more database indexes... " I have no experience with BSD Jails so I ca n't comment , but...YES IF YOU ADD NEEDED INDEXES TO A SIGNIFICANTLY SIZED DATABASE A 10X PERFORMANCE INCREASE ( OR EVEN FAR GREATER ) IS NOT UNHEARD OF</tokentext>
<sentencetext>"we added more database indexes..."I have no experience with BSD Jails so I can't comment, but...YES IF YOU ADD NEEDED INDEXES TO A SIGNIFICANTLY SIZED DATABASE A 10X PERFORMANCE INCREASE (OR EVEN FAR GREATER) IS NOT UNHEARD OF</sentencetext>
</comment>
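The parent's point stands on its own: a missing index on a large table can account for a 10x gap (or far more) regardless of whether the database runs on metal, in a jail, or in a VM. A self-contained sketch using Python's standard sqlite3 module; the table and column names are invented purely for illustration.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (id INTEGER PRIMARY KEY, account TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO transfers (account, status) VALUES (?, ?)",
    ((f"user{i % 50000}", "queued") for i in range(500000)),
)
conn.commit()

def timed_lookup():
    # Count the rows for one account and report how long the query took.
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM transfers WHERE account = ?", ("user123",)).fetchone()
    return time.perf_counter() - start

before = timed_lookup()   # full table scan over 500,000 rows
conn.execute("CREATE INDEX idx_transfers_account ON transfers (account)")
after = timed_lookup()    # index lookup touching only the matching rows
print(f"scan: {before*1000:.1f} ms, indexed: {after*1000:.1f} ms")
```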
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182911</id>
	<title>Don't run a database in a VM</title>
	<author>Evro</author>
	<datestamp>1243959660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I learned this myself, and just don't do it.  VMWare is awesome for CPU- or memory-intensive tasks, but for IO-intensive stuff like databases it's horrible.  At least that's my experience, with ESX 3.5 and an iSCSI SAN.</p></htmltext>
<tokenext>I learned this myself , and just do n't do it .
VMWare is awesome for CPU- or memory-intensive tasks , but for IO-intensive stuff like databases it 's horrible .
At least that 's my experience , with ESX 3.5 and an iSCSI SAN .</tokentext>
<sentencetext>I learned this myself, and just don't do it.
VMWare is awesome for CPU- or memory-intensive tasks, but for IO-intensive stuff like databases it's horrible.
At least that's my experience, with ESX 3.5 and an iSCSI SAN.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177311</id>
	<title>Re:XenServer worked for us</title>
	<author>machine321</author>
	<datestamp>1243873500000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>management is significantly better.</p></div><p>That usually solves a lot of performance problems.</p></div>
	</htmltext>
<tokenext>management is significantly better.That usually solves a lot of performance problems .</tokentext>
<sentencetext>management is significantly better.That usually solves a lot of performance problems.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178629</id>
	<title>ICore Virtual Accounts</title>
	<author>Ostracus</author>
	<datestamp>1243974120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/ICore_Virtual_Accounts" title="wikipedia.org">iCore Virtual Accounts</a> [wikipedia.org] Container based virtualization for Windows.</p></htmltext>
<tokenext>iCore Virtual Accounts [ wikipedia.org ] Container based virtualization for Windows .</tokentext>
<sentencetext>iCore Virtual Accounts [wikipedia.org] Container based virtualization for Windows.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28193617</id>
	<title>BSD is alive?</title>
	<author>Anonymous</author>
	<datestamp>1244029620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So BSD is still alive huh?</p></htmltext>
<tokenext>So BSD is still alive huh ?</tokentext>
<sentencetext>So BSD is still alive huh?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28188467</id>
	<title>Re:excellent sales story</title>
	<author>atamido</author>
	<datestamp>1243939800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Xen says you simply need a CPU that supports VT (or whatever the AMD equivalent is) to run Windows VMs.  You should grab a copy of Vista/2008/7 to install and test as you don't need a license key or activation for a few weeks.  We haven't had any issues running them ourselves.</p></htmltext>
<tokenext>Xen says you simply need a CPU that supports VT ( or whatever the AMD equivalent is ) to run Windows VMs .
You should grab a copy of Vista/2008/7 to install and test as you do n't need a license key or activation for a few weeks .
We have n't had any issues running them ourselves .</tokentext>
<sentencetext>Xen says you simply need a CPU that supports VT (or whatever the AMD equivalent is) to run Windows VMs.
You should grab a copy of Vista/2008/7 to install and test as you don't need a license key or activation for a few weeks.
We haven't had any issues running them ourselves.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180285</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177439</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>Anonymous</author>
	<datestamp>1243874460000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical"</p><p>Wrong - transparent page sharing and linked cloning address both of these "problems," which BTW also exist in a physical world. Keeping the kernels separate is a good thing when dealing with the typical shit applications that get installed in the average datacenter. (Yes, I know TPS and linked clones are only available on one product.)</p><p>"TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem."</p><p>Wrong - Hardware virtualization (AMD-V and Intel VT) address this nicely. (And also paravirt to a lesser extent.)</p><p>"A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest"</p><p>WTF are you even talking about there? Get at it from where?</p><p>"From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself."</p><p>Wrong - tools installed in the guest give the host a window into the VM, which the hypervisor can use to make smart decisions about memory allocation.</p><p>"FreeBSD's jails make a whole lot of sense."</p><p>Maybe for FreeBSD apps, but what percentage of datacenter apps run on FreeBSD? Maybe 10 percent? (Probably far less.)</p><p>"Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?"</p><p>Virtualization is not about users sharing the box, it's about applications co-existing on the box, even if those applications require 50 different operating systems. Jails and virtualization solve very different problems. Besides, nobody says that you can't use jails where appropriate and virtualization where appropriate.</p></htmltext>
<tokenext>" Each guest needs its own kernel , so you need to allocate memory and disk space for all these kernels that are in fact identical " Wrong - transparent page sharing and linked cloning address both of these " problems , " which BTW also exist in a physical world .
Keeping the kernels separate is a good thing when dealing with the typical shit applications that get installed in the average datacenter .
( Yes , I know TPS and linked clones are only available on one product .
) " TLB flushes kill performance .
Recent x86 CPUs address the problem to some degree , but it 's still a problem .
" Wrong - Hardware virtualization ( AMD-V and Intel VT ) address this nicely .
( And also paravirt to a lesser extent .
) " A guest 's filesystem is on a virtual block device , so it 's hard to get at it without running some kind of fileserver on the guest " WTF are you even talking about there ?
Get at it from where ?
" From the point of view of the host , each guest 's memory is an opaque blob , and from the point of view of the guest , it has the machine to itself .
" Wrong - tools installed in the guest give the host a window into the VM , which the hypervisor can use to make smart decisions about memory allocation .
" FreeBSD 's jails make a whole lot of sense .
" Maybe for FreeBSD apps , but what percentage of datacenter apps run on FreeBSD ?
Maybe 10 percent ?
( Probably far less .
) " Operating systems have been multi-user for a long , long time now .
The original use case for Unix involved several users sharing a large box .
Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway ?
" Virtualization is not about users sharing the box , it 's about applications co-existing on the box , even if those applications require 50 different operating systems .
Jails and virtualization solve very different problems .
Besides , nobody says that you ca n't use jails where appropriate and virtualization where appropriate .</tokentext>
<sentencetext>"Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical"Wrong - transparent page sharing and linked cloning address both of these "problems," which BTW also exist in a physical world.
Keeping the kernels separate is a good thing when dealing with the typical shit applications that get installed in the average datacenter.
(Yes, I know TPS and linked clones are only available on one product.
)"TLB flushes kill performance.
Recent x86 CPUs address the problem to some degree, but it's still a problem.
"Wrong - Hardware virtualization (AMD-V and Intel VT) address this nicely.
(And also paravirt to a lesser extent.
)"A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest"WTF are you even talking about there?
Get at it from where?
"From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself.
"Wrong - tools installed in the guest give the host a window into the VM, which the hypervisor can use to make smart decisions about memory allocation.
"FreeBSD's jails make a whole lot of sense.
"Maybe for FreeBSD apps, but what percentage of datacenter apps run on FreeBSD?
Maybe 10 percent?
(Probably far less.
)"Operating systems have been multi-user for a long, long time now.
The original use case for Unix involved several users sharing a large box.
Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?
"Virtualization is not about users sharing the box, it's about applications co-existing on the box, even if those applications require 50 different operating systems.
Jails and virtualization solve very different problems.
Besides, nobody says that you can't use jails where appropriate and virtualization where appropriate.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182401</id>
	<title>It depends on the amount of compartmentalization.</title>
	<author>argent</author>
	<datestamp>1243958040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Jails aren't the low end, even. UNIX is a multiuser environment, and simply running multiple instances of the server in separate directory trees provides all the isolation you need. If that's not enough, you can use chroot, then jails or the various equivalents on Linux, then lightweight VMs, and full VMs, blades, separate servers...</p><p>It's a continuum. The best solution depends on the overhead you can afford to lose and how tight the compartmentalization has to be. For a lot of problems I've seen people using VMs to solve, even jails are kind of heavyweight.</p><p>Windows, of course, has a different application model and it's harder to use some of these intermediate solutions... but you should still be able to do things like run multiple Apache servers bound to different addresses in different directory trees and user IDs, instead of taking on the overhead of a VM.</p></htmltext>
<tokenext>Jails are n't the low end , even .
UNIX is a multiuser environment , and simply running multiple instances of the server in separate directory trees provides all the isolation you need .
If that 's not enough , you can use chroot , then jails or the various equivalents on Linux , then lightweight VMs , and full VMs , blades , separate servers...It 's a continuum .
The best solution depends on the overhead you can afford to lose and how tight the compartmentalization has to be .
For a lot of problems I 've seen people using VMs to solve , even jails are kind of heavyweight.Windows , of course , has a different application model and it 's harder to use some of these intermediate solutions... but you should still be able to do things like run multiple Apache servers bound to different addresses in different directory trees and user IDs , instead of taking on the overhead of a VM .</tokentext>
<sentencetext>Jails aren't the low end, even.
UNIX is a multiuser environment, and simply running multiple instances of the server in separate directory trees provides all the isolation you need.
If that's not enough, you can use chroot, then jails or the various equivalents on Linux, then lightweight VMs, and full VMs, blades, separate servers...It's a continuum.
The best solution depends on the overhead you can afford to lose and how tight the compartmentalization has to be.
For a lot of problems I've seen people using VMs to solve, even jails are kind of heavyweight.Windows, of course, has a different application model and it's harder to use some of these intermediate solutions... but you should still be able to do things like run multiple Apache servers bound to different addresses in different directory trees and user IDs, instead of taking on the overhead of a VM.</sentencetext>
</comment>
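To make the cheap end of that continuum concrete, here is a minimal sketch of the "multiple instances in separate directory trees, bound to different addresses" idea, with no jail or hypervisor involved. The document roots, addresses, and ports are invented for illustration; dropping each worker to its own user ID would additionally require root and a call to os.setuid().

import http.server
import multiprocessing
import os

# One entry per instance: (document root, bind address, port) -- all invented.
INSTANCES = [
    ("/srv/site-a", "127.0.0.1", 8001),
    ("/srv/site-b", "127.0.0.2", 8002),
]

def serve(root, host, port):
    os.chdir(root)  # confine this worker to its own directory tree
    handler = http.server.SimpleHTTPRequestHandler
    with http.server.ThreadingHTTPServer((host, port), handler) as httpd:
        httpd.serve_forever()

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=serve, args=spec) for spec in INSTANCES]
    for w in workers:
        w.start()
    for w in workers:
        w.join()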
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177673</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>SlothDead</author>
	<datestamp>1243876800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Well, no.
VMware virtual machines share the memory that is identical, so you don't need to allocate that memory for every client (e.g. Kernel). Also, they share the files on the hard drive as long as they are identical and when running the same programs they even share some chunks of the RAM used by that program.</htmltext>
<tokenext>Well , no .
VMware virtual machines share the memory that is identical , so you do n't need to allocate that memory for every client ( e.g .
Kernel ) . Also , they share the files on the hard drive as long as they are identical and when running the same programs they even share some chunks of the RAM used by that program .</tokentext>
<sentencetext>Well, no.
VMware virtual machines share the memory that is identical, so you don't need to allocate that memory for every client (e.g.
Kernel). Also, they share the files on the hard drive as long as they are identical and when running the same programs they even share some chunks of the RAM used by that program.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177033</id>
	<title>Re:excellent sales story</title>
	<author>Anonymous</author>
	<datestamp>1243870680000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.</i></p><p>Virtualization makes many things a lot easier. Testing, rollback, provisioning, portability &amp; backup.</p><p>The success of virtualization is due to failures of the software industry to have good separation between applications &amp; operating systems. The one-application-per-server trend is the result, which leads to a lot of idle capacity.</p></htmltext>
<tokenext>Virtualization is an excellent story to sell .
It is a process that can be applied to a wide range of problems.Virtualization makes many things a lot easier .
Testing , rollback , provisioning , portability &amp; backup.The success of virtualization is due to failures of the software industry to have good separation between applications &amp; operating systems .
The one-application-per-server trend is the result , which leads to a lot of idle capacity .</tokentext>
<sentencetext>Virtualization is an excellent story to sell.
It is a process that can be applied to a wide range of problems.Virtualization makes many things a lot easier.
Testing, rollback, provisioning, portability &amp; backup.The success of virtualization is due to failures of the software industry to have good separation between applications &amp; operating systems.
The one-application-per-server trend is the result, which leads to a lot of idle capacity.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179385</id>
	<title>Linux alternatives</title>
	<author>speedtux</author>
	<datestamp>1243939560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There is no reason to switch to BSD just to get this functionality; Linux has plenty of choices for isolating software, allowing all sorts of tradeoffs between performance and isolation.</p><p>If you want something more lightweight than VMware/VirtualBox, you have plenty of choices on Linux: KVM, AppArmor, vserver, OpenVZ, LSM, SELinux, or even the BSD jails patch.</p></htmltext>
<tokenext>There is no reason to switch to BSD just to get this functionality ; Linux has plenty of choices for isolating software , allowing all sorts of tradeoffs between performance and isolation.If you want something more lightweight than VMware/VirtualBox , you have plenty of choices on Linux : KVM , AppArmor , vserver , OpenVZ , LSM , SELinux , or even the BSD jails patch .</tokentext>
<sentencetext>There is no reason to switch to BSD just to get this functionality; Linux has plenty of choices for isolating software, allowing all sorts of tradeoffs between performance and isolation.If you want something more lightweight than VMware/VirtualBox, you have plenty of choices on Linux: KVM, AppArmor, vserver, OpenVZ, LSM, SELinux, or even the BSD jails patch.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177305</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>billybob_jcv</author>
	<datestamp>1243873440000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Sorry, but I think you're missing several important points.  In a company with several hundred physical servers and limited human resources, no one has the time to fool around with tuning a kernal and several apps to all run together in the same OS instance.  We need to build standard images and deploy them very quickly, and then we need a way to easily manage all of the applications.  We also need to be able to very quickly move applications to different HW when they grow beyond their current resources, we refresh server HW or there is a HW failure.  High Availability is expensive, and it is just not feasible for many midrange applications that are running on physical boxes.  Does all of this lead to less than optimal memory &amp; I/O performance?  Sure - but if my choice is hiring 2 more high-priced server engineers, or buying a pile of blades and ESX licenses, I will bet buying more HW &amp; SW will end up being the better overall solution.</htmltext>
<tokenext>Sorry , but I think you 're missing several important points .
In a company with several hundred physical servers and limited human resources , no one has the time to fool around with tuning a kernal and several apps to all run together in the same OS instance .
We need to build standard images and deploy them very quickly , and then we need a way to easily manage all of the applications .
We also need to be able to very quickly move applications to different HW when they grow beyond their current resources , we refresh server HW or there is a HW failure .
High Availability is expensive , and it is just not feasible for many midrange applications that are running on physical boxes .
Does all of this lead to less than optimal memory &amp; I/O performance ?
Sure - but if my choice is hiring 2 more high-priced server engineers , or buying a pile of blades and ESX licenses , I will bet buying more HW &amp; SW will end up being the better overall solution .</tokentext>
<sentencetext>Sorry, but I think you're missing several important points.
In a company with several hundred physical servers and limited human resources, no one has the time to fool around with tuning a kernal and several apps to all run together in the same OS instance.
We need to build standard images and deploy them very quickly, and then we need a way to easily manage all of the applications.
We also need to be able to very quickly move applications to different HW when they grow beyond their current resources, we refresh server HW or there is a HW failure.
High Availability is expensive, and it is just not feasible for many midrange applications that are running on physical boxes.
Does all of this lead to less than optimal memory &amp; I/O performance?
Sure - but if my choice is hiring 2 more high-priced server engineers, or buying a pile of blades and ESX licenses, I will bet buying more HW &amp; SW will end up being the better overall solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177105</id>
	<title>Re:Different tools for different jobs</title>
	<author>Anonymous</author>
	<datestamp>1243871460000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>But here's what I do know. FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago.</i></p><p>Really? VMware tools for freebsd have been available for years. You can even run them on openbsd (with freebsd compatibility mode enabled).</p><p>There's even this <a href="http://bsd.slashdot.org/article.pl?sid=04/12/15/1420200" title="slashdot.org" rel="nofollow">slashdot</a> [slashdot.org] story from 2004 about freebsd 4.9 being supported as an esx guest.</p></htmltext>
<tokenext>But here 's what I do know .
FreeBSD has n't been a supported OS on ESX Server until vSphere came out less than two weeks ago.Really ?
VMware tools for freebsd have been available for years .
You can even run them on openbsd ( with freebsd compatibility mode enabled ) .There 's even this slashdot [ slashdot.org ] story from 2004 about freebsd 4.9 being supported as an esx guest .</tokentext>
<sentencetext>But here's what I do know.
FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago.Really?
VMware tools for freebsd have been available for years.
You can even run them on openbsd (with freebsd compatibility mode enabled).There's even this slashdot [slashdot.org] story from 2004 about freebsd 4.9 being supported as an esx guest.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179697</id>
	<title>Re:excellent sales story</title>
	<author>Anonymous</author>
	<datestamp>1243943160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>We are <b>ALL</b> impressed with your <b>USE</b> of <b>BOLD</b> even though you have <b>NO CLUE</b> what you are <b>TALKING ABOUT</b>.</htmltext>
<tokenext>We are ALL impressed with your USE of BOLD even though you have NO CLUE what you are TALKING ABOUT .</tokentext>
<sentencetext>We are ALL impressed with your USE of BOLD even though you have NO CLUE what you are TALKING ABOUT.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177721</id>
	<title>Wrong tool for the job</title>
	<author>FranTaylor</author>
	<datestamp>1243877340000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>You might as well have said,</p><p>"Our earth moving business took a big jump in productivity when we switched from ice-cream scoops to backhoes".</p></htmltext>
<tokenext>You might as well have said , " Our earth moving business took a big jump in productivity when we switched from ice-cream scoops to backhoes " .</tokentext>
<sentencetext>You might as well have said,"Our earth moving business took a big jump in productivity when we switched from ice-cream scoops to backhoes".</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177203</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>syousef</author>
	<datestamp>1243872420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Virtualization DOES make sense, when you're trying to solve the right problem. Do not blame the tool for the incompetence of those using it. It's no good using a screwdriver to shovel dirt and then blaming the screwdriver.</p><p>Virtualization is good for many things:<br>- Low performance apps. Install once, run many copies<br>- Excellent for multiple test environments where tests are not hardware dependant<br>- Infrequently used environments, like dev environments, especially where the alternate solution is to provide physical access to multiple machines<br>- Demos and teaching where multiple operating systems are required<br>- Running small apps that don't run on your OS of choice infrequently</p><p>Virtualization is NOT good for:<br>- High performance applications<br>- Performance test envrionemnts<br>- Removing all dependence on physical hardware<br>- Moving your entire business to</p><p>Your specific concerns:<br><i># Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical<br></i></p><p>Actually this depends on your virtualization solution</p><p><i># TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.</i></p><p>So is hard disk access from multiple virtual operating systems contending for the same disk (unless you're going to have one disk per guest OS...even then are you going through one controller?) Resource contention is a trade-off. If all your systems are going to be running flat out simultaneously virtualization is a bad solution.</p><p><i># A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest</i></p><p>You can often mount the virtual disks in a HOST OS. No different to needing software to access multiple partitions. As long as the software is available, it's not as big an issue.</p><p><i># Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made</i></p><p>A lot of operating systems are becoming virtualization aware, and can be scheduled cooperatively to some degree. That doesn't mean your concern isn't valid, but there is hope that the problems will be reduced. However once again if all your virtual environments are running flat out, you're using virtualization for the wrong thing.</p></htmltext>
<tokenext>Virtualization DOES make sense , when you 're trying to solve the right problem .
Do not blame the tool for the incompetence of those using it .
It 's no good using a screwdriver to shovel dirt and then blaming the screwdriver.Virtualization is good for many things : - Low performance apps .
Install once , run many copies- Excellent for multiple test environments where tests are not hardware dependant- Infrequently used environments , like dev environments , especially where the alternate solution is to provide physical access to multiple machines- Demos and teaching where multiple operating systems are required- Running small apps that do n't run on your OS of choice infrequentlyVirtualization is NOT good for : - High performance applications- Performance test envrionemnts- Removing all dependence on physical hardware- Moving your entire business toYour specific concerns : # Each guest needs its own kernel , so you need to allocate memory and disk space for all these kernels that are in fact identicalActually this depends on your virtualization solution # TLB flushes kill performance .
Recent x86 CPUs address the problem to some degree , but it 's still a problem.So is hard disk access from multiple virtual operating systems contending for the same disk ( unless you 're going to have one disk per guest OS...even then are you going through one controller ?
) Resource contention is a trade-off .
If all your systems are going to be running flat out simultaneously virtualization is a bad solution. # A guest 's filesystem is on a virtual block device , so it 's hard to get at it without running some kind of fileserver on the guestYou can often mount the virtual disks in a HOST OS .
No different to needing software to access multiple partitions .
As long as the software is available , it 's not as big an issue. # Memory management is an absolute clusterfuck .
From the point of view of the host , each guest 's memory is an opaque blob , and from the point of view of the guest , it has the machine to itself .
This mutual myopia renders the usual page-cache algorithms absolutely useless .
Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being madeA lot of operating systems are becoming virtualization aware , and can be scheduled cooperatively to some degree .
That does n't mean your concern is n't valid , but there is hope that the problems will be reduced .
However once again if all your virtual environments are running flat out , you 're using virtualization for the wrong thing .</tokentext>
<sentencetext>Virtualization DOES make sense, when you're trying to solve the right problem.
Do not blame the tool for the incompetence of those using it.
It's no good using a screwdriver to shovel dirt and then blaming the screwdriver.Virtualization is good for many things:- Low performance apps.
Install once, run many copies- Excellent for multiple test environments where tests are not hardware dependant- Infrequently used environments, like dev environments, especially where the alternate solution is to provide physical access to multiple machines- Demos and teaching where multiple operating systems are required- Running small apps that don't run on your OS of choice infrequentlyVirtualization is NOT good for:- High performance applications- Performance test envrionemnts- Removing all dependence on physical hardware- Moving your entire business toYour specific concerns:# Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identicalActually this depends on your virtualization solution# TLB flushes kill performance.
Recent x86 CPUs address the problem to some degree, but it's still a problem.So is hard disk access from multiple virtual operating systems contending for the same disk (unless you're going to have one disk per guest OS...even then are you going through one controller?
) Resource contention is a trade-off.
If all your systems are going to be running flat out simultaneously virtualization is a bad solution.# A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guestYou can often mount the virtual disks in a HOST OS.
No different to needing software to access multiple partitions.
As long as the software is available, it's not as big an issue.# Memory management is an absolute clusterfuck.
From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself.
This mutual myopia renders the usual page-cache algorithms absolutely useless.
Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being madeA lot of operating systems are becoming virtualization aware, and can be scheduled cooperatively to some degree.
That doesn't mean your concern isn't valid, but there is hope that the problems will be reduced.
However once again if all your virtual environments are running flat out, you're using virtualization for the wrong thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180285</id>
	<title>Re:excellent sales story</title>
	<author>Znork</author>
	<datestamp>1243948140000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>I used to use ESX, but the built in virtualization in RHEL does it better these days. ESX performance is nice enough, but paravirt xen tech outperforms it by 3x on some things (scripts, exec, syscall intensive stuff).</p><p>It's also much, much cheaper.</p><p>Then again, I don't run any virtualized Windows, so your mileage may vary.</p></htmltext>
<tokenext>I used to use ESX , but the built in virtualization in RHEL does it better these days .
ESX performance is nice enough , but paravirt xen tech outperforms it by 3x on some things ( scripts , exec , syscall intensive stuff ) .It 's also much , much cheaper.Then again , I do n't run any virtualized Windows , so your mileage may vary .</tokentext>
<sentencetext>I used to use ESX, but the built in virtualization in RHEL does it better these days.
ESX performance is nice enough, but paravirt xen tech outperforms it by 3x on some things (scripts, exec, syscall intensive stuff).It's also much, much cheaper.Then again, I don't run any virtualized Windows, so your mileage may vary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459</parent>
</comment>
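The 3x figure above is the parent's own observation, not something reproduced here, but the kind of syscall-intensive microbenchmark it refers to is simple to sketch: time a tight loop of cheap system calls and compare the rate on bare metal, under ESX, and under a paravirtualized guest. A minimal version (the two-second window is arbitrary):

import os
import time

def stat_calls_per_second(duration=2.0):
    """Rough rate of cheap stat() system calls over a fixed wall-clock window."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        os.stat("/")        # one system call per iteration
        count += 1
    return count / duration

if __name__ == "__main__":
    # Run the same script in each environment and compare the printed rates.
    print(f"~{stat_calls_per_second():,.0f} stat() calls per second")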
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176853</id>
	<title>Solaris Zones also</title>
	<author>Anonymous</author>
	<datestamp>1243869360000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Zones are the same concept, with the same benefit.</p><p>An added advantage Solaris zones have is flavoured zones: Make a Solaris 9 zone on a Solaris 10 host, a Linux zone on a Solaris 10 host and soon a Solaris 10 zone on an OpenSolaris host.</p><p>This has turned out much more stable, easy and simply efficient than our VMware servers, which we now only have for Windows and other random OS's.</p></htmltext>
<tokenext>Zones are the same concept , with the same benefit.An added advantage Solaris zones have is flavoured zones : Make a Solaris 9 zone on a Solaris 10 host , a Linux zone on a Solaris 10 host and soon a Solaris 10 zone on an OpenSolaris host.This has turned out much more stable , easy and simply efficient than our VMware servers , which we now only have for Windows and other random OS 's .</tokentext>
<sentencetext>Zones are the same concept, with the same benefit.An added advantage Solaris zones have is flavoured zones: Make a Solaris 9 zone on a Solaris 10 host, a Linux zone on a Solaris 10 host and soon a Solaris 10 zone on an OpenSolaris host.This has turned out much more stable, easy and simply efficient than our VMware servers, which we now only have for Windows and other random OS's.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176975</id>
	<title>Coral Cache</title>
	<author>Qubit</author>
	<datestamp>1243870200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://www.playingwithwire.com.nyud.net/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/" title="nyud.net">http://www.playingwithwire.com.nyud.net/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/</a> [nyud.net]

Because otherwise it's hosed.</htmltext>
<tokenext>http : //www.playingwithwire.com.nyud.net/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/ [ nyud.net ] Because otherwise it 's hosed .</tokentext>
<sentencetext>http://www.playingwithwire.com.nyud.net/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/ [nyud.net]

Because otherwise it's hosed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075</id>
	<title>Virtualization is good enough</title>
	<author>Gothmolly</author>
	<datestamp>1243871040000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>I work for $LARGE_US_BANK in the performance and capacity management group, and we constantly see the business side of the house buy servers that end up running at 10-15% utilization.  Why?  Lots of reasons - the vendor said so, they want "redundancy", they want "failover" and they want "to make sure there's enough".  Given the load, if you lose 10-20% overhead due to VM, who cares?</p></htmltext>
<tokenext>I work for $ LARGE \ _US \ _BANK in the performance and capacity management group , and we constantly see the business side of the house buy servers that end up running at 10-15 \ % utilization .
Why ? Lots of reasons - the vendor said so , they want " redundancy " , they want " failover " and they want " to make sure there 's enough " .
Given the load , if you lose 10-20 \ % overhead due to VM , who cares ?</tokentext>
<sentencetext>I work for $LARGE\_US\_BANK in the performance and capacity management group, and we constantly see the business side of the house buy servers that end up running at 10-15\% utilization.
Why?  Lots of reasons - the vendor said so, they want "redundancy", they want "failover" and they want "to make sure there's enough".
Given the load, if you lose 10-20\% overhead due to VM, who cares ?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28193139</id>
	<title>Re:free beats fee most of the time</title>
	<author>obscuro</author>
	<datestamp>1244022720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>But Why not just say so immediately?! Most people won't bother listening to what you have to say if they need too use a search engine to figure out key pieces of information just to understand the context of your words!</p></div><p>See top search result: <a href="http://www.google.com/search?rlz=1C1CHMC_enUS291US305&amp;sourceid=chrome&amp;ie=UTF-8&amp;q=ltsp" title="google.com" rel="nofollow">http://www.google.com/search?rlz=1C1CHMC_enUS291US305&amp;sourceid=chrome&amp;ie=UTF-8&amp;q=ltsp</a> [google.com] </p><p>Slashdot would probably begin to suck if people followed this kind of full service philosophy on every post. The posts would be longer, there would be information presented within the posts that a large number of users already know (or could find out with less than 5 SECONDS of effort). I, for one, don't read slashdot for some newsy eye massage with release.</p><p>BTW - Given the readership numbers for slashdot, I think we've proven that most people HERE do listen even though we sometimes need a search engine. I also have a dictionary on my shelf....</p>
	</htmltext>
<tokenext>But Why not just say so immediately ? !
Most people wo n't bother listening to what you have to say if they need too use a search engine to figure out key pieces of information just to understand the context of your words ! See top search result : http : //www.google.com/search ? rlz = 1C1CHMC \ _enUS291US305&amp;sourceid = chrome&amp;ie = UTF-8&amp;q = ltsp [ google.com ] Slashdot would probably begin to suck if people followed this kind of full service philosophy on every post .
The posts would be longer , there would be information presented within the posts that a large number of users already know ( or could find out with less than 5 SECONDS of effort ) .
I , for one , do n't read slashdot for some newsy eye massage with release.BTW - Given the readership numbers for slashdot , I think we 've proven that most people HERE do listen even though we sometimes need a search engine .
I also have a dictionary on my shelf... .</tokentext>
<sentencetext>But Why not just say so immediately?!
Most people won't bother listening to what you have to say if they need too use a search engine to figure out key pieces of information just to understand the context of your words!See top search result: http://www.google.com/search?rlz=1C1CHMC\_enUS291US305&amp;sourceid=chrome&amp;ie=UTF-8&amp;q=ltsp [google.com] Slashdot would probably begin to suck if people followed this kind of full service philosophy on every post.
The posts would be longer, there would be information presented within the posts that a large number of users already know (or could find out with less than 5 SECONDS of effort).
I, for one, don't read slashdot for some newsy eye massage with release.BTW - Given the readership numbers for slashdot, I think we've proven that most people HERE do listen even though we sometimes need a search engine.
I also have a dictionary on my shelf....
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28184437</id>
	<title>Re:excellent sales story</title>
	<author>Omega996</author>
	<datestamp>1243966020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>ugh, there sure are a lot of people throwing out 'use ESXi - it's free!'. ESXi only runs on certain hardware, so if you don't have that hardware, it's not even a valid choice. Real management of ESXi is not particularly wonderful without using VMWare's management software, and that's not free, as far as I know.<br>I agree that performance-wise, it's a better choice than VMWServer. But I don't think the entry point for ESXi is as low as xen or Citrix's XenServer. XS runs on a wider range of hardware than ESXi, and the 'basic' management tools are pretty good, and also free.<br><br>I agree with everything else you mention, though, so don't think I'm trolling your post. I read TFA, and wow... where to start?</htmltext>
<tokenext>ugh , there sure are a lot of people throwing out 'use ESXi - it 's free ! ' .
ESXi only runs on certain hardware , so if you do n't have that hardware , it 's not even a valid choice .
Real management of ESXi is not particularly wonderful without using VMWare 's management software , and that 's not free , as far as I know.I agree that performance-wise , it 's a better choice than VMWServer .
But I do n't think the entry point for ESXi is as low as xen or Citrix 's XenServer .
XS runs on a wider range of hardware than ESXi , and the 'basic ' management tools are pretty good , and also free.I agree with everything else you mention , though , so do n't think I 'm trolling your post .
I read TFA , and wow... where to start ?</tokentext>
<sentencetext>ugh, there sure are a lot of people throwing out 'use ESXi - it's free!'.
ESXi only runs on certain hardware, so if you don't have that hardware, it's not even a valid choice.
Real management of ESXi is not particularly wonderful without using VMWare's management software, and that's not free, as far as I know.I agree that performance-wise, it's a better choice than VMWServer.
But I don't think the entry point for ESXi is as low as xen or Citrix's XenServer.
XS runs on a wider range of hardware than ESXi, and the 'basic' management tools are pretty good, and also free.I agree with everything else you mention, though, so don't think I'm trolling your post.
I read TFA, and wow... where to start?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177485</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177559</id>
	<title>Re:XenServer worked for us</title>
	<author>coffee_bouzu</author>
	<datestamp>1243875600000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>
Comparing XenServer and VMware Server is like comparing apples and oranges. While VMware Server is impressive, it is very much like an emulator: It runs on top of another operating system and has to work harder to execute privileged commands. VMware ESX is a bare-metal hypervisor that is better optimized to do virtualization. While it is still doing "emulation", It is a much better comparison to XenServer than VMware Server is.
<br> <br>
TFA is slashdotted at the moment, so I don't know if VMware Server or ESX is being compared. Either way, the advantage of virtualization is not performance, it is flexibility. The raw performance may be less, but it gives you the ability to do things that just aren't possible with a physical machine. The ability to hot migrate from one physical machine to another in the event of hardware failure or replacement and the ability to have entire "machines" dedicated to single purposes without needing an equal number of physical machines are, at best, more difficult if not impossible when not using virtualization.
<br> <br>
Don't get me wrong, I'm no VMware fanboy. It certainly has its rough edges and is certainly not perfect. However, virtualization as a technology has undeniable benefits in certain situations. Absolute performance just isn't one of them right now.</htmltext>
<tokenext>Comparing XenServer and VMware Server is like comparing apples and oranges .
While VMware Server is impressive , it is very much like an emulator : It runs on top of another operating system and has to work harder to execute privileged commands .
VMware ESX is a bare-metal hypervisor that is better optimized to do virtualization .
While it is still doing " emulation " , It is a much better comparison to XenServer than VMware Server is .
TFA is slashdotted at the moment , so I do n't know if VMware Server or ESX is being compared .
Either way , the advantage of virtualization is not performance , it is flexibility .
The raw performance may be less , but it gives you the ability to do things that just are n't possible with a physical machine .
The ability to hot migrate from one physical machine to another in the event of hardware failure or replacement and the ability to have entire " machines " dedicated to single purposes without needing an equal number of physical machines are , at best , more difficult if not impossible when not using virtualization .
Do n't get me wrong , I 'm no VMware fanboy .
It certainly has its rough edges and is certainly not perfect .
However , virtualization as a technology has undeniable benefits in certain situations .
Absolute performance just is n't one of them right now .</tokentext>
<sentencetext>
Comparing XenServer and VMware Server is like comparing apples and oranges.
While VMware Server is impressive, it is very much like an emulator: It runs on top of another operating system and has to work harder to execute privileged commands.
VMware ESX is a bare-metal hypervisor that is better optimized to do virtualization.
While it is still doing "emulation", It is a much better comparison to XenServer than VMware Server is.
TFA is slashdotted at the moment, so I don't know if VMware Server or ESX is being compared.
Either way, the advantage of virtualization is not performance, it is flexibility.
The raw performance may be less, but it gives you the ability to do things that just aren't possible with a physical machine.
The ability to hot migrate from one physical machine to another in the event of hardware failure or replacement and the ability to have entire "machines" dedicated to single purposes without needing an equal number of physical machines are, at best, more difficult if not impossible when not using virtualization.
Don't get me wrong, I'm no VMware fanboy.
It certainly has its rough edges and is certainly not perfect.
However, virtualization as a technology has undeniable benefits in certain situations.
Absolute performance just isn't one of them right now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178895</id>
	<title>SQLite? Huh?</title>
	<author>Jacques Chester</author>
	<datestamp>1243933740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>As much as I like and admire SQLite, I'm not sure if it's the right tool for the job. Something like PostgreSQL, with proper MVCC and nice multicore scaling, would probably have worked a lot better in the first place.</p></htmltext>
<tokenext>As much as I like and admire SQLite , I 'm not sure if it 's the right tool for the job .
Something like PostgreSQL , with proper MVCC and nice multicore scaling , would probably have worked a lot better in the first place .</tokentext>
<sentencetext>As much as I like and admire SQLite, I'm not sure if it's the right tool for the job.
Something like PostgreSQL, with proper MVCC and nice multicore scaling, would probably have worked a lot better in the first place.</sentencetext>
</comment>
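For readers wondering what "proper MVCC" buys in practice, the sketch below illustrates the write contention the comment alludes to (the file path and table are invented, and this says nothing about YippieMove's actual workload): SQLite allows only one writer at a time, so a second writer whose busy timeout expires while a slow transaction holds the lock fails with "database is locked", where an MVCC server database would let both proceed.

import sqlite3
import threading
import time

DB_PATH = "/tmp/write_contention_demo.db"   # throwaway file, name invented

def writer(name, hold_seconds):
    # isolation_level=None: we manage the transaction ourselves.
    con = sqlite3.connect(DB_PATH, timeout=0.5, isolation_level=None)
    try:
        con.execute("BEGIN IMMEDIATE")        # take SQLite's single write lock
        con.execute("INSERT INTO log (who) VALUES (?)", (name,))
        time.sleep(hold_seconds)              # simulate a slow transaction
        con.execute("COMMIT")
        print(f"{name}: committed")
    except sqlite3.OperationalError as exc:   # usually "database is locked"
        print(f"{name}: {exc}")
    finally:
        con.close()

setup = sqlite3.connect(DB_PATH)
setup.execute("CREATE TABLE IF NOT EXISTS log (who TEXT)")
setup.commit()
setup.close()

slow = threading.Thread(target=writer, args=("slow-writer", 2.0))
fast = threading.Thread(target=writer, args=("second-writer", 0.0))
slow.start()
time.sleep(0.2)       # let the slow writer grab the lock first
fast.start()
slow.join()
fast.join()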
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177891</id>
	<title>Re:excellent sales story</title>
	<author>mysidia</author>
	<datestamp>1243879500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
It's a lot harder and not particularly advisable, but it's completely possible to manage a standalone host without any Windows machine.</p><p>
It just happens the most useful management tools that ship with the product are for Windows.
</p><p>
You can use a command line interface to start/stop VMs, set up datastores, create a VM, etc.
</p><p>
As for gaining access to the console, if you drop the right lines into the .VMX file before starting the VM, you can direct the host to accept a VNC connection on a port you designate, to access the VM console.
</p><p>
But time is precious, and in most enterprises it's most cost-effective to just round up a couple of Windows workstations and install the VI client on them to manage the VM hosts.
</p></htmltext>
<tokenext>It 's a lot harder and not particularly advisable , but it 's completely possible to manage a standalone host without any Windows machine .
It just happens the most useful management tools that ship with the product are for Windows .
You can use a command line interface to start/stop VMS , setup datastores , etc , create a VM .
As for gaining access to the console , if you drop the right lines into the .VMX file before starting the VM , you can direct the host to accept a VNC connection on a port you designate , to access the VM console .
But time is precious , and in most enterprises it 's most cost-effective to just round up a couple of Windows workstations and install the VI client on them to manage the VM hosts .</tokentext>
<sentencetext>
It's a lot harder and not particularly advisable, but it's completely possible to manage a standalone host without any Windows machine.
It just happens the most useful management tools that ship with the product are for Windows.
You can use a command line interface to start/stop VMS, setup datastores, etc, create a VM.
As for gaining access to the console, if you drop the right lines into the .VMX file  before starting the VM,  you can direct the host to accept a VNC connection on a port you designate, to access the VM console.
But time is precious, and in most enterprises it's most cost-effective to just round up a couple of Windows workstations and install the VI client on them to manage the VM hosts.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715</parent>
</comment>
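The parent doesn't spell out which lines those are, and the exact option names vary by release, so treat the keys below as an assumption to verify against the documentation for your ESX version; RemoteDisplay.vnc.enabled/port/password are the ones commonly cited for that era. A small Python sketch that appends them to a guest's configuration file (the datastore path is invented):

from pathlib import Path

# Hypothetical guest configuration file -- adjust to your datastore layout.
vmx_path = Path("/vmfs/volumes/datastore1/guest01/guest01.vmx")

# Commonly cited VNC console options for older ESX releases; the exact key
# names are an assumption here, so confirm them for your version first.
vnc_settings = {
    "RemoteDisplay.vnc.enabled": "TRUE",
    "RemoteDisplay.vnc.port": "5901",
    "RemoteDisplay.vnc.password": "change-me",
}

with vmx_path.open("a") as vmx:
    for key, value in vnc_settings.items():
        vmx.write(f'{key} = "{value}"\n')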
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176889</id>
	<title>Is this a surprise?</title>
	<author>Anonymous</author>
	<datestamp>1243869600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Amazing! Not running several additional copies of an operating system with all of the needless overhead involved is faster! Who would have guessed?</p><p>Sometimes a virtual machine is far more "solution" than you need. If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management. Jails are just one example - I find Solaris Containers to be much more elegant. Of course, then you have to be running Solaris...</p></htmltext>
<tokenext>Amazing !
Not running several additional copies of an operating system with all of the needless overhead involved is faster !
Who would have guessed ? Sometimes a virtual machine is far more " solution " than you need .
If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management .
Jails are just one example - I find Solaris Containers to be much more elegant .
Of course , then you have to be running Solaris.. .</tokentext>
<sentencetext>Amazing!
Not running several additional copies of an operating system with all of the needless overhead involved is faster!
Who would have guessed?Sometimes a virtual machine is far more "solution" than you need.
If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management.
Jails are just one example - I find Solaris Containers to be much more elegant.
Of course, then you have to be running Solaris...</sentencetext>
</comment>
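As one small illustration of "run a single copy of the OS and implement some resource management" at the very lightweight end (well short of jails or Containers), POSIX rlimits can cap an individual worker process. The limits and the worker command below are invented for illustration; real setups would lean on jails, zones, or similar.

import resource
import subprocess
import sys

MEM_BYTES = 256 * 1024 * 1024   # cap the worker's address space at 256 MB
CPU_SECONDS = 30                # cap its CPU time at 30 seconds

def apply_limits():
    resource.setrlimit(resource.RLIMIT_AS, (MEM_BYTES, MEM_BYTES))
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))

if __name__ == "__main__":
    # Launch a worker (here just another Python process) under those limits.
    subprocess.run(
        [sys.executable, "-c", "print('hello from a resource-limited worker')"],
        preexec_fn=apply_limits,
    )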
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178573</id>
	<title>BSD is not dead</title>
	<author>commodoresloat</author>
	<datestamp>1243973700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's just in jail.</p></htmltext>
<tokenext>It 's just in jail .</tokentext>
<sentencetext>It's just in jail.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176625</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459</id>
	<title>Re:excellent sales story</title>
	<author>aarggh</author>
	<datestamp>1243874580000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>In my opinion it always comes down to the fact that shelling out some money for a good product always beats trying to stuff around with a "free" one that's hard to configure and maintain. I run 4 ESX farms, and have NO problem rolling out virtually any type of server from Oracle/RHEL, to Win2k3/2k8, and everything inbetween. I simply make sure I allocate enough resources, and NEVER over commit. I did a cost analysis ages back trying to convince management we needed to go down the virtualisation path to guarantee business continuity.</p><p>In the end it took the failure of our most critical CRM server crashing and me importing an Acronis backup of it into ESX that convinced them beyond a shadow of a doubt.</p><p>I would say to anyone, something for $15-20K that gives:</p><p>Fault-tolerance<br>Fail-over<br>Easy server roll-outs<br>Simple network re-configuration<br>Almost instant recoverability of machines</p><p>Is more than worth the cost! The true cost of NOT doing it can be the end of a business, or as I have seen, several days of data/productivity lost!</p><p>Performance issues? Reliability issues? I have none at all, the only times i've had issues are poorly developed<nobr> <wbr></nobr>.NET apps, IIS, etc, which I then dump the stats and give them to the developers to get them to clean up their own code. And more than once I've had to restore an entire server because someones scripts deleted or screwed entire data structures, and in a case like that, being able to restore a 120GB virtual in around 30mins from the comfort of my desk or home really beats locating tapes, cataloging them, restoring, etc, etc.</p><p>I have Fibre SAN's (with a mix of F/C, SAS, and SATA disks) and switches, so the SAN just shrugs off any attempt to I/O bind it! The only limitation I can think of is the 4 virtual NIC's, it would be good for some of our products to be able to provide a much higher number.</p><p>No comparison in my opinion.</p></htmltext>
<tokenext>In my opinion it always comes down to the fact that shelling out some money for a good product always beats trying to stuff around with a " free " one that 's hard to configure and maintain .
I run 4 ESX farms , and have NO problem rolling out virtually any type of server from Oracle/RHEL , to Win2k3/2k8 , and everything inbetween .
I simply make sure I allocate enough resources , and NEVER over commit .
I did a cost analysis ages back trying to convince management we needed to go down the virtualisation path to guarantee business continuity .
In the end it took the failure of our most critical CRM server crashing and me importing an Acronis backup of it into ESX that convinced them beyond a shadow of a doubt .
I would say to anyone , something for $ 15-20K that gives : Fault-tolerance , Fail-over , Easy server roll-outs , Simple network re-configuration , Almost instant recoverability of machines , is more than worth the cost !
The true cost of NOT doing it can be the end of a business , or as I have seen , several days of data/productivity lost !
Performance issues ?
Reliability issues ?
I have none at all , the only times i 've had issues are poorly developed .NET apps , IIS , etc , which I then dump the stats and give them to the developers to get them to clean up their own code .
And more than once I 've had to restore an entire server because someone 's scripts deleted or screwed entire data structures , and in a case like that , being able to restore a 120GB virtual in around 30mins from the comfort of my desk or home really beats locating tapes , cataloging them , restoring , etc , etc .
I have Fibre SAN 's ( with a mix of F/C , SAS , and SATA disks ) and switches , so the SAN just shrugs off any attempt to I/O bind it !
The only limitation I can think of is the 4 virtual NIC 's , it would be good for some of our products to be able to provide a much higher number .
No comparison in my opinion .</tokentext>
<sentencetext>In my opinion it always comes down to the fact that shelling out some money for a good product always beats trying to stuff around with a "free" one that's hard to configure and maintain.
I run 4 ESX farms, and have NO problem rolling out virtually any type of server from Oracle/RHEL, to Win2k3/2k8, and everything inbetween.
I simply make sure I allocate enough resources, and NEVER over commit.
I did a cost analysis ages back trying to convince management we needed to go down the virtualisation path to guarantee business continuity.
In the end it took the failure of our most critical CRM server crashing and me importing an Acronis backup of it into ESX that convinced them beyond a shadow of a doubt.
I would say to anyone, something for $15-20K that gives: Fault-tolerance, Fail-over, Easy server roll-outs, Simple network re-configuration, Almost instant recoverability of machines, is more than worth the cost!
The true cost of NOT doing it can be the end of a business, or as I have seen, several days of data/productivity lost!
Performance issues?
Reliability issues?
I have none at all, the only times i've had issues are poorly developed .NET apps, IIS, etc, which I then dump the stats and give them to the developers to get them to clean up their own code.
And more than once I've had to restore an entire server because someone's scripts deleted or screwed entire data structures, and in a case like that, being able to restore a 120GB virtual in around 30mins from the comfort of my desk or home really beats locating tapes, cataloging them, restoring, etc, etc.
I have Fibre SAN's (with a mix of F/C, SAS, and SATA disks) and switches, so the SAN just shrugs off any attempt to I/O bind it!
The only limitation I can think of is the 4 virtual NIC's, it would be good for some of our products to be able to provide a much higher number.
No comparison in my opinion.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182797</id>
	<title>Re:free beats fee most of the time</title>
	<author>ccady</author>
	<datestamp>1243959300000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>When I see a term or acronym on Slashdot that I don't know about, I go look up the item and learn something.  I am often glad that I do.  Except that goatse thing.</htmltext>
<tokenext>When I see a term or acronym on Slashdot that I do n't know about , I go look up the item and learn something .
I am often glad that I do .
Except that goatse thing .</tokentext>
<sentencetext>When I see a term or acronym on Slashdot that I don't know about, I go look up the item and learn something.
I am often glad that I do.
Except that goatse thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181255</id>
	<title>Re:excellent sales story</title>
	<author>awpoopy</author>
	<datestamp>1243953780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I was not aware of that. Thank you for the post.<br>I'll give it a try. It is too bad that it requires Oracle, however there's some hope, a little light at the end of the tunnel at least.</htmltext>
<tokenext>I was not aware of that .
Thank you for the post .
I 'll give it a try .
It is too bad that it requires Oracle , however there 's some hope , a little light at the end of the tunnel at least .</tokentext>
<sentencetext>I was not aware of that.
Thank you for the post.
I'll give it a try.
It is too bad that it requires Oracle, however there's some hope, a little light at the end of the tunnel at least.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178103</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28187939</id>
	<title>Re:What's the diff between jail and zone?</title>
	<author>jra</author>
	<datestamp>1243937460000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>They sound a fair amount like what I understand OpenVZ to be about as well; does the comparison hold there, too?</p></htmltext>
<tokenext>They sound a fair amount like what I understand OpenVZ to be about as well ; does the comparison hold there , too ?</tokentext>
<sentencetext>They sound a fair amount like what I understand OpenVZ to be about as well; does the comparison hold there, too?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176797</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182181</id>
	<title>Re:Different tools for different jobs</title>
	<author>Just Some Guy</author>
	<datestamp>1243957260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>If you're a company that's trying to do web hosting, or run lots of very very similar systems that do the same, performance-centric task, then yes! OS Virtualization is for you! If you're like 95% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided from a real system-level virtualization platform, use those.</p></div><p>If only it weren't mathematically impossible to mix technologies in the datacenter so that you could run jails <em>and</em> VMware in the same building and divide tasks amongst them as appropriate, but alas.</p></p>
	</htmltext>
<tokenext>If you 're a company that 's trying to do web hosting , or run lots of very very similar systems that do the same , performance-centric task , then yes !
OS Virtualization is for you !
If you 're like 95 % of datacenters out there that have mixed workloads , mixed OS versions , and require deep features that are provided from a real system-level virtualization platform , use those .
If only it were n't mathematically impossible to mix technologies in the datacenter so that you could run jails and VMware in the same building and divide tasks amongst them as appropriate , but alas .</tokentext>
<sentencetext>If you're a company that's trying to do web hosting, or run lots of very very similar systems that do the same, performance-centric task, then yes!
OS Virtualization is for you!
If you're like 95% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided from a real system-level virtualization platform, use those.
If only it weren't mathematically impossible to mix technologies in the datacenter so that you could run jails and VMware in the same building and divide tasks amongst them as appropriate, but alas.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182249</id>
	<title>Solaris</title>
	<author>JAlexoi</author>
	<datestamp>1243957500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In their place I would check out the Solaris equivalent of jails. Solaris Zones look really good.</htmltext>
<tokenext>In their place I would check out the Solaris equivalent of jails .
Solaris Zones look really good .</tokentext>
<sentencetext>In their place I would check out the Solaris equivalent of jails.
Solaris Zones look really good.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755</id>
	<title>Sounds about right</title>
	<author>Just Some Guy</author>
	<datestamp>1243868700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>We use jails a lot at my work.  We have a few pretty beefy "jail servers", and use FreeBSD's <a href="http://erdgeist.org/arts/software/ezjail/" title="erdgeist.org">ezjail</a> [erdgeist.org] port to manage as many instances as we need.  Need a new spamfilter, say?  <tt>sudo ezjail-admin create spam1.example.com 192.168.0.5</tt> and wait for 3 seconds while it creates a brand new empty system.  It uses FreeBSD's "nullfs" filesystem to mount a partially populated base system read-only, so your actual jail directory only contains the files that you'd install on top of a new system.  This saves drive space, makes it trivially easy to upgrade the OS image on all jails at once (<tt>sudo ezjail-admin update -i</tt>), and saves RAM because each jail shares the same copy of all the base system's shared libraries.</p><p>For extra fun, park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades.  Want to migrate a jail onto a different server?  Use <tt>zfs send</tt> and <tt>zfs receive</tt> to move the jail directory onto the other machine and start it.</p><p>The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6, and each jail can have its own routing table.  FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly.  You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host.  Imagine one machine running 20 services, all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty.</p><p>Jails might not be the solution to <em>every</em> problem (you can't virtualize Windows this way, although quite a few Linux distros should run perfectly), but it's astoundingly good at the problems it <em>does</em> address.  Now that I'm thoroughly spoiled, I'd never want to virtualize Unix any other way.</p></htmltext>
<tokenext>We use jails a lot at my work .
We have a few pretty beefy " jail servers " , and use FreeBSD 's ezjail [ erdgeist.org ] port to manage as many instances as we need .
Need a new spamfilter , say ?
sudo ezjail-admin create spam1.example.com 192.168.0.5 and wait for 3 seconds while it creates a brand new empty system .
It uses FreeBSD 's " nullfs " filesystem to mount a partially populated base system read-only , so your actual jail directory only contains the files that you 'd install on top of a new system .
This saves drive space , makes it trivially easy to upgrade the OS image on all jails at once ( sudo ezjail-admin update -i ) , and saves RAM because each jail shares the same copy of all the base system 's shared libraries .
For extra fun , park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades .
Want to migrate a jail onto a different server ?
Use zfs send and zfs receive to move the jail directory onto the other machine and start it .
The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6 , and each jail can have its own routing table .
FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly .
You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host .
Imagine one machine running 20 services , all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty .
Jails might not be the solution to every problem ( you ca n't virtualize Windows this way , although quite a few Linux distros should run perfectly ) , but it 's astoundingly good at the problems it does address .
Now that I 'm thoroughly spoiled , I 'd never want to virtualize Unix any other way .</tokentext>
<sentencetext>We use jails a lot at my work.
We have a few pretty beefy "jail servers", and use FreeBSD's ezjail [erdgeist.org] port to manage as many instances as we need.
Need a new spamfilter, say?
sudo ezjail-admin create spam1.example.com 192.168.0.5 and wait for 3 seconds while it creates a brand new empty system.
It uses FreeBSD's "nullfs" filesystem to mount a partially populated base system read-only, so your actual jail directory only contains the files that you'd install on top of a new system.
This saves drive space, makes it trivially easy to upgrade the OS image on all jails at once (sudo ezjail-admin update -i), and saves RAM because each jail shares the same copy of all the base system's shared libraries.
For extra fun, park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades.
Want to migrate a jail onto a different server?
Use zfs send and zfs receive to move the jail directory onto the other machine and start it.
The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6, and each jail can have its own routing table.
FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly.
You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host.
Imagine one machine running 20 services, all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty.
Jails might not be the solution to every problem (you can't virtualize Windows this way, although quite a few Linux distros should run perfectly), but it's astoundingly good at the problems it does address.
Now that I'm thoroughly spoiled, I'd never want to virtualize Unix any other way.</sentencetext>
</comment>
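For anyone who wants to try the workflow described in the comment above, here is a minimal sketch of the ezjail-plus-ZFS combination. The jail name and IP come from the comment itself; the ZFS pool/dataset layout (tank/jails/...) and the remote host name are hypothetical, and it assumes each jail's files live on their own ZFS dataset as the commenter suggests.

    sudo ezjail-admin install                        # populate the shared read-only basejail (run once)
    sudo ezjail-admin create spam1.example.com 192.168.0.5
    sudo ezjail-admin start spam1.example.com
    sudo ezjail-admin update -i                      # refresh the base system image shared by all jails
    # Before a major upgrade, snapshot the jail's (hypothetical) dataset, then replicate it to another host:
    zfs snapshot tank/jails/spam1.example.com@pre-upgrade
    zfs send tank/jails/spam1.example.com@pre-upgrade | ssh otherhost zfs receive tank/jails/spam1.example.com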
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28187309</id>
	<title>Re:Virtualization doesn't make sense</title>
	<author>Anonymous</author>
	<datestamp>1243935120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p> Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.</p></div><p>KY Jelly by chance?</p></p>
	</htmltext>
<tokenext>Not only are we reinventing the wheel , but we 're reinventing a square one covered in jelly .
KY Jelly by chance ?</tokentext>
<sentencetext> Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.KY Jelly by chance?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28186529</id>
	<title>Re:free beats fee most of the time</title>
	<author>Anonymous</author>
	<datestamp>1243974900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Would you like us to explain what the VM in VMware is also?? I see that SQLite was mentioned in the article too. Would you like that spelled out also? How about what BSD stands for? We wouldn't want you to get confused or something.</p><p>AFAIK (As Far As I Know), this site is primarily focused on people at the administrative level of things (Hence the name of the site). If you want to hang out with the big boys, then bring a dictionary. If you want to be a part of the geek/nerd environment then get used to it. We throw Acronyms like WWE wrestlers throw matches.</p></htmltext>
<tokenext>Would you like us to explain what the VM in VMware is also ? ?
I see that SQLite was mentioned in the article too .
Would you like that spelled out also ?
How about what BSD stands for ?
We would n't want you to get confused or something .
AFAIK ( As Far As I Know ) , this site is primarily focused on people at the administrative level of things ( Hence the name of the site ) .
If you want to hang out with the big boys , then bring a dictionary .
If you want to be a part of the geek/nerd environment then get used to it .
We throw Acronyms like WWE wrestlers throw matches .</tokentext>
<sentencetext>Would you like us to explain what the VM in VMware is also??
I see that SQLite was mentioned in the article too.
Would you like that spelled out also?
How about what BSD stands for?
We wouldn't want you to get confused or something.
AFAIK (As Far As I Know), this site is primarily focused on people at the administrative level of things (Hence the name of the site).
If you want to hang out with the big boys, then bring a dictionary.
If you want to be a part of the geek/nerd environment then get used to it.
We throw Acronyms like WWE wrestlers throw matches.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731</id>
	<title>XenServer worked for us</title>
	<author>gbr</author>
	<datestamp>1243868580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>We had performance issues with VMWare Server as well, especially in the disk I/O area.  Converting to XenServer from Citrix solved the issues for us.  We have great speed, can virtualize other OS's, and management is significantly better.</p></htmltext>
<tokenext>We had performance issues with VMWare Server as well , especially in the disk I/O area .
Converting to XenServer from Citrix solved the issues for us .
We have great speed , can virtualize other OS 's , and management is significantly better .</tokentext>
<sentencetext>We had performance issues with VMWare Server as well, especially in the disk I/O area.
Converting to XenServer from Citrix solved the issues for us.
We have great speed, can virtualize other OS's, and management is significantly better.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176625
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178573
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177559
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176709
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176949
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179301
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179697
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177857
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177311
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182181
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181079
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180821
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_67</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178763
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177439
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177543
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178169
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177033
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177673
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_64</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176937
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177223
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177283
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177891
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176971
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177705
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177393
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177203
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178173
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179767
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179183
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_65</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179249
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182341
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178103
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181255
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179821
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177105
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28193139
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177927
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180285
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28188467
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_62</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177305
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28184285
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181925
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177373
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176997
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176969
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28192993
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28186529
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_68</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176889
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177003
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177895
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177485
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28184437
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178059
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177169
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178271
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176797
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28187939
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_69</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177831
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177157
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177485
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28183115
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176957
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178843
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_66</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182797
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28200643
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181413
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177339
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176709
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176923
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177931
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177915
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_63</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179073
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177513
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28189249
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28198741
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178143
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179383
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180215
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176825
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28202077
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177727
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180691
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28187309
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_02_0043258_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177001
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176677
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176799
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178909
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179767
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179301
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28186529
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28193139
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182797
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179073
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177033
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177047
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177543
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178059
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177485
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28184437
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28183115
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177459
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177895
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181413
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180285
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28188467
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28189249
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177513
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28184285
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176905
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28192993
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177283
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181079
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177251
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28198741
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178169
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177727
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177715
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178103
-----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181255
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179697
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177891
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179821
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179183
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176937
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177223
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176709
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176923
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176949
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180563
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176889
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177003
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176975
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177157
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177351
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176797
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28187939
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176675
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176853
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177381
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176945
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177439
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177339
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28181925
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178763
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177915
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177927
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180215
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177305
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177673
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178143
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177203
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178173
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28187309
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178271
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182401
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176731
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176997
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177931
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176969
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177311
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177559
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176957
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178843
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178629
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177375
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176825
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28202077
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182911
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176755
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177001
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177679
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177857
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28200643
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179383
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177169
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176971
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177705
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28179249
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182341
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176625
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28178573
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28192917
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177075
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180821
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177373
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177831
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177393
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_02_0043258.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28176913
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28177105
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28180691
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_02_0043258.28182181
</commentlist>
</conversation>
