<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_21_2234216</id>
	<title>Best Practices For Infrastructure Upgrade?</title>
	<author>timothy</author>
	<datestamp>1258800600000</datestamp>
	<htmltext>An anonymous reader writes <i>"I was put in charge of an aging IT infrastructure that needs a serious overhaul. Current services include the usual suspects, i.e. www, ftp, email, dns, firewall, DHCP — and some more. In most cases, each service runs on its own hardware, some of them for the last seven years straight. The machines still can (mostly) handle the load that ~150 people in multiple offices put on them, but there's hardly any fallback if any of the services die or an office is disconnected. Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same). Services running on virtualized servers hosted by a single reasonably-sized machine per office (plus one for testing and a spare) seem to recommend themselves. What's your experience with virtualization of services and implementing fallback/load-balancing schemes? What's Best Practice for an update like this? I'm interested in your success stories and anecdotes, but also pointers and (book) references. Thanks!"</i></htmltext>
<tokentext>An anonymous reader writes " I was put in charge of an aging IT infrastructure that needs a serious overhaul .
Current services include the usual suspects , i.e .
www , ftp , email , dns , firewall , DHCP    and some more .
In most cases , each service runs on its own hardware , some of them for the last seven years straight .
The machines still can ( mostly ) handle the load that ~ 150 people in multiple offices put on them , but there 's hardly any fallback if any of the services die or an office is disconnected .
Now , as the hardware must be replaced , I 'd like to buff things up a bit : distributed instances of services ( at least one instance per office ) and a fallback/load-balancing scheme ( either to an instance in another office or a duplicated one within the same ) .
Services running on virtualized servers hosted by a single reasonably-sized machine per office ( plus one for testing and a spare ) seem to recommend themselves .
What 's your experience with virtualization of services and implementing fallback/load-balancing schemes ?
What 's Best Practice for an update like this ?
I 'm interested in your success stories and anecdotes , but also pointers and ( book ) references .
Thanks ! "</tokentext>
<sentencetext>An anonymous reader writes "I was put in charge of an aging IT infrastructure that needs a serious overhaul.
Current services include the usual suspects, i.e.
www, ftp, email, dns, firewall, DHCP — and some more.
In most cases, each service runs on its own hardware, some of them for the last seven years straight.
The machines still can (mostly) handle the load that ~150 people in multiple offices put on them, but there's hardly any fallback if any of the services die or an office is disconnected.
Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same).
Services running on virtualized servers hosted by a single reasonably-sized machine per office (plus one for testing and a spare) seem to recommend themselves.
What's your experience with virtualization of services and implementing fallback/load-balancing schemes?
What's Best Practice for an update like this?
I'm interested in your success stories and anecdotes, but also pointers and (book) references.
Thanks!"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190286</id>
	<title>Linux Vserver</title>
	<author>patrick_leb</author>
	<datestamp>1258816380000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>Here's how we do it:</p><p>- Run your services in a few vservers on the same physical server:<br>
&nbsp; &nbsp; * DNS + DHCP<br>
&nbsp; &nbsp; * mail<br>
&nbsp; &nbsp; * ftp<br>
&nbsp; &nbsp; * www<br>- Have a backup server where your stuff is rsynced daily. This allows for quick restores in case of disaster.</p><p>Vservers are great because they isolate you from the hardware. Server becomes too small? Buy another one, move your vservers to it and you're done. Need to upgrade a service? Copy the vserver, upgrade, test, swap it with the old one when you are set. It's a great advantage to be able to move stuff easily from one box to another.</p></htmltext>
<tokentext>Here 's how we do it : - Run your services in a few vservers on the same physical server :     * DNS + DHCP     * mail     * ftp     * www - Have a backup server where your stuff is rsynced daily .
This allows for quick restores in case of disaster .
Vservers are great because they isolate you from the hardware .
Server becomes too small ?
Buy another one , move your vservers to it and you 're done .
Need to upgrade a service ?
Copy the vserver , upgrade , test , swap it with the old one when you are set .
It 's a great advantage to be able to move stuff easily from one box to another .</tokentext>
<sentencetext>Here's how we do it:
- Run your services in a few vservers on the same physical server:
    * DNS + DHCP
    * mail
    * ftp
    * www
- Have a backup server where your stuff is rsynced daily.
This allows for quick restores in case of disaster.
Vservers are great because they isolate you from the hardware.
Server becomes too small?
Buy another one, move your vservers to it and you're done.
Need to upgrade a service?
Copy the vserver, upgrade, test, swap it with the old one when you are set.
It's a great advantage to be able to move stuff easily from one box to another.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190014</id>
	<title>Some possible goals</title>
	<author>giladpn</author>
	<datestamp>1258814340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You got a lot of posts pointing out the error of your ways; basically what people are saying - it sounds gung ho, there is no clear reasoning in the post justifying your shift.
<br> <br>
Maybe they are a bit strong but note there is a lot of experience behind them.
<br> <br>
Having said that, I would like to take a kinder gentler tone. Once you go through your fundamental reasons for wanting change, I'd suggest you choose ONE big thing that you want to do. Changing everything at once is usually not so hot.
<br> <br>
So what could be a goal that would make your users happier and you a hero? Well, don't know, but I can tell what is typical in many such cases
<br>
- lowering capital costs (less spending on physical servers and their maintenance) while keeping everything running is one; cloud computing may help on that
<br>
- faster performance is one, but only in those places where users are actually complaining. Making a list of those places and fixing them one at a time would be an approach.
<br>
- new business needs is another one, but for that - leave everything that works alone and focus on solving very well the new business need. Your partners are your CEO, CFO, marketing etc...
<br> <br>
For example, seems from your post that the overall architecture of the system is actually quite decent. So you may want to just repeat that same architecture in an updated way in a cloud computing approach, save some money and prepare for the next computing trend. If you decide that is for you, move one server at a time, arrange fail-over in the cloud, and prove one-at-a-time that it works as fast as the old stuff.
<br> <br>
Bit of advice: don't just do virtualization without knowing why. If the business reason is economics, then jump over virtualization to the next trend, cloud computing. If it isn't economics, don't bother with virtualization at all.
<br> <br>
Consider your goals and choose ONE. 'Nuff said.</htmltext>
<tokentext>You got a lot of posts pointing out the error of your ways ; basically what people are saying - it sounds gung ho , there is no clear reasoning in the post justifying your shift .
Maybe they are a bit strong but note there is a lot of experience behind them .
Having said that , I would like to take a kinder gentler tone .
Once you go through your fundamental reasons for wanting change , I 'd suggest you choose ONE big thing that you want to do .
Changing everything at once is usually not so hot .
So what could be a goal that would make your users happier and you a hero ?
Well , do n't know , but I can tell what is typical in many such cases
- lowering capital costs ( less spending on physical servers and their maintenance ) while keeping everything running is one ; cloud computing may help on that
- faster performance is one , but only in those places where users are actually complaining .
Making a list of those places and fixing them one at a time would be an approach .
- new business needs is another one , but for that - leave everything that works alone and focus on solving very well the new business need .
Your partners are your CEO , CFO , marketing etc ...
For example , seems from your post that the overall architecture of the system is actually quite decent .
So you may want to just repeat that same architecture in an updated way in a cloud computing approach , save some money and prepare for the next computing trend .
If you decide that is for you , move one server at a time , arrange fail-over in the cloud , and prove one-at-a-time that it works as fast as the old stuff .
Bit of advice : do n't just do virtualization without knowing why .
If the business reason is economics , then jump over virtualization to the next trend , cloud computing .
If it is n't economics , do n't bother with virtualization at all .
Consider your goals and choose ONE .
'Nuff said .</tokentext>
<sentencetext>You got a lot of posts pointing out the error of your ways; basically what people are saying - it sounds gung ho, there is no clear reasoning in the post justifying your shift.
Maybe they are a bit strong but note there is a lot of experience behind them.
Having said that, I would like to take a kinder gentler tone.
Once you go through your fundamental reasons for wanting change, I'd suggest you choose ONE big thing that you want to do.
Changing everything at once is usually not so hot.
So what could be a goal that would make your users happier and you a hero?
Well, don't know, but I can tell what is typical in many such cases

- lowering capital costs (less spending on physical servers and their maintenance) while keeping everything running is one; cloud computing may help on that

- faster performance is one, but only in those places where users are actually complaining.
Making a list of those places and fixing them one at a time would be an approach.
- new business needs is another one, but for that - leave everything that works alone and focus on solving very well the new business need.
Your partners are your CEO, CFO, marketing etc...
 
For example, seems from your post that the overall architecture of the system is actually quite decent.
So you may want to just repeat that same architecture in an updated way in a cloud computing approach, save some money and prepare for the next computing trend.
If you decide that is for you, move one server at a time, arrange fail-over in the cloud, and prove one-at-a-time that it works as fast as the old stuff.
Bit of advice: don't just do virtualization without knowing why.
If the business reason is economics, then jump over virtualization to the next trend, cloud computing.
If it isn't economics, don't bother with virtualization at all.
Consider your goals and choose ONE.
'Nuff said.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30218760</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>Anonymous</author>
	<datestamp>1259057640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>No, you need separate servers for when the DHCP upgrade requires a conflicting library with the DNS servers which you don't want to upgrade at the same time.</p><p>THIS is where virtualization becomes useful.</p></div></blockquote><p>Your core services shouldn't be sharing code at runtime outside the kernel.  DNS and DHCP are both easy to build as monoliths and much more secure and reliable that way (which is why some distros ship that way hello Red Hat).</p><p>I agree with everything else you said.</p><p>Every virtualization project I have ever encountered in real life was an expensive non-solution to a problem easily solved without virtualization.  I have read about some that weren't, but never seen one in the Real World [tm].</p><p>DNS redundancy, for example, is unbelievably trivial.</p>
	</htmltext>
<tokentext>No , you need separate servers for when the DHCP upgrade requires a conflicting library with the DNS servers which you do n't want to upgrade at the same time .
THIS is where virtualization becomes useful .
Your core services should n't be sharing code at runtime outside the kernel .
DNS and DHCP are both easy to build as monoliths and much more secure and reliable that way ( which is why some distros ship that way hello Red Hat ) .
I agree with everything else you said .
Every virtualization project I have ever encountered in real life was an expensive non-solution to a problem easily solved without virtualization .
I have read about some that were n't , but never seen one in the Real World [ tm ] .
DNS redundancy , for example , is unbelievably trivial .</tokentext>
<sentencetext>No, you need separate servers for when the DHCP upgrade requires a conflicting library with the DNS servers which you don't want to upgrade at the same time.
THIS is where virtualization becomes useful.
Your core services shouldn't be sharing code at runtime outside the kernel.
DNS and DHCP are both easy to build as monoliths and much more secure and reliable that way (which is why some distros ship that way hello Red Hat).
I agree with everything else you said.
Every virtualization project I have ever encountered in real life was an expensive non-solution to a problem easily solved without virtualization.
I have read about some that weren't, but never seen one in the Real World [tm].
DNS redundancy, for example, is unbelievably trivial.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192632</id>
	<title>Keep it Simple</title>
	<author>Anonymous</author>
	<datestamp>1258897860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Buy 3 machines.  Put all the services on each and put one in each of your 2 appropriate locations.  Everything you list can run on a single Linux box.  Use the 3rd for your sandbox.</p></htmltext>
<tokentext>Buy 3 machines .
Put all the services on each and put one in each of your 2 appropriate locations .
Everything you list can run on a single Linux box .
Use the 3rd for your sandbox .</tokentext>
<sentencetext>Buy 3 machines.
Put all the services on each and put one in each of your 2 appropriate locations.
Everything you list can run on a single Linux box.
Use the 3rd for your sandbox.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193994</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>TheLink</author>
	<datestamp>1258910160000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>I have vmware machines on one server at home. There are still benefits even though it's not a cluster. So it's not that stupid.<br><br>It is easier to move the virtual servers to another machine or O/S. This is useful when upgrading or when hardware fails or when growing (move from one real server to two or more real servers). There's no need to reinstall stuff because the drivers are different etc.<br><br>You can snapshot virtual machines and then back them up while they are running. Backup and restore is not that hard that way. So even if you have a single point of failure, if you have recent image back ups, you could buy a machine with preinstalled O/S, install vmware, and get back up and running rather quickly.<br><br>And when power fails and the UPS runs low on battery, I have a script that suspends all virtual machines then powers the server down. That's more convenient too than setting up lots of UPS agents on multiple machines and hoping they all shutdown in time.<br><br>DB performance sucks in a vmware guest though, so where DB/IO performance is important, use "real" stuff. Things may be better with other virtualization tech/software.</htmltext>
<tokentext>I have vmware machines on one server at home .
There are still benefits even though it 's not a cluster .
So it 's not that stupid .
It is easier to move the virtual servers to another machine or O/S .
This is useful when upgrading or when hardware fails or when growing ( move from one real server to two or more real servers ) .
There 's no need to reinstall stuff because the drivers are different etc .
You can snapshot virtual machines and then back them up while they are running .
Backup and restore is not that hard that way .
So even if you have a single point of failure , if you have recent image back ups , you could buy a machine with preinstalled O/S , install vmware , and get back up and running rather quickly .
And when power fails and the UPS runs low on battery , I have a script that suspends all virtual machines then powers the server down .
That 's more convenient too than setting up lots of UPS agents on multiple machines and hoping they all shutdown in time .
DB performance sucks in a vmware guest though , so where DB/IO performance is important , use " real " stuff .
Things may be better with other virtualization tech/software .</tokentext>
<sentencetext>I have vmware machines on one server at home.
There are still benefits even though it's not a cluster.
So it's not that stupid.
It is easier to move the virtual servers to another machine or O/S.
This is useful when upgrading or when hardware fails or when growing (move from one real server to two or more real servers).
There's no need to reinstall stuff because the drivers are different etc.
You can snapshot virtual machines and then back them up while they are running.
Backup and restore is not that hard that way.
So even if you have a single point of failure, if you have recent image back ups, you could buy a machine with preinstalled O/S, install vmware, and get back up and running rather quickly.
And when power fails and the UPS runs low on battery, I have a script that suspends all virtual machines then powers the server down.
That's more convenient too than setting up lots of UPS agents on multiple machines and hoping they all shutdown in time.
DB performance sucks in a vmware guest though, so where DB/IO performance is important, use "real" stuff.
Things may be better with other virtualization tech/software.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193414</id>
	<title>Re:I'd say</title>
	<author>stilwebm</author>
	<datestamp>1258905840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Having managed old infrastructure boxes in the past, I know it's harder than it sounds.  The reliability was rock solid, but as demands of the network grew - not only in numbers of nodes but the way the nodes were used - and security concerns mounted, it was no longer feasible to maintain the boxes as-is.</p><p>Compiler, library, and package management changes over that time period make it difficult on *nix boxes and Windows support expirations likewise make it difficult in Windows land.  You reach a point where the time invested to patch a system exceeds the cost of replacing the system.  Additionally, the downtime from the patch process (good luck finding a decent staging server for something seven years old) offsets the purported reliability of the setup.  Lose a major component on one of those machines and you'll get a crash course in starting over and modernization.</p></htmltext>
<tokentext>Having managed old infrastructure boxes in the past , I know it 's harder than it sounds .
The reliability was rock solid , but as demands of the network grew - not only in numbers of nodes but the way the nodes were used - and security concerns mounted , it was no longer feasible to maintain the boxes as-is .
Compiler , library , and package management changes over that time period make it difficult on * nix boxes and Windows support expirations likewise make it difficult in Windows land .
You reach a point where the time invested to patch a system exceeds the cost of replacing the system .
Additionally , the downtime from the patch process ( good luck finding a decent staging server for something seven years old ) offsets the purported reliability of the setup .
Lose a major component on one of those machines and you 'll get a crash course in starting over and modernization .</tokentext>
<sentencetext>Having managed old infrastructure boxes in the past, I know it's harder than it sounds.
The reliability was rock solid, but as demands of the network grew - not only in numbers of nodes but the way the nodes were used - and security concerns mounted, it was no longer feasible to maintain the boxes as-is.
Compiler, library, and package management changes over that time period make it difficult on *nix boxes and Windows support expirations likewise make it difficult in Windows land.
You reach a point where the time invested to patch a system exceeds the cost of replacing the system.
Additionally, the downtime from the patch process (good luck finding a decent staging server for something seven years old) offsets the purported reliability of the setup.
Lose a major component on one of those machines and you'll get a crash course in starting over and modernization.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189948</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>DaMattster</author>
	<datestamp>1258813860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Having a separate box for each service is not necessarily a good idea.  This is energy inefficient and you have a lot of wasted computing resources.  That said, virtualization that has been done with little thought or planning is a disaster waiting to happen.  I for one, would use Citrix XenServer.  Smaller services such as DNS, DHCP, and FTP can be collapsed into a virtualization server with one core dedicated to each service.  If you are adventurous, you could use that same box for routing using OpenBSD.  This makes much better use of a multicore server.  More critical services such as WWW and E-Mail are best left on their own servers.  A balance of techniques works better than an either-or approach.</htmltext>
<tokentext>Having a separate box for each service is not necessarily a good idea .
This is energy inefficient and you have a lot of wasted computing resources .
That said , virtualization that has been done with little thought or planning is a disaster waiting to happen .
I for one , would use Citrix XenServer .
Smaller services such as DNS , DHCP , and FTP can be collapsed into a virtualization server with one core dedicated to each service .
If you are adventurous , you could use that same box for routing using OpenBSD .
This makes much better use of a multicore server .
More critical services such as WWW and E-Mail are best left on their own servers .
A balance of techniques works better than an either-or approach .</tokentext>
<sentencetext>Having a separate box for each service is not necessarily a good idea.
This is energy inefficient and you have a lot of wasted computing resources.
That said, virtualization that has been done with little thought or planning is a disaster waiting to happen.
I for one, would use Citrix XenServer.
Smaller services such as DNS, DHCP, and FTP can be collapsed into a virtualization server with one core dedicated to each service.
If you are adventurous, you could use that same box for routing using OpenBSD.
This makes much better use of a multicore server.
More critical services such as WWW and E-Mail are best left on their own servers.
A balance of techniques works better than an either-or approach.
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188904</id>
	<title>Think about the complexity of duplication</title>
	<author>El Cubano</author>
	<datestamp>1258805220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p> <i>there's hardly any fallback if any of the services dies or an office is disconnected. Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same).</i>

</p><p>Is that really necessary?  I know that we all would like to have bullet-proof services.  However, is the network service to the various offices so unreliable that it justifies the added complexity of instantiating services at every location?  Or even introducing redundancy at each location?  If you were talking about thousands or tens of thousands of users at each location, it might make sense just because you would have to distribute the load in some way.

</p><p>What you need to do is evaluate your connectivity and its reliability.  For example:

</p><ul>
<li>How reliable is the current connectivity?</li><li>If it is not reliable enough, how much would it cost over the long run to upgrade to a sufficiently reliable service?</li><li>If the connection goes down, how does it affect that office?  (I.e., if the Internet is completely inaccessible, will having all those duplicated services at the remote office enable them to continue working as though nothing were wrong?  If the service being out causes such a disruption that having duplicate services at the remote office doesn't help, then why bother?)</li>
<li>How much will it cost over the long run to add all that extra hardware, along with the burden of maintaining it and all the services running on it?</li></ul><p>Once you answer at least those questions, then you have the information you need in order to make a sensible decision.</p></htmltext>
<tokentext>there 's hardly any fallback if any of the services dies or an office is disconnected .
Now , as the hardware must be replaced , I 'd like to buff things up a bit : distributed instances of services ( at least one instance per office ) and a fallback/load-balancing scheme ( either to an instance in another office or a duplicated one within the same ) .
Is that really necessary ?
I know that we all would like to have bullet-proof services .
However , is the network service to the various offices so unreliable that it justifies the added complexity of instantiating services at every location ?
Or even introducing redundancy at each location ?
If you were talking about thousands or tens of thousands of users at each location , it might make sense just because you would have to distribute the load in some way .
What you need to do is evaluate your connectivity and its reliability .
For example :
How reliable is the current connectivity ?
If it is not reliable enough , how much would it cost over the long run to upgrade to a sufficiently reliable service ?
If the connection goes down , how does it affect that office ?
( I.e. , if the Internet is completely inaccessible , will having all those duplicated services at the remote office enable them to continue working as though nothing were wrong ?
If the service being out causes such a disruption that having duplicate services at the remote office does n't help , then why bother ? )
How much will it cost over the long run to add all that extra hardware , along with the burden of maintaining it and all the services running on it ?
Once you answer at least those questions , then you have the information you need in order to make a sensible decision .</tokentext>
<sentencetext> there's hardly any fallback if any of the services dies or an office is disconnected.
Now, as the hardware must be replaced, I'd like to buff things up a bit: distributed instances of services (at least one instance per office) and a fallback/load-balancing scheme (either to an instance in another office or a duplicated one within the same).
Is that really necessary?
I know that we all would like to have bullet-proof services.
However, is the network service to the various offices so unreliable that it justifies the added complexity of instantiating services at every location?
Or even introducing redundancy at each location?
If you were talking about thousands or tens of thousands of users at each location, it might make sense just because you would have to distribute the load in some way.
What you need to do is evaluate your connectivity and its reliability.
For example:


How reliable is the current connectivity?
If it is not reliable enough, how much would it cost over the long run to upgrade to a sufficiently reliable service?
If the connection goes down, how does it affect that office?
(I.e., if the Internet is completely inaccessible, will having all those duplicated services at the remote office enable them to continue working as though nothing were wrong?
If the service being out causes such a disruption that having duplicate services at the remote office doesn't help, then why bother?)
How much will it cost over the long run to add all that extra hardware, along with the burden of maintaining it and all the services running on it?
Once you answer at least those questions, then you have the information you need in order to make a sensible decision.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30196422</id>
	<title>Re:Why?</title>
	<author>Nefarious Wheel</author>
	<datestamp>1258885500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Why virtual servers? If you are going to run multiple services on one machine (and that's fine if it can handle the load) just do it.</p></div><p>Fast rollback for system changes for one thing (reboot the earlier version of the system disk), easier hardware upgrades (boot the virtual server image from a faster machine), better load balancing (see previous example).  Even if your hardware:system instance ratio is best served 1:1 (a rare occurrence) you could make an excellent case for going virtual.</p>
	</htmltext>
<tokenext>Why virtual servers ?
If you are going to run multiple services on one machine ( and that 's fine if it can handle the load ) just do it.Fast rollback for system changes for one thing ( reboot the earlier version of the system disk ) , easier hardware upgrades ( boot the virtual server image from a faster machine ) , better load balancing ( see previous example ) .
Even if your hardware : system instance ratio is best served 1 : 1 ( a rare occurrence ) you could make an excellent case for going virtual .</tokentext>
<sentencetext>Why virtual servers?
If you are going to run multiple services on one machine (and that's fine if it can handle the load) just do it.Fast rollback for system changes for one thing (reboot the earlier version of the system disk), easier hardware upgrades (boot the virtual server image from a faster machine), better load balancing (see previous example).
Even if your hardware:system instance ratio is best served 1:1 (a rare occurrence) you could make an excellent case for going virtual.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>bertok</author>
	<datestamp>1258808220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service.</p><p>...</p><p>Virtualisation is, IMHO, *totally* inappropriate for 99% of cases where it is used, ditto *cloud* computing.</p></div><p>I totally disagree.</p><p>Look at some of the services he listed: DNS and DHCP.</p><p>You literally can't buy a server these days with less than 2 cores, and getting less than 4 is a challenge. That kind of computing power is overkill for such basic services, so it makes <i>perfect</i> sense to partition a single high-powered box to better utilize it. There is no need to give up redundancy either: you can buy two boxes and have every key service duplicated between them. Buying two boxes <i>per service</i>, on the other hand, is insane, especially for services like DHCP, which in an environment like that might have to respond to a packet <i>once an hour</i>.</p><p>Even the other listed services probably cause negligible load. Most web servers sit there at 0.1% load most of the time, ditto with ftp, which tends to see only sporadic use.</p><p>I think you'll find that the exact opposite of your quote is true: for 99% of corporate environments where virtualization is used, it is appropriate. In fact, it's under-used. Most places could save a lot of money by virtualizing more.</p><p>I'm guessing you work for an organization where money grows on trees, and you can 'design' whatever the hell you want, and you get the budget for it, no matter how wasteful, right?</p>
	</htmltext>
<tokenext>I , personally , am TOTALLY in agreement with the ethos of whoever designed it , a single box for each service....Virtualisation is , IMHO , * totally * inappropriate for 99 \ % of cases where it is used , ditto * cloud * computing.I totally disagree.Look at some of the services he listed : DNS and DHCP.You literally ca n't buy a server these days with less than 2 cores , and getting less than 4 is a challenge .
That kind of computing power is overkill for such basic services , so it makes perfect sense to partition a single high-powered box to better utilize it .
There is no need to give up redundancy either , you can buy two boxes , and have every key services duplicated between them .
Buying two boxes per service on the other hand is insane , especially services like DHCP , which in an environment like that might have to respond to a packet once an hour.Even the other listed services probably cause negligible load .
Most web servers sit there at 0.1 % load most of the time , ditto with ftp , which tends to see only sporadic use.I think you 'll find that the exact opposite of your quote is true : for 99 % of corporate environments where virtualization is used , it is appropriate .
In fact , it 's under-used .
Most places could save a lot of money by virtualizing more.I 'm guessing you work for an organization where money grows on trees , and you can 'design ' whatever the hell you want , and you get the budget for it , no matter how wasteful , right ?</tokentext>
<sentencetext>I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service....Virtualisation is, IMHO, *totally* inappropriate for 99\% of cases where it is used, ditto *cloud* computing.I totally disagree.Look at some of the services he listed: DNS and DHCP.You literally can't buy a server these days with less than 2 cores, and getting less than 4 is a challenge.
That kind of computing power is overkill for such basic services, so it makes perfect sense to partition a single high-powered box to better utilize it.
There is no need to give up redundancy either, you can buy two boxes, and have every key services duplicated between them.
Buying two boxes per service on the other hand is insane, especially services like DHCP, which in an environment like that might have to respond to a packet once an hour.Even the other listed services probably cause negligible load.
Most web servers sit there at 0.1% load most of the time, ditto with ftp, which tends to see only sporadic use.I think you'll find that the exact opposite of your quote is true: for 99% of corporate environments where virtualization is used, it is appropriate.
In fact, it's under-used.
Most places could save a lot of money by virtualizing more.I'm guessing you work for an organization where money grows on trees, and you can 'design' whatever the hell you want, and you get the budget for it, no matter how wasteful, right?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076</parent>
</comment>
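The two-box layout bertok describes, every key service duplicated across a pair of hosts rather than two boxes per service, can be sketched in a few lines. This is an illustrative sketch only; the host names are invented and the service list is taken from the original question:

```python
# Sketch of "buy two boxes, duplicate every key service between them":
# each lightweight service gets an instance (e.g. a VM) on both hosts,
# so losing either host leaves every service with a survivor.

SERVICES = ["www", "ftp", "email", "dns", "firewall", "dhcp"]
HOSTS = ["host-a", "host-b"]  # hypothetical names

def duplicated_placement(services, hosts):
    """Place one instance of every service on every host."""
    return {svc: list(hosts) for svc in services}

def survives_host_loss(placement, failed_host):
    """True if every service still has an instance after failed_host dies."""
    return all(
        any(h != failed_host for h in hosts_for_svc)
        for hosts_for_svc in placement.values()
    )

placement = duplicated_placement(SERVICES, HOSTS)
for host in HOSTS:
    assert survives_host_loss(placement, host)
print("every service survives the loss of either host")
```

The point of the check is that redundancy comes from duplicating *instances* across two consolidated hosts, not from dedicating redundant hardware to each service.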
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190242</id>
	<title>Some advice</title>
	<author>plopez</author>
	<datestamp>1258816020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>1) Don't screw up. This is a great opportunity to make huge improvements and gain the trust and respect of your managers and clients. Don't blow it.</p><p>2) Make sure you have good backups. Oh, you have them? When was the last time you tested them?</p><p>3) Go gradually. Don't change too many things at once. This makes recovering easier and isolating the cause of a failure easier.</p><p>4) Put together a careful plan. Identify what you need to change first. Set priorities.</p><p>5) Always have a fallback position. Take the old systems offline, cut over to the new system. If the new system fails, roll back. And leave the old systems available for a while until you feel assured the new ones are stable.</p><p>6) Don't drink the Kool-Aid. Any product purporting to help migrations should be avoided unless people you trust have used it and/or you are very familiar with it.</p><p>7) Always remember point number 1. Being conservative and careful is your best tool.</p></htmltext>
<tokenext>1 ) do n't screw up .
This is a great opportunity to make huge improvements and gain the trust and respect of your managers and clients .
Do n't blow it.2 ) Make sure you have good back ups .
Oh you have them ?
When was the last time you tested them ? 3 ) Go gradually .
Do n't change too many things at once .
This makes recovering easier and isolating the cause easier.4 ) Put together a careful plan .
Identify what you need to change first .
Set priorities.5 ) Always have fall back position .
Take the old systems offline , cut over to the new system .
If the old system fails , rollback .
And leave the old systems available for a while until you feel assured they are stable.6 ) Do n't drink the koolaid .
Any product purporting to help migrations should be avoided unless people you trust have used it and/or you are very familiar with it.7 ) Always remember point number 1 .
Being conservative and careful are your best tools .</tokentext>
<sentencetext>1) don't screw up.
This is a great opportunity to make huge improvements and gain the trust and respect of your managers and clients.
Don't blow it.2) Make sure you have good back ups.
Oh you have them?
When was the last time you tested them?3) Go gradually.
Don't change too many things at once.
This makes recovering easier and isolating the cause easier.4) Put together a careful plan.
Identify what you need to change first.
Set priorities.5) Always have  fall back position.
Take the old systems offline, cut over to the new system.
If the old system fails, rollback.
And leave the old systems available for a while until you feel assured they are stable.6) Don't drink the koolaid.
Any product purporting to help migrations should be avoided unless people you trust have used it and/or you are very familiar with it.7) Always remember point number 1.
Being conservative and careful are your best tools.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195072</id>
	<title>Get your offsite squared away</title>
	<author>Anonymous</author>
	<datestamp>1258917960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Since it wasn't mentioned by the OP as being done, I'll assume it wasn't. Before you even touch anything in the office:</p><p>1) Get secondary/tertiary DNS set up offsite, preferably with 2+ different providers. There are many cheap (and even free) services to accomplish this, such as DynDNS. It's cheap, it's easy, and it'll save your bacon if you **** up the primary while you're working. It may seem like overkill, but at my ~80-person place I have a primary and secondary onsite, and utilize two inexpensive offsite services for backup secondaries.</p><p>2) Get secondary/tertiary MX "spool" hosting; again, DynDNS for instance offers this cheaply, as do many others. For the same reasons as above, if you screw up the primary mailserver you have an automatic backup that will spool any incoming mail for you; at our place I have a primary and secondary onsite, and 2 spool (aka "forwarder") backups at MX preference 30 at 2 different providers.</p><p>Now if you A) screw something up, or B) lose that internet connection, you at least have something covering your buns for these critical, essential services. We also have an offsite IT email (a basic GMail account) with all participating employees' personal email addresses in it, so that if SHTF and an office goes offline, an IT person can log into the GMail from any computer and send an alert to the company that something is wrong.</p><p>Regardless of what you do with the onsite scenario, create offsite backup scenarios and CYA before anything else. And test them. :)</p></htmltext>
<tokenext>Since it was n't mentioned by the OP as being done , I 'll assume it was n't .
Before you even touch anything in the office : 1 ) get secondary/ternary DNS set up offsite , and preferrably with 2 + different providers .
There are many cheap ( and even free ) services to accomplish this such as DynDNS .
It 's cheap , it 's easy , and it 'll save your bacon if you * * * * up the primary while you 're working .
It may seem like overkill but at my ~ 80 person place I have a primary and secondary onsite , and utilize two inexpensive offsite services for backup secondarys.2 ) get secondary/ternary MX " spool " hosting ; again DynDNS for instance offers this cheaply as do many others .
For the same reasons as above , if you screw up the primary mailserver you have automatic backup that will spool any incoming mail for you ; at our place I have a primary and secondary onsite , and 2 spool ( aka " forwarders " ) backups at MX level 30 at 2 different providers.Now if you A ) screw something up , or B ) lose that internet connection you at least have something covering your buns for these critical , essential services .
We also have an offsite IT email ( a basic GMail account ) with all participating employee personal email addresses in it , so that if SHTF and an office goes offline an IT person can log into the GMail from any computer and send an alert to the company that something is wrong.Regardless of what you do with the onsite scenario , create offsite backup scenarios and CYA before anything else .
And test them .
: )</tokentext>
<sentencetext>Since it wasn't mentioned by the OP as being done, I'll assume it wasn't.
Before you even touch anything in the office:1) get secondary/tertiary DNS set up offsite, and preferably with 2+ different providers.
There are many cheap (and even free) services to accomplish this such as DynDNS.
It's cheap, it's easy, and it'll save your bacon if you **** up the primary while you're working.
It may seem like overkill but at my ~80 person place I have a primary and secondary onsite, and utilize two inexpensive offsite services for backup secondaries.2) get secondary/tertiary MX "spool" hosting; again DynDNS for instance offers this cheaply as do many others.
For the same reasons as above, if you screw up the primary mailserver you have automatic backup that will spool any incoming mail for you; at our place I have a primary and secondary onsite, and 2 spool (aka "forwarders") backups at MX level 30 at 2 different providers.Now if you A) screw something up, or B) lose that internet connection you at least have something covering your buns for these critical, essential services.
We also have an offsite IT email (a basic GMail account) with all participating employee personal email addresses in it, so that if SHTF and an office goes offline an IT person can log into the GMail from any computer and send an alert to the company that something is wrong.Regardless of what you do with the onsite scenario, create offsite backup scenarios and CYA before anything else.
And test them.
:)</sentencetext>
</comment>
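The MX arrangement in the comment above, onsite primaries plus offsite spools at a higher preference value such as 30, relies on how mail transfer agents order delivery attempts. A small sketch (hostnames invented, not the poster's actual setup):

```python
# MX failover ordering: an MTA tries records in ascending preference
# order, so the offsite spool hosts at preference 30 are only contacted
# when the onsite servers (10 and 20) are unreachable.

mx_records = [
    (30, "spool1.provider-a.example"),  # offsite backup spool
    (10, "mail1.office.example"),       # onsite primary
    (30, "spool2.provider-b.example"),  # second offsite spool, other provider
    (20, "mail2.office.example"),       # onsite secondary
]

def delivery_order(records):
    """Hosts in the order an MTA would attempt delivery (lowest pref first)."""
    return [host for pref, host in sorted(records)]

order = delivery_order(mx_records)
print(order)
```

Because the failover is driven by DNS itself, no extra client configuration is needed; any standards-compliant sender falls back to the spools automatically.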
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30207220</id>
	<title>Re:Real question</title>
	<author>Flere Imsaho</author>
	<datestamp>1258975920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Being inexperienced, asking the right questions, learning and planning carefully !=  having no clue.</p><p>If we all followed your line of reasoning, we'd all stay in our comfort zone and never grow our skills.</p></htmltext>
<tokenext>Being inexperienced , asking the right questions , learning and planning carefully ! = having no clue.If we all followed your line of reasoning , we 'd all stay in our comfort zone and never grow our skills .</tokentext>
<sentencetext>Being inexperienced, asking the right questions, learning and planning carefully !=  having no clue.If we all followed your line of reasoning, we'd all stay in our comfort zone and never grow our skills.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189150</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192664</id>
	<title>Don't overdo it!!!</title>
	<author>Anonymous</author>
	<datestamp>1258898220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I have done this before!</p><p>Simple: 2 machines at the main site which will host all the services and be a backup to one another.</p><p>Then 1 machine per external site, hosting all the services too.</p><p>Virtualization would only be recommended if security is crucial, and only for the services accessible from outside. But it's a complexity add-on!</p><p>Recommendation for system: Mac Mini Servers! With Snow Leopard Server!</p><p>Ritchie</p></htmltext>
<tokenext>I have done this before ! Simple , 2 machines at the main site which will host all the services and be a backup up to one another.Then 1 machine per each external site hosting all the services too.Virtualization would only be recommended if security is crucial , and only for the services accessible from outside .
But it 's a complexity add-on ! Recommendation for system : Mac Mini Servers !
with Snow Leopard Server ! Ritchie</tokentext>
<sentencetext>I have done this before !Simple, 2 machines at the main site which will host all the services and be a backup up to one another.Then 1 machine per each external site hosting all the services too.Virtualization would only be recommended if security is crucial, and only for the services accessible from outside.
But it's a complexity add-on !Recommendation for system :  Mac Mini Servers !
with Snow Leopard Server !Ritchie</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190528</id>
	<title>Re:Microsoft Essential Business Server</title>
	<author>h4rr4r</author>
	<datestamp>1258819020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Someone please mod down the parent; he is an MS shill. Look at his posting history.</p><p>Astroturfers should not be welcome here.</p></htmltext>
<tokenext>Someone please mod down the parent , he is an MS shill .
Look at his posting history.Astroturfers should not be welcome here .</tokentext>
<sentencetext>Someone please mod down the parent, he is an MS shill.
Look at his posting history.Astroturfers should not be welcome here.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076</id>
	<title>Trying to make your mark, eh?</title>
	<author>Anonymous</author>
	<datestamp>1258806300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>The system you have works solidly, and has worked solidly for seven years.</p><p>I, personally, am TOTALLY in agreement with the ethos of whoever designed it: a single box for each service.</p><p>Frankly, with the cost of modern hardware, you could triple the capacity of what you have now just by gradually swapping out for newer hardware over the next few months, and keeping the shite old boxen for fallback.</p><p>Virtualisation is, IMHO, *totally* inappropriate for 99% of cases where it is used, ditto *cloud* computing.</p><p>It sounds to me like you are more interested in making your own mark than actually taking an objective view. I may of course be wrong, but usually that is the case in stories like this.</p><p>In my experience, everyone who tries to make their own mark actually degrades a system, and simply discounts the ways that they have degraded it as being "obsolete" or "no longer applicable".</p><p>Frankly, based on your post alone, I'd sack you on the spot, because you sound like the biggest threat to the system to come along in seven years.</p><p>These are NOT your computers; if you want a system just so, build it yourself with your own money in your own home.</p><p>This advice/opinion is of course worth exactly what it cost.</p><p>Apologies in advance if I have misconstrued your approach (but I doubt that I have).</p><p>YMMV.</p></htmltext>
<tokenext>The system you have works solidly , and has worked solidly for seven years.I , personally , am TOTALLY in agreement with the ethos of whoever designed it , a single box for each service.Frankly , with the cost of modern hardware , you could triple the capacity of what you have now just by gradually swapping out for newer hardware over the next few months , and keeping the shite old boxen for fallback.Virtualisation is , IMHO , * totally * inappropriate for 99 \ % of cases where it is used , ditto * cloud * computing.It sounds to me like you are more interested in making your own mark , than actually taking an objective view .
I may of course be wrong , but usually that is the case in stories like this.In my experience , everyone who tries to make their own mark actually degrades a system , and simply discounts the ways that they have degraded it as being " obsolete " or " no longer applicable " Frankly , based on your post alone , I 'd sack you on the spot , because you sound like the biggest threat to the system to come along in seven years.These are NOT your computers , if you want a system just so , build it yourself with your own money in your own home.This advice / opinion is of course worth exactly what it cost.Apologies in advance if I have misconstrued your approach .
( but I doubt that I have ) YMMV .</tokentext>
<sentencetext>The system you have works solidly, and has worked solidly for seven years.I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service.Frankly, with the cost of modern hardware, you could triple the capacity of what you have now just by gradually swapping out for newer hardware over the next few months, and keeping the shite old boxen for fallback.Virtualisation is, IMHO, *totally* inappropriate for 99% of cases where it is used, ditto *cloud* computing.It sounds to me like you are more interested in making your own mark, than actually taking an objective view.
I may of course be wrong, but usually that is the case in stories like this.In my experience, everyone who tries to make their own mark actually degrades a system, and simply discounts the ways that they have degraded it as being "obsolete" or "no longer applicable"Frankly, based on your post alone, I'd sack you on the spot, because you sound like the biggest threat to the system to come along in seven years.These are NOT your computers, if you want a system just so, build it yourself with your own money in your own home.This advice / opinion is of course worth exactly what it cost.Apologies in advance if I have misconstrued your approach.
(but I doubt that I have)YMMV.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189352</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>mabhatter654</author>
	<datestamp>1258809120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not really; you can split your VMs between 2-3 servers and do the migrations manually in the beginning. Once you make the virtual images, the hard work is done: even if you just run 2 images per server, you've saved money or increased reliability. Now that you have VMs, you can reinstall from backup tapes to another configured server, so you have a start at disaster recovery. Once that part is done, it's a function of how much money you are allowed to throw at the solution (blades, clusters, SANs, etc.)</p></htmltext>
<tokenext>not really , you can split your VMs between 2-3 servers and do the migrations manually in the beginning .
Once you make the virtual images the hard work is done , even if you just run 2 images per server , you 've saved money or increased reliability .
Now that you have VMs you can reinstall from backup tapes to another configured server so you have a start at disaster recovery .
Once that part is done it 's a function of how much money you are allowed to throw at the solution ( blades , clusters , sans , etc )</tokentext>
<sentencetext>not really, you can split your VMs between 2-3 servers and do the migrations manually in the beginning.
Once you make the virtual images the hard work is done, even if you just run 2 images per server, you've saved money or increased reliability.
Now that you have VMs you can reinstall from backup tapes to another configured server so you have a start at disaster recovery.
Once that part is done it's a function of how much money you are allowed to throw at the solution (blades, clusters, sans, etc)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
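The "split your VMs between 2-3 servers" step in the comment above amounts to a simple placement problem. One way to sketch it is greedy least-loaded assignment; the VM names come from the original question, but the load figures are made-up relative numbers, not measurements:

```python
# Greedy placement: assign each VM, biggest first, to whichever host
# currently carries the least total load. Loads are illustrative only.

vms = {"www": 3, "email": 4, "dns": 1, "dhcp": 1, "ftp": 2, "firewall": 2}
hosts = {"host1": 0, "host2": 0}  # hypothetical host names

placement = {}
for name, load in sorted(vms.items(), key=lambda kv: -kv[1]):  # big VMs first
    target = min(hosts, key=hosts.get)  # currently least-loaded host
    placement[name] = target
    hosts[target] += load

print(placement)
print(hosts)
```

Greedy placement is crude compared to what a hypervisor's own balancer does, but for half a dozen lightweight services across two or three hosts it is usually close enough to start from, with manual migration later if one host runs hot.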
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192678</id>
	<title>On your way out</title>
	<author>obarthelemy</author>
	<datestamp>1258898400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From your question, I'd say you're on the verge of a huge screw-up.</p><p>You must be young. Don't set out to make your mark. On the contrary, set out to make yourself entirely forgettable, which is what people want from their IT infrastructure.</p><p>First, look to replacing what's currently there, and nothing more. There don't seem to be any requests for added features.</p><p>If you can do that within budget, look at what is lacking. It may be ease of use, reliability, redundancy, backups, disaster recovery, speed, room to grow, features...</p><p>If you want to be really smart, do just what's asked of you, under budget, under deadline, with no hassle. But plan ahead for the next few requests, and document that. When those requests come up, you'll be able to turn back and say: I knew it, I planned for it already. THAT earns you points. Not trying to force any random feature that catches your fancy down management's and users' throats.</p></htmltext>
<tokenext>From your question , I 'd say you 're on the verge of a huge screw-up.You must be young .
Do n't set out to make your mark .
On the contrary , set out to make yourself entirely forgettable , which is what people want from their IT infrastructure.First , look to replacing what 's currently there , and nothing more .
There do n't seem to be any requests for added features.If you can do that within budget , look at what is lacking .
It may be ease of use , reliability , redundancy , backups , disaster recovery , speed , room to grow , features...If you want to be really smart , do just what 's asked of you , under budget , under deadline , with no hassle .
But plan ahead for the next few requests , and document that .
When those requests come up , you 'll be able to turn back and said : I knew it , I planned for it already .
THAT earns you points .
Not trying to force any random feature that catches your fancy down management and users ' throats .</tokentext>
<sentencetext>From your question, I'd say you're on the verge of a huge screw-up.You must be young.
Don't set out to make your mark.
On the contrary, set out to make yourself entirely forgettable, which is what people want from their IT infrastructure.First, look to replacing what's currently there, and nothing more.
There don't seem to be any requests for added features.If you can do that within budget, look at what is lacking.
It may be ease of use, reliability, redundancy, backups, disaster recovery, speed, room to grow, features...If you want to be really smart, do just what's asked of you, under budget, under deadline, with no hassle.
But plan ahead for the next few requests, and document that.
When those requests come up, you'll be able to turn back and say: I knew it, I planned for it already.
THAT earns you points.
Not trying to force any random feature that catches your fancy down management and users' throats.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198870</id>
	<title>blades</title>
	<author>WhiteWiz</author>
	<datestamp>1258906140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I use blade servers every day and I love them.
You are correct in saying that blade servers are not the right choice for every installation.  We calculated that after 7 servers it is cheaper to buy a blade chassis than separate 1U servers.
-Blade servers don't work for any application that requires a special card (i.e., a T1 card for a VOIP server)
-The CD-ROM etc. collects dust for the 5 years after the OS is installed
-iLO lets you attach the CD/DVD from the workstation you are iLO-ing from to the server to install the OS
-You can also install from a USB CD drive
-I can go 2 months between physical visits to our DataCenter.  This is important if you have a HotSite or CoLocation for Disaster Recovery.</htmltext>
<tokenext>I use blade servers every day and I love them .
You are correct in saying that blade servers are not the right choice for every installation .
We calculated that after 7 servers it is cheaper to buy a blade chassis than separate 1u servers .
-blade servers do n't work for any application that requires a special card ( ie a T1 card for a VOIP server ) -the CD rom etc collects dust for the 5 years after the OS is installed -iLo lets you attach the CD/DVD from the workstation you are iLo-ing from to the server to install the OS -you can also install from a USB CD -I can go 2 months between physical visits to our DataCenter .
This is important if you have a HotSite or CoLocation for Disaster Recovery .</tokentext>
<sentencetext>I use blade servers every day and I love them.
You are correct in saying that blade servers are not the right choice for every installation.
We calculated that after 7 servers it is cheaper to buy a blade chassis than separate 1u servers.
-blade servers don't work for any application that requires a special card (ie a T1 card for a VOIP server)
-the CD rom etc collects dust for the 5 years after the OS is installed
-iLo lets you attach the CD/DVD from the workstation you are iLo-ing from to the server to install the OS
-you can also install from a USB CD
-I can go 2 months between physical visits to our DataCenter.
This is important if you have a HotSite or CoLocation for Disaster Recovery.</sentencetext>
</comment>
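WhiteWiz's "cheaper after 7 servers" figure is a straightforward break-even calculation between a one-off chassis cost and cheaper per-blade pricing. The prices below are invented placeholders chosen to reproduce a break-even of 7, not quotes from any vendor:

```python
# Break-even sketch: blade chassis + blades vs. standalone 1U servers.
# All prices are made-up placeholders; substitute real quotes.

CHASSIS = 9000   # empty blade enclosure (one-off cost)
BLADE = 2500     # per blade server
RACK_1U = 4000   # per standalone 1U server

def blade_total(n):
    """Total cost of n servers bought as blades (chassis amortized)."""
    return CHASSIS + n * BLADE

def rack_total(n):
    """Total cost of n standalone 1U servers."""
    return n * RACK_1U

# First server count where the blade route is strictly cheaper.
break_even = next(n for n in range(1, 100) if blade_total(n) < rack_total(n))
print(break_even)
```

The same comparison generalizes: the break-even point is the chassis cost divided by the per-server saving, so it shifts with whatever discounts and power/rack-space figures you fold in.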
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189160</id>
	<title>Re:Why?</title>
	<author>nurb432</author>
	<datestamp>1258807200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Virtual was my first thought too.</p><p>Just P2V (physical-to-virtual) his entire data center first, then work on 'upgrades' from there.</p></htmltext>
<tokenext>Virtual was my first thought too.Just p2v his entire data center first , then work on 'upgrades ' from there .</tokentext>
<sentencetext>Virtual was my first thought too.Just p2v his entire data center first, then work on 'upgrades' from there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189946</id>
	<title>Re:I'd say</title>
	<author>onepoint</author>
	<datestamp>1258813860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If it works, keep it running. You are correct in everything you point out. If anything, start first with a fully replicated system setup, then a proper backup. Next, test the new systems; backups never seem to work on the first try, so get the bugs worked out.</p><p>After this I have no real idea on what you need to do.</p></htmltext>
<tokenext>if it works keep it running .
You are correct in everything you point out .
if anything , start first with a full replicated system setup , then a proper back up .
next test the new systems , back up never seem to work on the first try so get the bug 's worked out.after this I have no real idea on what you need to do .</tokentext>
<sentencetext>if it works keep it running.
You are correct in everything you point out.
if anything, start first with a full replicated system setup, then a proper back up.
next test the new systems, back up never seem to work on the first try so get the bug's worked out.after this I have no real idea on what you need to do.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195356</id>
	<title>Infrastructure Overhaul</title>
	<author>Anonymous</author>
	<datestamp>1258920300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Your employer has put you in charge of an information systems infrastructure overhaul or upgrade and you are posting to /. asking for advice? Tell your employer to hire someone capable of doing their own research. What are you, an MCSE?</p></htmltext>
<tokenext>Your employer has put you in charge of an information systems infrastructure overhaul or upgrade and you are posting to /. asking for advice ?
Tell your employer to hire someone capable of doing their own research .
What are you , an MCSE ?</tokenext>
<sentencetext>Your employer has put you in charge of an information systems infrastructure overhaul or upgrade and you are posting to /. asking for advice?
Tell your employer to hire someone capable of doing their own research.
What are you, an MCSE?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189376</id>
	<title>Maybe this is really a uni project</title>
	<author>natd</author>
	<datestamp>1258809300000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>What I see going on here, as others have touched on, is someone who doesn't realise that he's dealing with a small environment, even by my (Australian) standards, where I'm frequently in awe of the kinds of scale that the US and Europe consider commonplace.
<p>
If the current system has been acceptable for 7 years, I'm guessing the users' needs aren't something so mindbogglingly critical that risk must be removed at any cost. Equally, if that were the case, the business would be either bringing in an experienced team or writing a blank cheque to an external party, not giving it to the guy who changes passwords and has spent the last week putting together a jigsaw of every enterprise option out there, and getting an "n+1" tattoo inside his eyelids.
</p><p>
Finally, 7 years isn't exactly old. We've got a subsidiary company of just that size (150 users, 10 branches) running on Proliant 1600/2500/5500 gear (i.e. '90s) which we consider capable for the job, which includes Oracle 8, Citrix MF plus a dozen or so more apps and users on current hardware. We have the occasional hardware fault, which a maintenance provider can address same day and bill us at ad-hoc rates, yet we still see only a couple of thousand dollars a year in maintenance, leaving us content that this old junk is still appropriate no matter which way we look at it.</p></htmltext>
<tokenext>What I see going on here , as others have touched on , is someone who does n't realise that he 's dealing with a small environment , even by my ( Australian ) standards , where I 'm frequently in awe of the kinds of scale that the US and Europe consider commonplace .
If the current system has been acceptable for 7 years , I 'm guessing the users ' needs are n't something so mindbogglingly critical that risk must be removed at any cost .
Equally , if that were the case , the business would be either bringing in an experienced team or writing a blank cheque to an external party , not giving it to the guy who changes passwords and has spent the last week putting together a jigsaw of every enterprise option out there , and getting an " n + 1 " tattoo inside his eyelids .
Finally , 7 years is n't exactly old .
We 've got a subsidiary company of just that size ( 150 users , 10 branches ) running on Proliant 1600/2500/5500 gear ( i.e. '90s ) which we consider capable for the job , which includes Oracle 8 , Citrix MF plus a dozen or so more apps and users on current hardware .
We have the occasional hardware fault , which a maintenance provider can address same day and bill us at ad-hoc rates , yet we still see only a couple of thousand dollars a year in maintenance , leaving us content that this old junk is still appropriate no matter which way we look at it .</tokenext>
<sentencetext>What I see going on here, as others have touched on, is someone who doesn't realise that he's dealing with a small environment, even by my (Australian) standards, where I'm frequently in awe of the kinds of scale that the US and Europe consider commonplace.
If the current system has been acceptable for 7 years, I'm guessing the users' needs aren't something so mindbogglingly critical that risk must be removed at any cost.
Equally, if that were the case, the business would be either bringing in an experienced team or writing a blank cheque to an external party, not giving it to the guy who changes passwords and has spent the last week putting together a jigsaw of every enterprise option out there, and getting an "n+1" tattoo inside his eyelids.
Finally, 7 years isn't exactly old.
We've got a subsidiary company of just that size (150 users, 10 branches) running on Proliant 1600/2500/5500 gear (i.e. '90s) which we consider capable for the job, which includes Oracle 8, Citrix MF plus a dozen or so more apps and users on current hardware.
We have the occasional hardware fault, which a maintenance provider can address same day and bill us at ad-hoc rates, yet we still see only a couple of thousand dollars a year in maintenance, leaving us content that this old junk is still appropriate no matter which way we look at it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191038</id>
	<title>Insurance...</title>
	<author>Anonymous</author>
	<datestamp>1258826160000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>1) Buy a comprehensive insurance policy<br>2) Write a detailed implementation plan that you copied from a Google search<br>3) Wait the 3-6 months the plan calls out before actual "work" begins<br>4) Burn down the building using a homeless person as the shill<br>5) Submit an emergency "continuity" plan that you wanted to deploy all along<br>6) Implement the new plan in one third the time of the original plan<br>7) Come in under budget by 38.3%<br>8) Hire a whole new help desk at half the budgeted payroll (52.7% savings)<br>9) Speak at the board meeting about the challenges you overcame to save the company<br>10) Graciously accept the position of CIO</p><p>(send all paychecks and bonuses to a numbered bank account and retire to a non-extradition country) :)</p></htmltext>
<tokenext>1 ) Buy a comprehensive insurance policy 2 ) Write a detailed implementation plan that you copied from a Google search 3 ) Wait the 3-6 months the plan calls out before actual " work " begins 4 ) Burn down the building using a homeless person as the shill 5 ) Submit an emergency " continuity " plan that you wanted to deploy all along 6 ) Implement the new plan in one third the time of the original plan 7 ) Come in under budget by 38.3 % 8 ) Hire a whole new help desk at half the budgeted payroll ( 52.7 % savings ) 9 ) Speak at the board meeting about the challenges you overcame to save the company 10 ) Graciously accept the position of CIO ( send all paychecks and bonuses to a numbered bank account and retire to a non-extradition country ) : )</tokenext>
<sentencetext>1) Buy a comprehensive insurance policy 2) Write a detailed implementation plan that you copied from a Google search 3) Wait the 3-6 months the plan calls out before actual "work" begins 4) Burn down the building using a homeless person as the shill 5) Submit an emergency "continuity" plan that you wanted to deploy all along 6) Implement the new plan in one third the time of the original plan 7) Come in under budget by 38.3% 8) Hire a whole new help desk at half the budgeted payroll (52.7% savings) 9) Speak at the board meeting about the challenges you overcame to save the company 10) Graciously accept the position of CIO (send all paychecks and bonuses to a numbered bank account and retire to a non-extradition country) :)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189848</id>
	<title>Backup fabric/infrastructure</title>
	<author>mlts</author>
	<datestamp>1258813080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't forget, with all the shiny new servers, to have some sort of backup fabric in place for each and every one of them.</p><p>I'd focus on four backup levels:</p><p>Level 1, quick local "oh shit" image-based restores:  A drive attached to the machine where it can do images of the OS and (if the data is small) data volumes.  Then set up a backup program (the built-in one in Windows Server 2008 is excellent).  This way, if the machine tanks, you can do a fast bare-metal restore by booting the OS CD, pointing it to the backup volume, pointing out the new OS volume, clicking "restore", and walking off.</p><p>Level 2, a network backup server:  The server would be a machine with a large amount of disk, and a tape autochanger.  At the low end it would run Retrospect or Backup Exec; at the upper end, Networker, ArcServe, or TSM.  And it would do d2d2t backups, so grabbing the data from machines is fast and you can do the most with a backup window.  Then, with the tape array, make a rotation system factoring in offsites to Iron Mountain, as well as onsite backups.  Of course, this server would handle archiving, perhaps with a dedicated DLT-ICE (or similar WORM tech) drive for backups that can't be tampered with.</p><p>Level 3, offsite strategy:  If you need to have stuff up 24/7, consider a hot or warm site that can take over should something happen to the main site.  Even if you don't need an offsite server room, you do need offsite backup storage and rotation planning.  Usually this is Iron Mountain's domain, but it can't hurt to also have a tape safe on some leased company property only known by the top IT brass, just in case.</p><p>Level 4, the cloud:  Cloud storage is costly.  There are also security issues with it.  However, the advantage is that if your data center gets completely obliterated, the data is still accessible.  I'd recommend having some form of encryption (PGP comes to mind, perhaps on the cheap, TrueCrypt containers), and storing your core business tax data (Quickbooks/Peachtree) here.  You want to store what you need to recover the business, but you don't want to store too much, because you are paying lots of cash for it.  Last time I checked, compared to the monthly cost of a terabyte of cloud storage, buying an external 1TB drive each month was cheaper.  But you are paying for cloud storage's SLA and reliability.</p><p>I know backup fabric is usually the last thing on an IT department's mind, but it is VERY important, and may mean the company exists or doesn't exist when (not if) something happens.</p><p>Tailor this to your requirements and budget, of course.</p></htmltext>
<tokenext>Do n't forget , with all the shiny new servers , to have some sort of backup fabric in place for each and every one of them .
I 'd focus on four backup levels :
Level 1 , quick local " oh shit " image-based restores : A drive attached to the machine where it can do images of the OS and ( if the data is small ) data volumes .
Then set up a backup program ( the built-in one in Windows Server 2008 is excellent ) .
This way , if the machine tanks , you can do a fast bare-metal restore by booting the OS CD , pointing it to the backup volume , pointing out the new OS volume , clicking " restore " , and walking off .
Level 2 , a network backup server : The server would be a machine with a large amount of disk , and a tape autochanger .
At the low end it would run Retrospect or Backup Exec ; at the upper end , Networker , ArcServe , or TSM .
And it would do d2d2t backups , so grabbing the data from machines is fast and you can do the most with a backup window .
Then , with the tape array , make a rotation system factoring in offsites to Iron Mountain , as well as onsite backups .
Of course , this server would handle archiving , perhaps with a dedicated DLT-ICE ( or similar WORM tech ) drive for backups that ca n't be tampered with .
Level 3 , offsite strategy : If you need to have stuff up 24/7 , consider a hot or warm site that can take over should something happen to the main site .
Even if you do n't need an offsite server room , you do need offsite backup storage and rotation planning .
Usually this is Iron Mountain 's domain , but it ca n't hurt to also have a tape safe on some leased company property only known by the top IT brass , just in case .
Level 4 , the cloud : Cloud storage is costly .
There are also security issues with it .
However , the advantage is that if your data center gets completely obliterated , the data is still accessible .
I 'd recommend having some form of encryption ( PGP comes to mind , perhaps on the cheap , TrueCrypt containers ) , and storing your core business tax data ( Quickbooks/Peachtree ) here .
You want to store what you need to recover the business , but you do n't want to store too much because you are paying lots of cash for it .
Last time I checked , compared to the monthly cost of a terabyte of cloud storage , buying an external 1TB drive each month was cheaper .
But you are paying for cloud storage 's SLA and reliability .
I know backup fabric is usually the last thing on an IT department 's mind , but it is VERY important , and may mean the company exists or does n't exist when ( not if ) something happens .
Tailor this to your requirements and budget , of course .</tokenext>
<sentencetext>Don't forget, with all the shiny new servers, to have some sort of backup fabric in place for each and every one of them.
I'd focus on four backup levels:
Level 1, quick local "oh shit" image-based restores: A drive attached to the machine where it can do images of the OS and (if the data is small) data volumes.
Then set up a backup program (the built-in one in Windows Server 2008 is excellent).
This way, if the machine tanks, you can do a fast bare-metal restore by booting the OS CD, pointing it to the backup volume, pointing out the new OS volume, clicking "restore", and walking off.
Level 2, a network backup server: The server would be a machine with a large amount of disk, and a tape autochanger.
At the low end it would run Retrospect or Backup Exec; at the upper end, Networker, ArcServe, or TSM.
And it would do d2d2t backups, so grabbing the data from machines is fast and you can do the most with a backup window.
Then, with the tape array, make a rotation system factoring in offsites to Iron Mountain, as well as onsite backups.
Of course, this server would handle archiving, perhaps with a dedicated DLT-ICE (or similar WORM tech) drive for backups that can't be tampered with.
Level 3, offsite strategy: If you need to have stuff up 24/7, consider a hot or warm site that can take over should something happen to the main site.
Even if you don't need an offsite server room, you do need offsite backup storage and rotation planning.
Usually this is Iron Mountain's domain, but it can't hurt to also have a tape safe on some leased company property only known by the top IT brass, just in case.
Level 4, the cloud: Cloud storage is costly.
There are also security issues with it.
However, the advantage is that if your data center gets completely obliterated, the data is still accessible.
I'd recommend having some form of encryption (PGP comes to mind, perhaps on the cheap, TrueCrypt containers), and storing your core business tax data (Quickbooks/Peachtree) here.
You want to store what you need to recover the business, but you don't want to store too much because you are paying lots of cash for it.
Last time I checked, compared to the monthly cost of a terabyte of cloud storage, buying an external 1TB drive each month was cheaper.
But you are paying for cloud storage's SLA and reliability.
I know backup fabric is usually the last thing on an IT department's mind, but it is VERY important, and may mean the company exists or doesn't exist when (not if) something happens.
Tailor this to your requirements and budget, of course.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189002</id>
	<title>openVZ</title>
	<author>RiotingPacifist</author>
	<datestamp>1258805760000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>For services running on Linux, openVZ can be used as a jail with migration capabilities instead of a full-on VM.</p><p>DISCLAIMER: I don't have a job, so I've read about this but not used it in a pro environment yet.</p></htmltext>
<tokenext>For services running on Linux , openVZ can be used as a jail with migration capabilities instead of a full-on VM .
DISCLAIMER : I do n't have a job , so I 've read about this but not used it in a pro environment yet .</tokenext>
<sentencetext>For services running on Linux, openVZ can be used as a jail with migration capabilities instead of a full-on VM.
DISCLAIMER: I don't have a job, so I've read about this but not used it in a pro environment yet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193190</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>NotBornYesterday</author>
	<datestamp>1258904040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Virtualization isn't always done for redundancy.  Virtualizing everything onto one server makes perfect sense if the goal is server consolidation &amp; energy savings.  Just make sure that management understands that.</htmltext>
<tokenext>Virtualization is n't always done for redundancy .
Virtualizing everything onto one server makes perfect sense if the goal is server consolidation &amp; energy savings .
Just make sure that management understands that .</tokenext>
<sentencetext>Virtualization isn't always done for redundancy.
Virtualizing everything onto one server makes perfect sense if the goal is server consolidation &amp; energy savings.
Just make sure that management understands that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189016</id>
	<title>Google(tm) Cloud</title>
	<author>ickleberry</author>
	<datestamp>1258805880000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext>Outsource everything to "de cloud", because that way when everything fails spectacularly it isn't your fault.</htmltext>
<tokenext>Outsource everything to " de cloud " , because that way when everything fails spectacularly it is n't your fault .</tokenext>
<sentencetext>Outsource everything to "de cloud", because that way when everything fails spectacularly it isn't your fault.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192522</id>
	<title>Re:Microsoft Essential Business Server</title>
	<author>lukas84</author>
	<datestamp>1258895460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>EBS sucks. Really. It would be nice if you could just buy the bundle of the "full" products at the same price as EBS, but the way EBS is currently structured it's a nightmare.</p><p>Even SBS is an extremely complex product to handle, with lots of special cases to consider, since a lot comes preintegrated and everything is a slight bit different compared to its standalone counterparts.</p><p>And as it is right now, SBS and EBS are outdated - both still ship with Exchange 2007 and WS08, and from what I've heard so far it will take months if not years till we see them both shipping WS08R2 and Exchange 2010.</p><p>If the OP is running Microsoft, EBS would be a bad choice.</p></htmltext>
<tokenext>EBS sucks .
Really .
It would be nice if you could just buy the bundle of the " full " products at the same price as EBS , but the way EBS is currently structured it 's a nightmare .
Even SBS is an extremely complex product to handle , with lots of special cases to consider , since a lot comes preintegrated and everything is a slight bit different compared to its standalone counterparts .
And as it is right now , SBS and EBS are outdated - both still ship with Exchange 2007 and WS08 , and from what I 've heard so far it will take months if not years till we see them both shipping WS08R2 and Exchange 2010 .
If the OP is running Microsoft , EBS would be a bad choice .</tokenext>
<sentencetext>EBS sucks.
Really.
It would be nice if you could just buy the bundle of the "full" products at the same price as EBS, but the way EBS is currently structured it's a nightmare.
Even SBS is an extremely complex product to handle, with lots of special cases to consider, since a lot comes preintegrated and everything is a slight bit different compared to its standalone counterparts.
And as it is right now, SBS and EBS are outdated - both still ship with Exchange 2007 and WS08, and from what I've heard so far it will take months if not years till we see them both shipping WS08R2 and Exchange 2010.
If the OP is running Microsoft, EBS would be a bad choice.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190152</id>
	<title>Separate data centres</title>
	<author>David Gerard</author>
	<datestamp>1258815300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At least for external services like www. Big red buttons do get pushed. I worked at one company where the big red button in the data centre got pushed, all power went off immediately (the big red button is for fire safety and must cut ALL power) and the Oracle DB got trashed, taking them off air for four days; their customers were not happy. They got religion about redundancy.

</p><p>Redundancy is one of those things like backups, support contracts, software freedom, etc. that management don't realise how much you need until you get bitten in the arse by the lack of it. You clearly get it, which is good.

</p><p>(I have a similar problem at present: an important dev machine has (a) no service redundancy (b) no disk redundancy. (b) is unlikely, (a) requires duplicating all services including a proprietary version control system onto another box. I'm going to have to switch on an old Ultra 60 that's been decommissioned. Argh.)</p></htmltext>
<tokenext>At least for external services like www .
Big red buttons do get pushed .
I worked at one company where the big red button in the data centre got pushed , all power went off immediately ( the big red button is for fire safety and must cut ALL power ) and the Oracle DB got trashed , taking them off air for four days ; their customers were not happy .
They got religion about redundancy .
Redundancy is one of those things like backups , support contracts , software freedom , etc. that management do n't realise how much you need until you get bitten in the arse by the lack of it .
You clearly get it , which is good .
( I have a similar problem at present : an important dev machine has ( a ) no service redundancy ( b ) no disk redundancy .
( b ) is unlikely , ( a ) requires duplicating all services including a proprietary version control system onto another box .
I 'm going to have to switch on an old Ultra 60 that 's been decommissioned .
Argh. )</tokenext>
<sentencetext>At least for external services like www.
Big red buttons do get pushed.
I worked at one company where the big red button in the data centre got pushed, all power went off immediately (the big red button is for fire safety and must cut ALL power) and the Oracle DB got trashed, taking them off air for four days; their customers were not happy.
They got religion about redundancy.
Redundancy is one of those things like backups, support contracts, software freedom, etc. that management don't realise how much you need until you get bitten in the arse by the lack of it.
You clearly get it, which is good.
(I have a similar problem at present: an important dev machine has (a) no service redundancy (b) no disk redundancy.
(b) is unlikely, (a) requires duplicating all services including a proprietary version control system onto another box.
I'm going to have to switch on an old Ultra 60 that's been decommissioned.
Argh.)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190030</id>
	<title>Simple solution: vmware + amazon as a backup</title>
	<author>mveloso</author>
	<datestamp>1258814400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you have external access at your offices, leave everything as-is. Image everything, and use Amazon as a backup machine. Simple, low-cost, and basically on-demand.</p><p>More info about the setup would be good, but if everything's been running, don't touch it - back it up.</p></htmltext>
<tokenext>If you have external access at your offices , leave everything as-is .
Image everything , and use Amazon as a backup machine .
Simple , low-cost , and basically on-demand .
More info about the setup would be good , but if everything 's been running , do n't touch it - back it up .</tokenext>
<sentencetext>If you have external access at your offices, leave everything as-is.
Image everything, and use Amazon as a backup machine.
Simple, low-cost, and basically on-demand.
More info about the setup would be good, but if everything's been running, don't touch it - back it up.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191912</id>
	<title>Re:What 150 users?</title>
	<author>Anonymous</author>
	<datestamp>1258882380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Depending on the offices and whether they're their own, tapes may not be a good idea. In fact, many geoscientists ship data around on external hard drives. Tape is bad because not all locations have a tape drive or multiple drives for every tape format (let alone recovery software). Second is cost: 1TB external drives are cheap. Third: you can work right off the drive if it's just a single user.</p></htmltext>
<tokenext>Depending on the offices and whether they 're their own , tapes may not be a good idea .
In fact , many geoscientists ship data around on external hard drives .
Tape is bad because not all locations have a tape drive or multiple drives for every tape format ( let alone recovery software ) .
Second is cost : 1TB external drives are cheap .
Third : you can work right off the drive if it 's just a single user .</tokenext>
<sentencetext>Depending on the offices and whether they're their own, tapes may not be a good idea.
In fact, many geoscientists ship data around on external hard drives.
Tape is bad because not all locations have a tape drive or multiple drives for every tape format (let alone recovery software).
Second is cost: 1TB external drives are cheap.
Third: you can work right off the drive if it's just a single user.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189090</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798</id>
	<title>Cloud Computing(TM)</title>
	<author>Anonymous</author>
	<datestamp>1258804560000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext>VMWare servers.  Distributed SANs.  Services spread over the cluster with full failover.  Multiple connecting switches for iSCSI and the SAN controllers.
<p>
E</p></htmltext>
<tokenext>VMWare servers .
Distributed SANs .
Services spread over the cluster with full failover .
Multiple connecting switches for iSCSI and the SAN controllers .
E</tokenext>
<sentencetext>VMWare servers.
Distributed SANs.
Services spread over the cluster with full failover.
Multiple connecting switches for iSCSI and the SAN controllers.
E</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189496</id>
	<title>A possibly helpful response</title>
	<author>Anonymous</author>
	<datestamp>1258809960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm a systems admin at a small college with about 1000 desktop machines in the buildings. We were a strictly Sun/Solaris shop for a long time, but in the last couple years we've invested in some 1U dual-processor Xeon boxes. These run Ubuntu Server and Xen. We're in the process of moving services from physical Solaris servers to virtual Xen servers. Two x86 servers can basically replace our old 16-server Sun rack. We'll likely keep our storage array around for a while, but so far LDAP, email, and web services have been migrated. DHCP and DNS could easily be migrated, and if you buy 2U servers with enough large hard drives, a separate storage array probably wouldn't be necessary.</p></htmltext>
<tokenext>I 'm a systems admin at a small college with about 1000 desktop machines in the buildings .
We were a strictly Sun/Solaris shop for a long time , but in the last couple years we 've invested in some 1U dual processor Xeon boxes .
These run Ubuntu Server and Xen .
We 're in the process of moving services from physical Solaris servers to virtual Xen servers .
Two x86 servers can basically replace our old 16 server Sun rack .
We 'll likely keep our storage array around for a while , but so far LDAP , email , and web services have been migrated .
DHCP and DNS could easily be migrated , and if you buy 2U servers with enough large hard drives , a separate storage array probably would n't be necessary .</tokenext>
<sentencetext>I'm a systems admin at a small college with about 1000 desktop machines in the buildings.
We were a strictly Sun/Solaris shop for a long time, but in the last couple years we've invested in some 1U dual processor Xeon boxes.
These run Ubuntu Server and Xen.
We're in the process of moving services from physical Solaris servers to virtual Xen servers.
Two x86 servers can basically replace our old 16 server Sun rack.
We'll likely keep our storage array around for a while, but so far LDAP, email, and web services have been migrated.
DHCP and DNS could easily be migrated, and if you buy 2U servers with enough large hard drives, a separate storage array probably wouldn't be necessary.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194072</id>
	<title>Best Practice - Backup</title>
	<author>1s44c</author>
	<datestamp>1258910640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>First, back everything up.<br>Second, test the backups.<br>Third, ensure there is good monitoring on everything important.<br>Only then should you think about upgrades.</p><p>I can't believe nobody else has said this.</p></htmltext>
<tokenext>First , back everything up .
Second , test the backups .
Third , ensure there is good monitoring on everything important .
Only then should you think about upgrades .
I ca n't believe nobody else has said this .</tokenext>
<sentencetext>First, back everything up.
Second, test the backups.
Third, ensure there is good monitoring on everything important.
Only then should you think about upgrades.
I can't believe nobody else has said this.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200230</id>
	<title>Re:Affordable SME Solution</title>
	<author>Slashcrap</author>
	<datestamp>1258971840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Xen with Remus or HA is the thinking man's setup</p></div><p>Presumably the man is thinking "Holy shit, they only announced this last week and it's still pre-alpha. Am I fucking insane?".</p><p>The answer is yes, yes he is.</p>
	</htmltext>
<tokentext>Xen with Remus or HA is the thinking man 's setupPresumably the man is thinking " Holy shit , they only announced this last week and it 's still pre-alpha .
Am I fucking insane ?
" .The answer is yes , yes he is .</tokentext>
<sentencetext>Xen with Remus or HA is the thinking man's setupPresumably the man is thinking "Holy shit, they only announced this last week and it's still pre-alpha.
Am I fucking insane?
".The answer is yes, yes he is.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188994</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30246478</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>ckaminski</author>
	<datestamp>1259344080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The downside of virtualizing 3 or 4 boxes onto one is that you lose some amount of independence.  If you lose one machine, you lose all of the hosted VMs, so you absolutely need some VM host high-availability.<br><br>But the beauty of the VM approach is it doesn't have to be an all-or-nothing: build a two-three host virtual host network, and migrate (p2v) your hosts as time permits.  In the end your utilization goes up, your physical plant costs (capital+runtime expense) go down.<br><br>I'm biased - I've used virtualization for the better part of 10 years now, and I'm 100% sold on it.  I've used it for big businesses and small SOHOs.  The SOHOs are where the biggest value was seen (VMware Server).<br><br>I'm with you on the cloud.  While the idea of the cloud is amorphous, the value of virtualization is not necessarily so.  You can certainly go overboard (SANs plus multiple cluster interconnects and networks), but you can get a decent two host redundant configuration for virtualization for under $1000.  It'll require you to use Linux and Xen, but it's definitely doable.</htmltext>
<tokentext>The downside of virtualizing 3 or 4 boxes onto one is that you lose some amount of independence .
If you lose one machine , you lose all of the hosted VMs , so you absolutely need some VM host high-availability.But the beauty of the VM approach is it does n't have to be an all-or-nothing : build a two-three host virtual host network , and migrate ( p2v ) your hosts as time permits .
In the end your utilization goes up , your physical plant costs ( capital + runtime expense ) go down.I 'm biased - I 've used virtualization for the better part of 10 years now , and I 'm 100 % sold on it .
I 've used it for big businesses and small SOHOs .
The SOHOs are where the biggest value was seen ( VMware Server ) .I 'm with you on the cloud .
While the idea of the cloud is amorphous , the value of virtualization is not necessarily so .
You can certainly go overboard ( SANs plus multiple cluster interconnects and networks ) , but you can get a decent two host redundant configuration for virtualization for under $ 1000 .
It 'll require you to use Linux and Xen , but it 's definitely doable .</tokentext>
<sentencetext>The downside of virtualizing 3 or 4 boxes onto one is that you lose some amount of independence.
If you lose one machine, you lose all of the hosted VMs, so you absolutely need some VM host high-availability.But the beauty of the VM approach is it doesn't have to be an all-or-nothing: build a two-three host virtual host network, and migrate (p2v) your hosts as time permits.
In the end your utilization goes up, your physical plant costs (capital+runtime expense) go down.I'm biased - I've used virtualization for the better part of 10 years now, and I'm 100% sold on it.
I've used it for big businesses and small SOHOs.
The SOHOs are where the biggest value was seen (VMware Server).I'm with you on the cloud.
While the idea of the cloud is amorphous, the value of virtualization is not necessarily so.
You can certainly go overboard (SANs plus multiple cluster interconnects and networks), but you can get a decent two host redundant configuration for virtualization for under $1000.
It'll require you to use Linux and Xen, but it's definitely doable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191346</id>
	<title>whitewiz</title>
	<author>Anonymous</author>
	<datestamp>1258830120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I work with VMware daily, so I am biased but also experienced.<br>The problems you have currently:<br>-can't get replacement parts for 7-year-old servers<br>-if something fails you can't buy a server like the one you have to restore onto; the data is still retrievable, it'll just take longer<br>-you have no like-hardware test environment</p><p>How a virtual environment will help you: let's say 2 current model servers and a piece of shared disk<br>-p2v is more efficient than re-installing on bare metal<br>-2 servers provide redundancy for ALL the virtual machines<br>-disaster recovery is now hardware independent<br>-you can snap-shot and roll-back upgrades that fail<br>-you can add more resources (cpu) by adding another server<br>-you can provision new virtual servers easily<br>-you can fix the hardware during business hours!</p></htmltext>
<tokentext>I work with VMware daily , so I am biased but also experienced.The problems you have currently ; -ca n't get replacement parts for 7-year-old servers-if something fails you ca n't buy a server like the one you have to restore onto ; the data is still retrievable it 'll just take longer-you have no like-hardware test environmentHow a virtual environment will help you ; let 's say 2 current model servers and a piece of shared disk-p2v is more efficient than re-installing on bare metal-2 servers provide redundancy for ALL the virtual machines-disaster recovery is now hardware independent-you can snap-shot and roll-back upgrades that fail-you can add more resources ( cpu ) by adding another server-you can provision new virtual servers easily-you can fix the hardware during business hours !</tokentext>
<sentencetext>I work with VMware daily, so I am biased but also experienced.The problems you have currently;-can't get replacement parts for 7-year-old servers-if something fails you can't buy a server like the one you have to restore onto; the data is still retrievable it'll just take longer-you have no like-hardware test environmentHow a virtual environment will help you;  let's say 2 current model servers and a piece of shared disk-p2v is more efficient than re-installing on bare metal-2 servers provide redundancy for ALL the virtual machines-disaster recovery is now hardware independent-you can snap-shot and roll-back upgrades that fail-you can add more resources (cpu) by adding another server-you can provision new virtual servers easily-you can fix the hardware during business hours!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192518</id>
	<title>Re:Real question</title>
	<author>torune</author>
	<datestamp>1258895280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Obviously hired by someone more clueless than he.</htmltext>
<tokentext>Obviously hired by someone more clueless than he .</tokentext>
<sentencetext>Obviously hired by someone more clueless than he.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189150</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876</id>
	<title>I'd say</title>
	<author>pele</author>
	<datestamp>1258805040000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>don't touch anything if it's been up and running for the past 7 years. if you really must replicate then get some more cheap boxes and replicate. it's cheaper and faster than virtual anything. if you must. but 150 users doesn't warrant anything in my opinion. I'd rather invest in backup links (from different companies) between offices. you can bond them for extra throughput.</p></htmltext>
<tokentext>do n't touch anything if it 's been up and running for the past 7 years .
if you really must replicate then get some more cheap boxes and replicate .
it 's cheaper and faster than virtual anything .
if you must .
but 150 users does n't warrant anything in my opinion .
I 'd rather invest in backup links ( from different companies ) between offices .
you can bond them for extra throughput .</tokentext>
<sentencetext>don't touch anything if it's been up and running for the past 7 years.
if you really must replicate then get some more cheap boxes and replicate.
it's cheaper and faster than virtual anything.
if you must.
but 150 users doesn't warrant anything in my opinion.
I'd rather invest in backup links (from different companies) between offices.
you can bond them for extra throughput.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195048</id>
	<title>ModularIT</title>
	<author>Anonymous</author>
	<datestamp>1258917780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><a href="http://www.modularit.org/" title="modularit.org" rel="nofollow">ModularIT</a> [modularit.org] is what you are looking for. Every service runs in a different virtual machine on one or more physical servers, there is a web interface and you can move machines between physical servers. Open source. Developers are friendly and based in Canary Islands (Spain).</p></htmltext>
<tokentext>ModularIT [ modularit.org ] is what you are looking for .
Every service runs in a different virtual machine on one or more physical servers , there is a web interface and you can move machines between physical servers .
Open source .
Developers are friendly and based in Canary Islands ( Spain ) .</tokentext>
<sentencetext>ModularIT [modularit.org] is what you are looking for.
Every service runs in a different virtual machine on one or more physical servers, there is a web interface and you can move machines between physical servers.
Open source.
Developers are friendly and based in Canary Islands (Spain).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188924</id>
	<title>balancing act</title>
	<author>TheSHAD0W</author>
	<datestamp>1258805340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Beware of load balancing, because it will tempt you into getting too little capacity for mission-critical work.  You need enough capacity to handle the entire load with multiple nodes down, or you will be courting a cascade failure.  Load balancing is better than fallback, because you will be constantly testing all of the hardware and software setups and will discover problems before an emergency strikes; but do make sure you've got the overcapacity needed to take up the slack when bad things happen.</p></htmltext>
<tokentext>Beware of load balancing , because it will tempt you into getting too little capacity for mission-critical work .
You need enough capacity to handle the entire load with multiple nodes down , or you will be courting a cascade failure .
Load balancing is better than fallback , because you will be constantly testing all of the hardware and software setups and will discover problems before an emergency strikes ; but do make sure you 've got the overcapacity needed to take up the slack when bad things happen .</tokentext>
<sentencetext>Beware of load balancing, because it will tempt you into getting too little capacity for mission-critical work.
You need enough capacity to handle the entire load with multiple nodes down, or you will be courting a cascade failure.
Load balancing is better than fallback, because you will be constantly testing all of the hardware and software setups and will discover problems before an emergency strikes; but do make sure you've got the overcapacity needed to take up the slack when bad things happen.</sentencetext>
</comment>
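The overcapacity warning in the comment above reduces to simple arithmetic: with N balanced nodes and k tolerated failures, the surviving N - k nodes must carry the whole load, so steady-state per-node utilization has to stay below (N - k)/N. A small sketch of that calculation (the function names are mine, not the commenter's):

```python
import math

def nodes_needed(total_load: float, node_capacity: float, tolerated_failures: int) -> int:
    """Smallest node count that still carries the full load with
    `tolerated_failures` nodes down: enough survivors, plus the spares."""
    surviving = math.ceil(total_load / node_capacity)
    return surviving + tolerated_failures

def safe_utilization(n_nodes: int, tolerated_failures: int) -> float:
    """Maximum steady-state per-node utilization that avoids a cascade
    when `tolerated_failures` nodes fail at once."""
    return (n_nodes - tolerated_failures) / n_nodes
```

For example, a load of 300 units on 100-unit nodes with one tolerated failure needs four nodes, and those four must normally run at no more than 75% each; exceeding that is exactly the cascade-failure trap the comment describes.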
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30210454</id>
	<title>Re:I'd say</title>
	<author>GWBasic</author>
	<datestamp>1259001960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>if you really must replicate then get some more cheap boxes and replicate. it's cheaper and faster than virtual anything</p></div><p>Uhm, no.  Physical-to-Virtual, also known as P2V, can turn an existing physical box into a VM with minimal or no downtime.</p>
	</htmltext>
<tokentext>if you really must replicate then get some more cheap boxes and replicate .
it 's cheaper and faster than virtual anythingUhm , no .
Physical-to-Virtual , also known as P2V , can turn an existing physical box into a VM with minimal or no downtime .</tokentext>
<sentencetext>if you really must replicate then get some more cheap boxes and replicate.
it's cheaper and faster than virtual anythingUhm, no.
Physical-to-Virtual, also known as P2V, can turn an existing physical box into a VM with minimal or no downtime.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>Foofoobar</author>
	<datestamp>1258805760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Note that he did say VMWare on a cluster. I have an idiot at my office trying to do VMWare all on one server and failing to realize this still creates one point of failure. If you are going to do virtualization, the only benefit comes when you invest in a cluster; otherwise don't do it at all.</htmltext>
<tokentext>Note that he did say VMWare on a cluster .
I have an idiot at my office trying to do VMWare all on one server and failing to realize this still creates one point of failure .
If you are going to do virtualization , the only benefit comes when you invest in a cluster otherwise do n't do it at all .</tokentext>
<sentencetext>Note that he did say VMWare on a cluster.
I have an idiot at my office trying to do VMWare all on one server and failing to realize this still creates one point of failure.
If you are going to do virtualization, the only benefit comes when you invest in a cluster otherwise don't do it at all.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30199898</id>
	<title>Only 150 people and you need multiple servers?</title>
	<author>Anonymous</author>
	<datestamp>1259007300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A single Mac Mini easily serves multiple (in my case 72) domains, e-mail/calendaring, VPN, DNS, FTP, chat, iPhone push notification, web services, SMB domain controller and more.  I don't think the CPU utilization has ever gone over 10%.  Easy to cluster together since it uses Dovecot for e-mail.</p><p>Unless you have an app that sucks CPU, you don't need a blade chassis.  That kind of thing's for special-purpose apps, not generic infrastructure.</p></htmltext>
<tokentext>A single Mac Mini easily serves multiple ( in my case 72 ) domains , e-mail/calendaring , VPN , DNS , FTP , chat , iPhone push notification , web services , SMB domain controller and more .
I do n't think the CPU utilization has ever gone over 10 % .
Easy to cluster together since it uses DoveCot for e-mail.Unless you have an app that sucks CPU , you do n't need a blade chassis .
That kind of thing 's for special-purpose apps , not generic infrastructure .</tokentext>
<sentencetext>A single Mac Mini easily serves multiple (in my case 72) domains, e-mail/calendaring, VPN, DNS, FTP, chat, iPhone push notification, web services, SMB domain controller and more.
I don't think the CPU utilization has ever gone over 10%.
Easy to cluster together since it uses DoveCot for e-mail.Unless you have an app that sucks CPU, you don't need a blade chassis.
That kind of thing's for special-purpose apps, not generic infrastructure.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194894</id>
	<title>Re:Don't do it</title>
	<author>Anonymous</author>
	<datestamp>1258916400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...bbbut that would kill his plan to implement RDD (resume driven development) as part of his CAS (career advancement strategy).</p></htmltext>
<tokentext>...bbbut that would kill his plan to implement RDD ( resume driven development ) as part of his CAS ( career advancement strategy ) .</tokentext>
<sentencetext>...bbbut that would kill his plan to implement RDD (resume driven development) as part of his CAS (career advancement strategy).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189012</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195602</id>
	<title>Go with simple.</title>
	<author>ricks03</author>
	<datestamp>1258922160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I too inherited an aging infrastructure, and have mostly replaced all of, well, everything, with mostly what you're talking about, so have faced some of the decisions you're looking at, and used VMWare for much of that upgrade.

<p>Home Office (in this context):
Dual VMware servers, each generally running these VM instances:
<br>System:
<br>Guest #1: Windows 2008: Domain controller, DHCP, DNS, WINS
<br>Guest #2: CentOS: Radius
<br>Guest #3: CentOS: WWW, FTP

</p><p>The network is a dual link running BGP, with VPNs to each of the remote sites, which have their own server for DNS (a slave) and DHCP (in case the VPN link is down).

</p><p>Using VMWare for services that aren't redundant as well. All VMs back up to the other VMWare server (with Ranger) so I can bring up guest VMs if their VMWare server fails. Virtualization gives me very easy DR (instead of having to recover an OS, I only have to recover a VM), easy hardware upgrades (migrate the VM), and generally the services are redundant for OS and hardware maintenance so I can patch and reboot without disrupting most services.

</p><p>More complex than that in practice, but you get the idea.</p></htmltext>
<tokentext>I too inherited an aging infrastructure , and have mostly replaced all of , well , everything , with mostly what you 're talking about , so have faced some of the decisions you 're looking at , and used VMWare for much of that upgrade .
Home Office ( in this context ) : Dual vmware servers , each having generally the VM instances : System : Guest # 1 : Windows 2008 : Domain controller , DHCP , DNS , WINS Guest # 2 : CentOS : Radius Guest # 3 : CentOS : WWW , FTP Network a dual link running BGP , with VPNs to each of the remote sites , which have their own server for DNS ( a slave ) and DHCP ( in case the VPN link is down ) .
Using VMWare for services that are n't redundant as well .
All VMs back up to the other VMWare server ( with Ranger ) so I can bring up guest VMs if their VMWare server fails .
Virtualization gives me very easy DR ( instead of having to recover an OS , I only have to recover a VM ) , easy hardware upgrades ( migrate the VM ) , and generally the services are redundant for OS and hardware maintenance so I can patch and reboot without disrupting most services .
More complex than that in practice , but you get the idea .</tokentext>
<sentencetext>I too inherited an aging infrastructure, and have mostly replaced all of, well, everything, with mostly what you're talking about, so have faced some of the decisions you're looking at, and used VMWare for much of that upgrade.
Home Office (in this context):
Dual vmware servers, each having generally the VM instances:
System:
Guest #1: Windows 2008: Domain controller, DHCP, DNS, WINS
Guest #2: CentOS: Radius
Guest #3: CentOS: WWW, FTP

Network a dual link running BGP, with VPNs to each of the remote sites, which have their own server for DNS (a slave) and DHCP (in case the VPN link is down).
Using VMWare for services that aren't redundant as well.
All VMs back up to the other VMWare server (with Ranger) so I can bring up guest VMs if their VMWare server fails.
Virtualization gives me very easy DR (instead of having to recover an OS, I only have to recover a VM), easy hardware upgrades (migrate the VM), and generally the services are redundant for OS and hardware maintenance so I can patch and reboot without disrupting most services.
More complex than that in practice, but you get the idea.</sentencetext>
</comment>
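The pattern in the comment above, where each remote site keeps a local slave DNS/DHCP server and falls back to a peer over the VPN, boils down to "use the first healthy server in a local-first list." A tiny sketch of that selection logic, with the health probe injected so it can be exercised without a network; all names and addresses here are illustrative, not from the thread:

```python
import socket
from typing import Callable, Iterable, Optional

def tcp_probe(host: str, port: int = 53, timeout: float = 1.0) -> bool:
    """Crude reachability check: can we open a TCP connection?
    (DNS is normally UDP, so this is only a rough liveness proxy.)"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server(servers: Iterable[str], probe: Callable[[str], bool]) -> Optional[str]:
    """Return the first server that answers the health probe, or None.
    Order `servers` local-first so a remote peer is only used as fallback."""
    for server in servers:
        if probe(server):
            return server
    return None
```

In practice a resolver list ordered local-first (e.g. the office's own slave, then the home office over the VPN) gets you most of this behavior for free; the explicit probe is only needed when you want to fail over faster than client timeouts allow.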
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190364</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>Robert Larson</author>
	<datestamp>1258817220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'd tend to agree here. Buy a couple of blades. Implement vSphere with DRS and HA and possibly FT. Centralize all these core services. HA/FT will provide the fault tolerance at the core. Then spend on buffing redundant network links for remote sites and/or network capacity as needed. Simplify simplify. Minimize the number of VMs providing core services. Put as much as you can into a cloud.</htmltext>
<tokentext>I 'd tend to agree here .
Buy a couple of blades .
Implement vSphere with DRS and HA and possibly FT. Centralize all these core services .
HA/FT will provide the fault tolerance at the core .
Then spend on buffing redundant network links for remote sites and/or network capacity as needed .
Simplify simplify .
Minimize the number of VMs providing core services .
Put as much as you can into a cloud .</tokentext>
<sentencetext>I'd tend to agree here.
Buy a couple of blades.
Implement vSphere with DRS and HA and possibly FT. Centralize all these core services.
HA/FT will provide the fault tolerance at the core.
Then spend on buffing redundant network links for remote sites and/or network capacity as needed.
Simplify simplify.
Minimize the number of VMs providing core services.
Put as much as you can into a cloud.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189168</id>
	<title>Simple and straightforward = complex</title>
	<author>sphealey</author>
	<datestamp>1258807260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>So let's see if I understand:  you want to take a simple, straightforward, easy-to-understand architecture with no single points of failure that would be very easy to recover in the event of a problem and extremely easy to recreate at a different site in a few hours in the event of a disaster, and replace it with a vastly more complex system that uses tons of shiny new buzzwords.  All to serve 150 end users for whom you have quantified no complaints related to the architecture other than it might need to be sped up a bit (or perhaps find a GUI interface for the ftp server, etc).</p><p>This should turn out well.</p><p>sPh</p><p>As far as "distributed redundant system" goes, I strongly suggest you read Moans Nogood's essay "<a href="http://wedonotuse.blogspot.com/2007/04/so-few-really-need-uptime.html" title="blogspot.com">You Don't Need High Availability</a> [blogspot.com]" and think very deeply about it before proceeding.</p></htmltext>
<tokentext>So let 's see if I understand : you want to take a simple , straightforward , easy-to-understand architecture with no single points of failure that would be very easy to recover in the event of a problem and extremely easy to recreate at a different site in a few hours in the event of a disaster , and replace it with a vastly more complex system that uses tons of shiny new buzzwords .
All to serve 150 end users for whom you have quantified no complaints related to the architecture other than it might need to be sped up a bit ( or perhaps find a GUI interface for the ftp server , etc ) .This should turn out well.sPhAs far as " distributed redundant system " , strongly suggested you read Moans Nogood 's essay " You Do n't Need High Availability [ blogspot.com ] " and think very deeply about it before proceeding .</tokentext>
<sentencetext>So let's see if I understand:  you want to take a simple, straightforward, easy-to-understand architecture with no single points of failure that would be very easy to recover in the event of a problem and extremely easy to recreate at a different site in a few hours in the event of a disaster, and replace it with a vastly more complex system that uses tons of shiny new buzzwords.
All to serve 150 end users for whom you have quantified no complaints related to the architecture other than it might need to be sped up a bit (or perhaps find a GUI interface for the ftp server, etc).This should turn out well.sPhAs far as "distributed redundant system", strongly suggested you read Moans Nogood's essay "You Don't Need High Availability [blogspot.com]" and think very deeply about it before proceeding.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192482</id>
	<title>You need time to plan...</title>
	<author>Anonymous</author>
	<datestamp>1258894500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Go Microsoft. Easiest way</p></htmltext>
<tokentext>Go Microsoft .
Easiest way</tokentext>
<sentencetext>Go Microsoft.
Easiest way</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194252</id>
	<title>Disaster Recovery Solution</title>
	<author>Anonymous</author>
	<datestamp>1258911900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Instead of configuring a complicated redundant network for such a small number of users, I think you would have better luck implementing a backup/disaster recovery service similar to this: http://www.zenitharca.com/</p></htmltext>
<tokentext>Instead of configuring a complicated redundant network for such a small number of users , I think you would have better luck implementing a backup/disaster recovery service similar to this : http : //www.zenitharca.com/</tokentext>
<sentencetext>Instead of configuring a complicated redundant network for such a small amount of users, I think you would have better luck implementing a backup/disaster recovery service similar to this: http://www.zenitharca.com/</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194192</id>
	<title>Re:P2V and consolidate</title>
	<author>masdog</author>
	<datestamp>1258911540000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>VMWare converter is free, and it works with ESXi.<br> <br>

<a href="http://www.vmware.com/products/converter/overview.html" title="vmware.com">Check it out here.</a> [vmware.com]</htmltext>
<tokentext>VMWare converter is free , and it works with ESXi .
Check it out here .
[ vmware.com ]</tokentext>
<sentencetext>VMWare converter is free, and it works with ESXi.
Check it out here.
[vmware.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189448</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>Anonymous</author>
	<datestamp>1258809660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Poppycock.  You can buy small form factor single core PCs for under $200, or even a refurbished 3-4 year old server box for close to the same price.  Depending on the environmental and space considerations, you can pick the platforms to suit and keep the costs minimal.  Shoot, even a $200 netbook would have more cpu power and storage than most 7 year old computers, generate little or no heat, and demand a fraction of the power.  If this guy is smart, he can cut electrical costs and cooling costs substantially without changing a perfectly functional architecture.</p><p>What doesn't make sense is grossly overcomplicating things by trying to shove too much into some large scale platform and then further complicate it with a virtualization layer.  We gave up mainframes, and thin clients/fat servers didn't work, for a reason.</p><p>Sure, it's cool and technically challenging.  What's the business reason/driver for going the cool/challenging route again?</p><p>If the OP decides to quit 2 months after implementing his super cool setup because the job after that is completely boring, who can come in and grasp what he's set up and maintain/upgrade it?  Another finicky tech guru that wants to play with the stuff on the job and gets bored and walks off a couple of months later?</p></htmltext>
<tokentext>Poppycock .
You can buy small form factor single core pc 's for under $ 200 , or even a refurbished 3-4 year old server box for close to the same price .
Depending on the environmental and space considerations , you can pick the platforms to suit and keep the costs minimal .
Shoot , even a $ 200 netbook would have more cpu power and storage than most 7 year old computers , generate little or no heat , and demand a fraction of the power .
If this guy is smart , he can cut electrical costs and cooling costs substantially without changing a perfectly functional architecture.What doesnt make sense is grossly overcomplicating things by trying to shove too much into some large scale platform and then further complicate it with a virtualization layer .
We gave up mainframes and thin clients/fat servers didnt work for a reason.Sure , its cool and technically challenging .
Whats the business reason/driver for going the cool/challenging route again ? If the OP decides to quit 2 months after implementing his super cool setup because the job after that is completely boring , who can come in and grasp what he 's set up and maintain/upgrade it ?
Another finicky tech guru that wants to play with the stuff on the job and gets bored and walks off a couple of months later ?</tokentext>
<sentencetext>Poppycock.
You can buy small form factor single-core PCs for under $200, or even a refurbished 3-4 year old server box for close to the same price.
Depending on the environmental and space considerations, you can pick the platforms to suit and keep the costs minimal.
Shoot, even a $200 netbook would have more CPU power and storage than most 7-year-old computers, generate little or no heat, and demand a fraction of the power.
If this guy is smart, he can cut electrical and cooling costs substantially without changing a perfectly functional architecture. What doesn't make sense is grossly overcomplicating things by trying to shove too much into some large-scale platform and then further complicating it with a virtualization layer.
We gave up mainframes, and thin clients/fat servers didn't work, for a reason. Sure, it's cool and technically challenging.
What's the business reason/driver for going the cool/challenging route again? If the OP decides to quit 2 months after implementing his super-cool setup because the job after that is completely boring, who can come in and grasp what he's set up and maintain/upgrade it?
Another finicky tech guru who wants to play with the stuff on the job and gets bored and walks off a couple of months later?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189144</id>
	<title>Upgrade vs Overhaul?</title>
	<author>turtleshadow</author>
	<datestamp>1258806900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Really, what you're being unspecific about is the difference between an upgrade and an overhaul.</p><p>From the floor up (power, cooling, cabling, footprint) is an overhaul.<br>If you want a phased or some other piecemeal approach, you still have to consider each piece a small overhaul within a larger system.</p><p>7-year-old equipment is likely not going to be cascaded, so really you're considering it as a candidate for a heart transplant, which means building some sort of life support while the new system (heart) is brought online in parallel. This is very expensive in time, budget, and resources.</p><p>You're really going to need to know your business's processes over the course of more than a "business year" so as to do everything without problems.</p><p>Business moments like tax time, EOY reports, monthly invoicing periods, and HR/payroll are to be expected and must still function.<br>Unpredictables like supporting business audits (like having to pull up old records, on systems that no longer read them?) and changes in executive leadership also would impact an upgrade/overhaul.</p><p><b> At no time did you ever mention a disaster recovery plan, regular offsite backup strategy, or a business continuity plan. These are often overlooked or dealt with inappropriately during normal business times and should be verified prior to beginning. A major overhaul or upgrade could, or ought to, trigger any one of these at any moment. </b></p><p>I have been there, and I have been there when everyone in the room craps in their pants when the tapes have been found to be lost or unreadable or blank.</p></htmltext>
<tokenext>Really what your being unspecific about is the difference between upgrade versus an overhaul.From the floor up ( power , cooling , cabling , footprint ) is an overhaul.If you want a phase approach or some other piecemeal approach still you have to consider each a small overhaul within a larger system.7 year old equipment is likely not going to be cascaded so really your considering it as candidate for heart transplant which means building a some sort of life support while the new system ( heart ) is brought on line in parallel .
This is very expensive in time , budget , and resources.Your really going to know your business ' processes over the course of more than a " business year " so as to do everything without problems.Business moments like tax time , EOY reports , monthly invoicing periods , HR/payroll are to be expected and must still function.Un predictables like supporting business audits ( like having to pull up old records , on systems that no longer read them ?
) and changes in executive leadership also would impact an upgrade/overhaul .
At no time did you ever mention disaster recovery plan , regular offsite backup strategy or a business continuity plan .
These are often overlooked or dealt with inappropriately during normal business times and should be verified prior to beginning .
A major overhaul or upgrade could or ought to trigger any one of these at any moment .
I have been there , and I have been there when everyone in the room craps in their pants when the tapes have been found to be lost or unreadable or blank .</tokentext>
<sentencetext>Really, what you're being unspecific about is the difference between an upgrade and an overhaul. From the floor up (power, cooling, cabling, footprint) is an overhaul. If you want a phased or some other piecemeal approach, you still have to consider each piece a small overhaul within a larger system. 7-year-old equipment is likely not going to be cascaded, so really you're considering it as a candidate for a heart transplant, which means building some sort of life support while the new system (heart) is brought online in parallel.
This is very expensive in time, budget, and resources. You're really going to need to know your business's processes over the course of more than a "business year" so as to do everything without problems. Business moments like tax time, EOY reports, monthly invoicing periods, and HR/payroll are to be expected and must still function. Unpredictables like supporting business audits (like having to pull up old records, on systems that no longer read them?
) and changes in executive leadership also would impact an upgrade/overhaul.
At no time did you ever mention a disaster recovery plan, regular offsite backup strategy, or a business continuity plan.
These are often overlooked or dealt with inappropriately during normal business times and should be verified prior to beginning.
A major overhaul or upgrade could, or ought to, trigger any one of these at any moment.
I have been there, and I have been there when everyone in the room craps in their pants when the tapes have been found to be lost or unreadable or blank.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191918</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>Anonymous</author>
	<datestamp>1258882500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>No, you need separate servers for when the DHCP upgrade requires a library that conflicts with the DNS servers, which you don't want to upgrade at the same time.</p><p>THIS is where virtualization becomes useful.</p><p>On the other hand, my solution is a couple of FreeBSD boxes with jails for each service.  You could do the same with whatever the Linux equivalent is, or Solaris zones if you want.  No need to actually run VMs.</p><p>Just run a couple of boxes and separate the services into different jails.  When you need to upgrade the core OS, do it on your backup box first, get all the services upgraded, switch it to be your primary, and repeat on the other.</p><p>It's not a matter of config files, it's a matter of dependencies.  If you've never run into a dependency conflict, you don't have much experience.  Upgrading every service at the same time isn't always an option; sometimes newer versions in repositories are broken with regard to something you use or need.</p></htmltext>
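The standby-first upgrade rotation described in this comment can be sketched in a few lines. This is an illustration only, not from the comment: the box names and the `upgrade`/`promote` callables are hypothetical, and in practice "promote" would mean moving a shared IP or DNS record to the freshly upgraded box.

```python
# Sketch of the two-box rotation: upgrade the standby first, promote it,
# then repeat the upgrade on the former primary.

def rolling_upgrade(primary, standby, upgrade, promote):
    """Upgrade both boxes, standby first, and return the action log."""
    steps = []
    # 1. Upgrade the backup box (and its jailed services) while the
    #    primary keeps serving traffic.
    upgrade(standby)
    steps.append(("upgrade", standby))
    # 2. Switch roles: the upgraded box becomes the primary.
    promote(standby)
    steps.append(("promote", standby))
    primary, standby = standby, primary
    # 3. Repeat the upgrade on the former primary, now the standby.
    upgrade(standby)
    steps.append(("upgrade", standby))
    return steps

log = rolling_upgrade("boxA", "boxB",
                      upgrade=lambda box: None, promote=lambda box: None)
```

At no point is an un-upgraded box both primary and mid-upgrade, which is the point of the ordering.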
<tokenext>No , you need seperate servers for when the DHCP upgrade requires a conflicting library with the DNS servers which you do n't want to upgrade at the same time.THIS is where virtualization becomes useful.On the other hand , my solutions is a couple of FreeBSD boxes with jails for each service .
You could do the same with whatever the Linux equivalent is , or Solaris zones if you want .
No need to actually run VMs.Just run a couple boxes , seperate the services onto different jails .
When you need to upgrade the core OS , do it on your backup box first , get all the services upgraded , switch it to your primary and repeat on the other.Its not a matter of config files , its a matter of dependencies .
If you 've never run into a dependency conflict , you do n't have much experience .
Upgrading every service at the same time is n't always an option , sometimes newer versions in repositories are broken with regards to something you use or need .</tokentext>
<sentencetext>No, you need separate servers for when the DHCP upgrade requires a library that conflicts with the DNS servers, which you don't want to upgrade at the same time. THIS is where virtualization becomes useful. On the other hand, my solution is a couple of FreeBSD boxes with jails for each service.
You could do the same with whatever the Linux equivalent is, or Solaris zones if you want.
No need to actually run VMs. Just run a couple of boxes and separate the services into different jails.
When you need to upgrade the core OS, do it on your backup box first, get all the services upgraded, switch it to be your primary, and repeat on the other. It's not a matter of config files, it's a matter of dependencies.
If you've never run into a dependency conflict, you don't have much experience.
Upgrading every service at the same time isn't always an option; sometimes newer versions in repositories are broken with regard to something you use or need.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189312</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200628</id>
	<title>Re:Probably forgo virtualization</title>
	<author>buchanmilne</author>
	<datestamp>1258979580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If the administration 'team' has equal access to all the services today on disparate servers, I don't think virtualization is necessarily a good idea; the services can be consolidated in a single OS instance.</p></div><p>Even if they all can run on the same OS instance, do you really want a large database query killing your DNS recursion? If they were separate VMs, then memory pressure on the database VM wouldn't impact the DNS VM (or, as much).</p><p>If you require different OSs for other reasons (e.g., some Windows, some Unix) then virtualisation is a requirement if you want to reduce box count.</p><div class="quote"><p>In terms of HA, put two relatively low-end boxes in each branch (you said 7-year-old servers were fine, so high-end is overkill).  Read up on Linux-HA, which is free, and use DRBD to get total redundancy in your storage as well as a cheap software mirror or RAID 5.  Some may rightfully question the need for HA, but this approach is pretty dirt cheap at low scale.</p></div><p>1) Just install CentOS, or the distribution of your choice that ships Red Hat Cluster and a suitable hypervisor<br>2) Install DRBD, cluster, and configure GFS on top of DRBD for storage of VM base images and VM configuration files<br>3) Choose Xen or KVM as hypervisor<br>4) Install VMs (Windows, Linux, etc.) using the virt-manager GUI tool<br>5) See that you can now migrate VMs between physical servers without service interruption, and VM recovery can occur in seconds (if a physical server failed). CentOS probably won't have it quite yet, but Xen can now do real-time state replication, so in future even unplanned downtime on a physical machine will be without impact.</p><p>If you can fit it in your budget (which you should be able to, having spent nothing on virtualisation software), buy decent servers which have remote management cards (e.g. HP iLO, Dell DRAC, Sun ILOM). Not only is it convenient (e.g. being able to boot into recovery remotely if you ever need it), but cluster operation will be more reliable if you use these for fencing.</p><p>While this may be a bit more complex than typical "Linux-HA", the benefits are worth it. In an environment I was involved in until recently, we had a virtualisation cluster running VM pairs which were clustered. In the past 6 months, the virtualisation layer (including GFS, cluster on the physicals, Xen, etc.) has not failed, while the clustered service running on the VMs has, numerous times. The most likely action that will be taken to fix this is to remove the clustering between VMs and rely almost exclusively on virtualisation for HA.</p><p>This might not be "Best Practice", but it can provide best of breed and bang for buck for a small investment of time, which can be recovered for the next site.</p>
	</htmltext>
<tokenext>If the administration 'team ' has equal access to all the services today on disparate servers , I do n't think virtualization is necessarily a good idea , the services can be consolodated in a single OS instance.Even if they all can run on the same OS instance , do you really want a large database query killing your DNS recursion .
If they were separate VMs , then memory pressure on the database VM would n't impact the DNS VM ( or , as much ) .If you require different OSs for other reasons ( e.g. , some Windows , some Unix ) then virtualisation is a requirement if you want to reduce box count.In terms of HA , put two relatively low end boxes in each branch ( you said 7 year old servers were fine , so high end is overkill ) .
Read up on linux HA which is free , and use DRBD to get total redundancy in your storage as well as a cheap software mirror or raid 5 .
Some may rightfully question the need for HA , but this approach is pretty dirt cheap at low scale.1 ) Just install CentOS , or the distribution of your choice that ships Red Hat Cluster and a suitable hypervisor2 ) Install DRBD , cluster , and configure GFS on top of DRBD for storage of VM base images and VM configuration files3 ) Choose Xen or KVM as hypervisor4 ) Install VMs ( Windows , Linux etc .
) using the virt-manager GUI tool5 ) See that you can now migrate VMs between physical servers without service interruption , and VM recovery can occur in seconds ( if a physical server failed ) .
CentOS probably wo n't have it quite yet , but Xen can now do real-time state replication , so in future even unplanned downtime on a physical machine will be without impactIf you can fit it in your budget ( which you should be able to , having spent nothing on virtualisation software ) , buy decent servers which have remote management cards ( e.g .
HP iLO , Dell DRAC , Sun ILOM ) .
Not only is it convenient ( e.g .
being able to boot into recovery remotely if you ever need it ) , but cluster operation will be more reliable if you use these for fencing.While this may be a bit more complex than typical " Linux HA " , the benefits are worth it .
In an environment I was involved in until recently , we had a virtualisation cluster running VM pairs which were clustered .
In the past 6 months , the virtualisation layer ( including GFS , cluster on the physicals , Xen etc .
) has not failed , while the clustered service running on the VMs has numerous times .
The most likely action that will be taken to fix this is to remove the clustering between VMs , to rely almost exclusively on virtualisation for HA.This might not be " Best Practice " , but it can provide best of breed and bang for buck for a small investment of time , which can be recovered for the next site .</tokentext>
<sentencetext>If the administration 'team' has equal access to all the services today on disparate servers, I don't think virtualization is necessarily a good idea; the services can be consolidated in a single OS instance. Even if they all can run on the same OS instance, do you really want a large database query killing your DNS recursion?
If they were separate VMs, then memory pressure on the database VM wouldn't impact the DNS VM (or, as much). If you require different OSs for other reasons (e.g., some Windows, some Unix) then virtualisation is a requirement if you want to reduce box count. In terms of HA, put two relatively low-end boxes in each branch (you said 7-year-old servers were fine, so high-end is overkill).
Read up on Linux-HA, which is free, and use DRBD to get total redundancy in your storage as well as a cheap software mirror or RAID 5.
Some may rightfully question the need for HA, but this approach is pretty dirt cheap at low scale. 1) Just install CentOS, or the distribution of your choice that ships Red Hat Cluster and a suitable hypervisor. 2) Install DRBD, cluster, and configure GFS on top of DRBD for storage of VM base images and VM configuration files. 3) Choose Xen or KVM as hypervisor. 4) Install VMs (Windows, Linux, etc.) using the virt-manager GUI tool. 5) See that you can now migrate VMs between physical servers without service interruption, and VM recovery can occur in seconds (if a physical server failed).
CentOS probably won't have it quite yet, but Xen can now do real-time state replication, so in future even unplanned downtime on a physical machine will be without impact. If you can fit it in your budget (which you should be able to, having spent nothing on virtualisation software), buy decent servers which have remote management cards (e.g. HP iLO, Dell DRAC, Sun ILOM).
Not only is it convenient (e.g. being able to boot into recovery remotely if you ever need it), but cluster operation will be more reliable if you use these for fencing. While this may be a bit more complex than typical "Linux-HA", the benefits are worth it.
In an environment I was involved in until recently, we had a virtualisation cluster running VM pairs which were clustered.
In the past 6 months, the virtualisation layer (including GFS, cluster on the physicals, Xen, etc.) has not failed, while the clustered service running on the VMs has, numerous times.
The most likely action that will be taken to fix this is to remove the clustering between VMs and rely almost exclusively on virtualisation for HA. This might not be "Best Practice", but it can provide best of breed and bang for buck for a small investment of time, which can be recovered for the next site.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189766</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190280</id>
	<title>Most of the posters don't 'get it'</title>
	<author>plopez</author>
	<datestamp>1258816320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>The question is not about hardware or configuration. It is about best practices. This is a higher level process question. Not an implementation question.</p></htmltext>
<tokenext>The question is not about hardware or configuration .
It is about best practices .
This is a higher level process question .
Not an implementation question .</tokentext>
<sentencetext>The question is not about hardware or configuration.
It is about best practices.
This is a higher level process question.
Not an implementation question.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190686</id>
	<title>Combine some &amp; keep some separate - find savings</title>
	<author>jvin248</author>
	<datestamp>1258821000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>
Assuming you have seven-year-old Microsoft OS boxes, switching over to a smaller number of current Linux OS boxes would be an improvement.  Many of the services you list can run in the same Linux box just as happily - without VMing them.  Others may warrant a dedicated box (email server with big HDD arrays).  For a small facility with only 150 users you've got a small budget and insignificant system loads. <br> <br>
However, if you want to make a more significant dent in operations, equipment costs and IT maintenance, look into client-server setups using LTSP.org - transfer all 150 fat-client users to thin clients (stripped-down current machines or new thin clients the size of desk phones) running on a few back-room servers.  Switch over the office phone system to something like Asterisk, etc.  Look into FreeNAS and m0n0wall/pfSense. Set up a Drupal or WordPress system to publish internal documents and/or to the Web.  Lots to keep you busy and productive besides those few old workhorses.</htmltext>
<tokenext>Assuming you have seven year old Microsoft OS boxes , then switching over to a fewer number of latest Linux OS boxes would be an improvement .
Many of the services you list can run in the same Linux box just as happily - without VMing them .
Others you may want a dedicated box ( email server with big HDD arrays ) .
For a small facility having only 150 users you 've got a small budget and insignificant system loads .
However , if you want to make a more significant dent in operations , equipment costs and IT maintenance , look into client-server setups using LTSP.org - transfer all fat-client based 150 users to thin clients ( stripped down current machines or new thin clients the size of desk phones ) running on a few back-room servers .
Switch over the office phone system to something like Asterisk etc .
Look into FreeNAS and m0n0wall/pfSense .
Set up a Drupal or Wordpress system to publish internal documents and/or to the Web .
Lot 's to keep you busy and productive besides those few old workhorses .</tokentext>
<sentencetext>
Assuming you have seven-year-old Microsoft OS boxes, switching over to a smaller number of current Linux OS boxes would be an improvement.
Many of the services you list can run in the same Linux box just as happily - without VMing them.
Others may warrant a dedicated box (email server with big HDD arrays).
For a small facility with only 150 users you've got a small budget and insignificant system loads.
However, if you want to make a more significant dent in operations, equipment costs and IT maintenance, look into client-server setups using LTSP.org - transfer all 150 fat-client users to thin clients (stripped-down current machines or new thin clients the size of desk phones) running on a few back-room servers.
Switch over the office phone system to something like Asterisk, etc.
Look into FreeNAS and m0n0wall/pfSense.
Set up a Drupal or WordPress system to publish internal documents and/or to the Web.
Lots to keep you busy and productive besides those few old workhorses.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189766</id>
	<title>Probably forgo virtualization</title>
	<author>Junta</author>
	<datestamp>1258812480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If the administration 'team' has equal access to all the services today on disparate servers, I don't think virtualization is necessarily a good idea; the services can be consolidated in a single OS instance.</p><p>In terms of HA, put two relatively low-end boxes in each branch (you said 7-year-old servers were fine, so high-end is overkill).  Read up on Linux-HA, which is free, and use DRBD to get total redundancy in your storage as well as a cheap software mirror or RAID 5.  Some may rightfully question the need for HA, but this approach is pretty dirt cheap at low scale.</p></htmltext>
<tokenext>If the administration 'team ' has equal access to all the services today on disparate servers , I do n't think virtualization is necessarily a good idea , the services can be consolodated in a single OS instance.In terms of HA , put two relatively low end boxes in each branch ( you said 7 year old servers were fine , so high end is overkill ) .
Read up on linux HA which is free , and use DRBD to get total redundancy in your storage as well as a cheap software mirror or raid 5 .
Some may rightfully question the need for HA , but this approach is pretty dirt cheap at low scale .</tokentext>
<sentencetext>If the administration 'team' has equal access to all the services today on disparate servers, I don't think virtualization is necessarily a good idea; the services can be consolidated in a single OS instance. In terms of HA, put two relatively low-end boxes in each branch (you said 7-year-old servers were fine, so high-end is overkill).
Read up on Linux-HA, which is free, and use DRBD to get total redundancy in your storage as well as a cheap software mirror or RAID 5.
Some may rightfully question the need for HA, but this approach is pretty dirt cheap at low scale.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189916</id>
	<title>Re:P2V and consolidate</title>
	<author>Anonymous</author>
	<datestamp>1258813620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Leave it to some asshole on slashdot to recommend server models, ram, and hard drive speed (!) without understanding a damn thing about anything.</p></htmltext>
<tokenext>Leave it to some asshole on slashdot to recommend server models , ram , and hard drive speed ( !
) without understanding a damn thing about anything .</tokentext>
<sentencetext>Leave it to some asshole on Slashdot to recommend server models, RAM, and hard drive speed (!) without understanding a damn thing about anything.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190776</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>mysidia</author>
	<datestamp>1258822500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
To have true high availability, even 2 VMware servers aren't enough; you need a reliable shared storage system that both servers can access.
</p><p>
Even then, the storage chassis itself will be a central point of failure.
To have true HA you need a pair of independent shared storage units with continuous synchronous replication and some reliable mechanism of failover.
</p><p>
But even without HA...
</p><p>
There are still benefits of running only on one server and using virtualization.
Getting higher utilization of a smaller volume of hardware still saves money, since you aren't running 10 servers sitting at 10% load all the time.
</p><p>
You can run multiple OSes.
</p><p>
You can run applications that require their own OS install.
For example: a domain controller can run on its own, without other apps running on the DC. The major apps each have their own server.
</p><p>
Finally, there are security benefits of isolating apps to their own server.   If one server is compromised,  it can be taken out of service without affecting the other apps.
</p><p>
You can run the bleeding edge server OS version only for the app that needs it, and run more stable code for other apps.
</p><p>
If one server crashes due to an OS bug, the others keep running.
</p><p>
The hypervisor itself is a thin OS, and if run on proper hardware is highly stable.
Driver issues are unlikely to bring down your servers, especially when utilizing advanced CPU features such as processor VT and IOMMU which provide sophisticated I/O and device isolation functions.
</p><p>
Of course, your hardware is a single point of failure.  But backups/disaster recovery is easier to manage in a virtual environment; you just use VCB to take regular copies of your VMDKs to a secondary piece of metal to prevent data loss.
</p></htmltext>
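The "reliable mechanism of failover" mentioned above usually means a heartbeat check with a tolerance for transient loss, so one dropped packet doesn't trigger a spurious VM takeover. A toy illustration (the threshold and names are made up, not from any real cluster stack):

```python
# Declare a peer dead only after several consecutive missed heartbeats.

MISSED_LIMIT = 3  # consecutive misses tolerated before failing over

def should_fail_over(heartbeats, limit=MISSED_LIMIT):
    """heartbeats: list of booleans, True = heartbeat received that tick.

    Returns True once `limit` heartbeats in a row have been missed,
    i.e. the point at which a standby would take over the VMs.
    """
    missed = 0
    for beat in heartbeats:
        missed = 0 if beat else missed + 1
        if missed >= limit:
            return True
    return False

# A single lost heartbeat is ignored; a sustained outage triggers failover.
quiet = should_fail_over([True, False, True, False, False])   # False
outage = should_fail_over([True, False, False, False])        # True
```

Real stacks pair this with fencing so the "dead" node can't come back and write to shared storage mid-takeover.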
<tokenext>To have true high-availability , even 2 VMware servers is n't enough , you need a reliable shared storage system that both servers can access .
Even then , the storage chassis itself will be a central point of failure .
To have true HA you need a pair of independent shared storage units with continuous synchronous replication and some reliable mechanism of failover .
But even without HA.. . There are still benefits of running only on one server and using virtualization .
Getting higher utilization of a smaller volume of hardware still saves money , since you are n't running 10 servers sitting at 10 \ % load all the time .
You can run multiple OSes .
You can run applications that require their own OS install .
For example : domain controller can run on its own without other apps running on the DC .
The major apps have their own server Finally , there are security benefits of isolating apps to their own server .
If one server is compromised , it can be taken out of service without affecting the other apps .
You can run the bleeding edge server OS version only for the app that needs it , and run more stable code for other apps .
If one server crashes due to an OS bug , the others keep running .
The hypervisor itself is a thin OS , and if run on proper hardware is highly stable .
Driver issues are unlikely to bring down your servers , especially when utilizing advanced CPU features such as processor VT and IOMMU which provide sophisticated I/O and device isolation functions .
Of course , your hardware is a single point of failure .
But backups/disaster recovery is easier to manage in a virtual environment , you just VCB and regular copies of your VMDKs to a secondary piece of metal to prevent data loss .</tokentext>
<sentencetext>
To have true high availability, even 2 VMware servers aren't enough; you need a reliable shared storage system that both servers can access.
Even then, the storage chassis itself will be a central point of failure.
To have true HA you need a pair of independent shared storage units with continuous synchronous replication and some reliable mechanism of failover.
But even without HA...

There are still benefits of running only on one server and using virtualization.
Getting higher utilization of a smaller volume of hardware still saves money, since you aren't running 10 servers sitting at 10% load all the time.
You can run multiple OSes.
You can run applications that require their own OS install.
For example: a domain controller can run on its own, without other apps running on the DC.
The major apps each have their own server.

Finally, there are security benefits of isolating apps to their own server.
If one server is compromised,  it can be taken out of service without affecting the other apps.
You can run the bleeding edge server OS version only for the app that needs it, and run more stable code for other apps.
If one server crashes due to an OS bug, the others keep running.
The hypervisor itself is a thin OS, and if run on proper hardware is highly stable.
Driver issues are unlikely to bring down your servers, especially when utilizing advanced CPU features such as processor VT and IOMMU which provide sophisticated I/O and device isolation functions.
Of course,  your hardware is a single point of failure.
But backups/disaster recovery is easier to manage in a virtual environment; you just use VCB to take regular copies of your VMDKs to a secondary piece of metal to prevent data loss.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190294</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>AF_Cheddar_Head</author>
	<datestamp>1258816500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This guy has it right.</p><p>I do this kind of thing for a living, upgrading small military sites that support 50-100 users. Most of these sites haven't seen new hardware for several years and have a stand-alone AD. We provide new hardware and bring them into an integrated AD.</p><p>Start adding up the costs of VMware: I know ESXi is free, but you very quickly need/want the management tools of vSphere and they ain't cheap, and it is significantly cheaper to use non-virtual boxes combining compatible services.</p><p>2-4 servers and a small EqualLogic SAN can go a long way towards providing what you need. Less than 50K in hardware and software licenses.</p><p>Depending on connectivity and redundancy requirements, a DC at each site also providing internal DNS, DHCP and WINS (UGH!!), a mail server with a mail relay at the central office, and a File and Print server should do it. A VPN appliance (Cisco 5510) to put it all behind a firewall at corporate.</p><p>I provide a bit more redundancy and security for the military sites, but that's the basics.</p></htmltext>
<tokentext>This guy has it right.I do this kind of thing for a living , upgrading small military sites that support 50-100 users .
Most of these sites have n't seen new hardware for several years and have a stand-alone AD .
We provide new hardware and bring them into an integrated AD.Start adding up the costs of VMWare , I know ESXi is free but you very quickly need/want the management tools of VSphere and they ai n't cheap , and it is significantly cheaper to use not virtual boxes combining compatible services.2-4 servers and a small Equallogic SAN can go a long ways towards providing what you need .
Less than 50K in hardware and software licenses.Depending on connectivity and redundancy requirements a DC at each site also providing internal DNS , DHCP and WINS ( UGH ! !
) a mail server with a mail relay at the central office and a File and Print server should do it .
VPN appliance ( Cisco 5510 ) to put it all be a firewall at corporate.I provide a bit more redundancy and security for the military sites but that 's the basics .</tokentext>
<sentencetext>This guy has it right.I do this kind of thing for a living, upgrading small military sites that support 50-100 users.
Most of these sites haven't seen new hardware for several years and have a stand-alone AD.
We provide new hardware and bring them into an integrated AD.Start adding up the costs of VMWare, I know ESXi is free but you very quickly need/want the management tools of VSphere and they ain't cheap, and it is significantly cheaper to use not virtual boxes combining compatible services.2-4 servers and a small Equallogic SAN can go a long ways towards providing what you need.
Less than 50K in hardware and software licenses.Depending on connectivity and redundancy requirements a DC at each site also providing internal DNS, DHCP and WINS (UGH!!
) a mail server with a mail relay at the central office and a File and Print server should do it.
VPN appliance (Cisco 5510) to put it all be a firewall at corporate.I provide a bit more redundancy and security for the military sites but that's the basics.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189550</id>
	<title>One Box Per Service</title>
	<author>KalvinB</author>
	<datestamp>1258810440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Unless you have power problems or financial restrictions you're better off with dedicated boxes.  I currently run 3 old computers: Ubuntu, Windows XP, and Windows 2003, with Apache on XP running PHP sites and doing reverse proxy for the IIS server on the 2003 box.  Ubuntu handles memcache.  Because I'm not made out of money I'm going to virtualize all three systems onto one quad-core system, which will cost around $600 rather than $1800 for three new systems.  It'll also cut down on power usage.</p><p>Slowness can be caused by any number of issues.  An old hard drive can cause a system to be sluggish.  Just imaging the existing systems onto brand-new drives could make things better.  Upgrading the network to 1Gbit or just making sure the switches you have are performing could help.  Putting more memory into existing systems could also speed things up.</p><p>Make sure the power supplies are running well, fans aren't clogged with dust, and that proper cooling is in place.</p><p>If all else is not sufficient, progressively purchase new systems to replace old ones, and give the old ones to charity after 6 months once you're sure everything is good.</p></htmltext>
<tokentext>Unless you have power problems or financial restrictions you 're better off with dedicated boxes .
I currently run 3 old computers .
Ubuntu , Windows XP , Windows 2003 with Apache on XP running PHP sites and doing reverse proxy for the IIS server on the 2003 box .
Ubuntu handles memcache .
Because I 'm not made out of money I 'm going to virtualize all three systems onto one quad core system which will cost around $ 600 rather than $ 1800 for three new systems .
It 'll also cut down on power usage.Slowness can be caused by any number of issues .
An old harddrive can cause a system to be sluggish .
Just imaging the existing systems onto brand new drives could make things better .
Upgrading the network to 1Gbit or just making sure the switches you have are performing could help .
Putting more memory into existing systems could also speed things up.Make sure the power supplies are running well , fans are n't clogged with dust , and that proper cooling is in place.If all else is not sufficient , progressively purchase new systems to replace old ones and give the old ones to charity after 6 months to make sure everything is good .</tokentext>
<sentencetext>Unless you have power problems or financial restrictions you're better off with dedicated boxes.
I currently run 3 old computers.
Ubuntu, Windows XP, Windows 2003 with Apache on XP running PHP sites and doing reverse proxy for the IIS server on the 2003 box.
Ubuntu handles memcache.
Because I'm not made out of money I'm going to virtualize all three systems onto one quad core system which will cost around $600 rather than $1800 for three new systems.
It'll also cut down on power usage.Slowness can be caused by any number of issues.
An old harddrive can cause a system to be sluggish.
Just imaging the existing systems onto brand new drives could make things better.
Upgrading the network to 1Gbit or just making sure the switches you have are performing could help.
Putting more memory into existing systems could also speed things up.Make sure the power supplies are running well, fans aren't clogged with dust, and that proper cooling is in place.If all else is not sufficient, progressively purchase new systems to replace old ones and give the old ones to charity after 6 months to make sure everything is good.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193378</id>
	<title>If you want a virtual environment</title>
	<author>zipherx</author>
	<datestamp>1258905660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you want a virtual environment, which in my experience is really easy to administer, you need some sort of SAN or iSCSI environment. Then you have a base for attaching the needed computing power to this storage solution. It will be costly to start up, mostly because of the rather powerful switches you need to get. Those are easily 10K apiece.<br>
We just set up a brand-new virtual environment at my work (a university IT department serving about 5k people); the trick is really to get the infrastructure in place: network connectivity, backbone/power redundancy, etc. Then we are adding Dell R710 boxes with 50GB of RAM (we are upgrading all 5 of them to 128GB next year) and 2x quad-core Xeons; those are cheap, only about 7k apiece. The processing power of those new Nehalem Xeons is awesome! Can definitely recommend.<br> For a not-too-expensive SAN I would recommend Dell's EqualLogic boxes: they have all the new features while being robust and built redundant (2 storage controllers, PSUs, etc.), and the basic box with 40TB is about 70k.<br> <br>
Since the main concern in my eyes is your aging hardware, you need to migrate one way or the other. Maybe just P2V'ing the old stuff to a VM is not desirable if you need to update all the software. Otherwise it is an easy way to move your old servers in a convenient and safe way.

<br> <br> Good luck.</htmltext>
<tokentext>If you want a virtual environment , witch in my experience is really easy to administer , you need some sort of SAN or iSCSI environment .
Then you have a base for attaching the needed computing power to this storage solution .
It will be costly to start up , mostly be course of the rather powerful switches you need to get .
Those are easy 10K a piece .
We just set up a brand new virtual environment at my work ( university it department serving about 5k people ) , the trick is really to get the infrastructure in place , network connectivity , and backbone/power redundancy etc .
Then we are adding R710 Dell boxes , with 50GB ram ( we are upgrading all 5 of them to 128GB next year ) and 2x Quad core Xeons , those are cheap , only about 7k a piece .
The processing power of those new Nahelem Xeons are awesome !
Can definitely recommend .
For a not to expensive SAN i would recommend Dell 's Equilogic boxes , they have all the new features , while being robust and built redundant ( 2 storage controllers , psu 's etc ) , the basic box with 40TB is about 70k .
Since the main concern in my eyes are your aging hardware , you need to migrate one way or the other .
Maybe just P2V'ing the old stuff to a vm is not desirable , if you need to update all software .
Otherwise it is a easy way to move your old server in a convenient and safe way .
good luck .</tokentext>
<sentencetext>If you want a virtual environment, witch in my experience is really easy to administer, you need some sort of SAN or iSCSI environment.
Then you have a base for attaching the needed computing power to this storage solution.
It will be costly to start up, mostly be course of the rather powerful switches you need to get.
Those are easy 10K a piece.
We just set up a brand new virtual environment at my work (university it department serving about 5k people), the trick is really to get the infrastructure in place, network connectivity, and backbone/power redundancy etc.
Then we are adding R710 Dell boxes, with 50GB ram(we are upgrading all 5 of them to 128GB next year) and 2x Quad core Xeons, those are cheap, only about 7k a piece.
The processing power of those new Nahelem Xeons are awesome!
Can definitely recommend.
For a not to expensive SAN i would recommend Dell's Equilogic boxes, they have all the new features, while being robust and built redundant (2 storage controllers, psu's etc), the basic box with 40TB is about 70k.
Since the main concern in my eyes are your aging hardware, you need to migrate one way or the other.
Maybe just P2V'ing the old stuff to a vm is not desirable, if you need to update all software.
Otherwise it is a easy way to move your old server in a convenient and safe way.
good luck.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191876</id>
	<title>Re:openVZ</title>
	<author>Anonymous</author>
	<datestamp>1258881240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>dude, you could always set up different things in the basement and practice on them.</p></htmltext>
<tokentext>dude , you could always setup different things in the basement and practice on them .</tokentext>
<sentencetext>dude, you could always setup different things in the basement and practice on them.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190508</id>
	<title>Why all the VM hate?</title>
	<author>deadwill69</author>
	<datestamp>1258818780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't see what all the fuss is about VMs. They allow you to continue to run one service per "box" and cut down on the number of servers. Using VMs has allowed us to consolidate numerous lightly used, dedicated boxes. In turn, we have improved our failovers with VMware's management console and snapshots saved to a SAN. Near-instantaneous recovery without all the headaches. We still do tape and spinning-disk backups depending on how critical the machine's mission is. There are still a lot of services that best practice requires to have their own box, infrastructure services being the critical ones. All the rest do just fine virtualized. As for the remote offices, they shouldn't need more than slaved DHCP, DNS, LDAP/Active Directory, a gateway, and a firewall unless you're using the remote location for load balancing on web, connection redundancy, etc. We use an MPLS link to one of our remote offices for this ourselves.

HTH,

will</htmltext>
<tokentext>I do n't see what all the fuss is about vm 's .
It allows you to continue to run one service per " box " and cut down on the amount of servers .
Using vm 's has allowed us consolidate numerous slightly used , dedicated boxes .
In turn , we have improved out fail overs with vmware 's management console and snap shots saved to a SAN .
Near instantaneous recovery without all the head aches .
We still do tape and spinning disk backups depending on how critical the machine 's mission .
There are still a lot of services the best practices requires they have their own box : Infrastructure services being the critical one .
All the rest do just fine virtualized .
As for the remote offices , the should need more than slaved DHCP,DNS , LDAP/Active Directory , gateway , and a firewall unless your using the remote location for load balancing on web , connection redundancy , etc .
We use an MPLS to one of our remote office for this ourselves .
HTH , will</tokentext>
<sentencetext>I don't see what all the fuss is about vm's.
It allows you to continue to run one service per "box" and cut down on the amount of servers.
Using vm's has allowed us consolidate numerous slightly used, dedicated boxes.
In turn, we have improved out fail overs with vmware's management console and snap shots saved to a SAN.
Near instantaneous recovery without all the head aches.
We still do tape and spinning disk backups depending on how critical the machine's mission.
There are still a lot of services the best practices requires they have their own box:  Infrastructure services being the critical one.
All the rest do just fine virtualized.
As for the remote offices, the should need more than slaved DHCP,DNS, LDAP/Active Directory, gateway, and a firewall unless your using the remote location for load balancing on web, connection redundancy, etc.
We use an MPLS to one of our remote office for this ourselves.
HTH,

will</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195376</id>
	<title>Re:Microsoft Essential Business Server</title>
	<author>VTBlue</author>
	<datestamp>1258920420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you wanna give me some more info, I can push your feedback to the product team. Can you tell me exactly where EBS sucks around "integration"? The reason I ask is that I'm really hard-pressed to find any article, review, or customer who hates EBS or says it sucks. Your insight would be appreciated.</p></htmltext>
<tokentext>if you wan na give me some more info , i can push your feedback to the product team .
Can you tell me exactly where EBS sucks around " integration " ?
The reason I ask , is that I 'm really hard pressed to find any article , review , or customer who hates EBS or says it sucks .
Your insight would be appreciated .</tokentext>
<sentencetext>if you wanna give me some more info, i can push your feedback to the product team.
Can you tell me exactly where EBS sucks around "integration" ?
The reason I ask, is that I'm really hard pressed to find any article, review, or customer who hates EBS or says it sucks.
Your insight would be appreciated.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192522</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191438</id>
	<title>Go for the big iron</title>
	<author>Anonymous</author>
	<datestamp>1258831320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Personally I would recommend a mainframe implementation for the workloads you are suggesting.</p><p>If you're gonna overcook this, may as well go all in, eh?</p></htmltext>
<tokentext>Personally I would recommend a mainframe implementation for the workloads you are suggesting.If you 're gon na over cook this , may as well go all in , eh</tokentext>
<sentencetext>Personally I would recommend a mainframe implementation for the workloads you are suggesting.If you're gonna over cook this, may as well go all in, eh</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190766</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>syousef</author>
	<datestamp>1258822260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>You literally can't buy a server these days with less than 2 cores, and getting less than 4 is a challenge. </i></p><p>Does it matter how many cores? They're cheap! 4 times the chance of failure is my only issue. In any case it sounds like he could combine services WITHOUT the overhead of virtualization.</p><p><i>Even the other listed services probably cause negligible load. Most web servers sit there at 0.1% load most of the time, ditto with ftp, which tends to see only sporadic use.</i></p><p>Yes, but it's the rest of the time that actually counts. It doesn't matter if you can handle low-load periods if you can't handle high ones.</p><p><i>I think you'll find that the exact opposite of your quote is true: for 99% of corporate environments where virtualization is used, it is appropriate. In fact, it's under-used. Most places could save a lot of money by virtualizing more.</i></p><p>Virtualization has distinct advantages, and utilisation is certainly one of them, but if you require high availability and can't predict peak loads accurately (across all services simultaneously!) it may well not be appropriate. The bigger advantage of virtualisation is the ability to bring up your virtual machine on a completely different piece of hardware should your existing hardware fail. You can achieve something similar without virtualization, but I find that more compelling than the utilisation argument, which frankly is just a sales ploy in most cases.</p><p><i>I'm guessing you work for an organization where money grows on trees, and you can 'design' whatever the hell you want, and you get the budget for it, no matter how wasteful, right?</i></p><p>Yeah, that's why they're using the same infrastructure for 7 years running, right?</p><p>I hate it when slashdot descends into this kind of childish petty character attack. It's not conducive to a reasoned discussion.</p>
	</htmltext>
<tokentext>You literally ca n't buy a server these days with less than 2 cores , and getting less than 4 is a challenge .
Does it matter how many cores ?
They 're cheap !
4 times the chance of failure is my only issue .
In any case it sounds like he could combine services WITHOUT the overhead of visualization.Even the other listed services probably cause negligible load .
Most web servers sit there at 0.1 \ % load most of the time , ditto with ftp , which tends to see only sporadic use.Yes but it 's the rest of the time that actually counts .
It does n't matter if you can handle low load periods if you ca n't handle high.I think you 'll find that the exact opposite of your quote is true : for 99 \ % of corporate environments where actualization is used , it is appropriate .
In fact , it 's under-used .
Most places could save a lot of money by virtualizing more.Visualization has distinct advantages , and utilisation is certainly one advantage but if you require high availability and ca n't predict peak loads accurately ( across all services simultaneously !
) it may well not be appropriate .
The bigger advantage of virtualisation is the ability to bring up your virtual machine on a completely different piece of hardware should your existing hardware fail .
You can achieve similar without visualization , but I find that more compelling than the utilisation argument , which frankly is just a sales ploy for most cases.I 'm guessing you work for an organization where money grows on trees , and you can 'design ' whatever the hell you want , and you get the budget for it , no matter how wasteful , right ? Yeah that 's why they 're using the same infrastructure for 7 years running , right ? I hate it when slashdot descends into this kind of childish petty character attack .
It 's not conducive to a reasoned discussion .</tokentext>
<sentencetext>You literally can't buy a server these days with less than 2 cores, and getting less than 4 is a challenge.
Does it matter how many cores?
They're cheap!
4 times the chance of failure is my only issue.
In any case it sounds like he could combine services WITHOUT the overhead of visualization.Even the other listed services probably cause negligible load.
Most web servers sit there at 0.1\% load most of the time, ditto with ftp, which tends to see only sporadic use.Yes but it's the rest of the time that actually counts.
It doesn't matter if you can handle low load periods if you can't handle high.I think you'll find that the exact opposite of your quote is true: for 99\% of corporate environments where actualization is used, it is appropriate.
In fact, it's under-used.
Most places could save a lot of money by virtualizing more.Visualization has distinct advantages, and utilisation is certainly one advantage but if you require high availability and can't predict peak loads accurately (across all services simultaneously!
) it may well not be appropriate.
The bigger advantage of virtualisation is the ability to bring up your virtual machine on a completely different piece of hardware should your existing hardware fail.
You can achieve similar without visualization, but I find that more compelling than the utilisation argument, which frankly is just a sales ploy for most cases.I'm guessing you work for an organization where money grows on trees, and you can 'design' whatever the hell you want, and you get the budget for it, no matter how wasteful, right?Yeah that's why they're using the same infrastructure for 7 years running, right?I hate it when slashdot descends into this kind of childish petty character attack.
It's not conducive to a reasoned discussion.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193590</id>
	<title>Here is how I got some advice from a professional</title>
	<author>managerialslime</author>
	<datestamp>1258907100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Before you complete your plans for your upgrade path, you might want to hire a professional to review your infrastructure and assumptions.  That is just what I did.<p>

Before doing my upgrade, I wanted to be sure my infrastructure would be up-to-date with current standards.  The following 2-part document first qualifies the person giving advice and then presents 25 questions I needed that person to answer.

</p><p>
(As each of the 25 questions is covered on the CISSP exam, a competent consultant should be able to guide you in the right direction.)

</p><p>

Feel free to adjust the estimates of person-hours for each task.  The estimates below are for a company with about 50 servers, 50 network devices, and a WAN / MPLS covering a dozen offices across the US.
</p><p>

Good luck!

</p><p>

RFQ Goal:  THE COMPANY desires to contract with a consultant who will, on an annual basis, review THE COMPANY&rsquo;s compliance with its own security policies and standards.  The consultant will summarize their findings in a brief report, including any recommendations for future improvement.  In addition, as planning for a major upgrade is underway, additional recommendations for the upgraded system are expected.
</p><p>
Consultant Background: The consultant will be an individual skilled and experienced in this task.  The consultant will have no less than five years' experience in the information security field.
</p><p>
Credentials: The consultant must have at least one of the following credentials and furnish verification that the credential is current:
</p><p>* Certified Information Systems Security Professional (CISSP)
</p><p>* Certified Information Systems Auditor (CISA)
</p><p>* Certified Information Security Manager (CISM)
</p><p>Work to be Performed:
</p><p>* THE COMPANY will send the consultant a Purchase Order authorizing the start of the engagement.  Depending on consultant availability, the engagement is expected to take from four to ten weeks to complete.
</p><p>* Supporting material review:  Within two weeks of receiving a purchase order authorizing work to begin, the consultant will spend 6 to 8 hours reviewing any supporting materials provided by THE COMPANY (typically answers to prior security assessments) and developing follow-up questions.
</p><p>* Estimated consulting time:  8 hours.

</p><p>* Follow-up questions: Within four weeks of receiving a purchase order authorizing work to begin, the consultant will then email those questions to a designated contact at THE COMPANY and then read any answers that are returned.
</p><p>* Estimated consulting time:  2 hours.
</p><p>* Within six weeks of receiving a purchase order authorizing work to begin, the consultant will then spend up to 4 hours on-site at THE COMPANY&rsquo;s data center, asking questions to validate readings.
</p><p>* Estimated consulting and travel time: 8 hours.
</p><p>* Within six weeks of receiving a purchase order authorizing work to begin, the consultant will use an industry standard tool of their choosing and at their cost, to attempt a penetration test of THE COMPANY&rsquo;s system.
</p><p>* Estimated consulting time:  16 hours.
</p><p>* Within eight weeks of receiving a purchase order authorizing work to begin, the consultant will then use Microsoft Word to fill in a twenty-five question survey with their observations and recommendations and email their report to their contact at THE COMPANY.  Any question not applicable to a security assessment may be left blank.
</p><p>* Estimated consulting time: 2 hours.
</p><p>* Within nine weeks of receiving a purchase order authorizing work to begin, the consultant will conduct a conference call reviewing their findings.
</p><p>* Within ten weeks of receiving a purchase order authorizing work to begin, the consultant agrees to forward to THE COMPANY copies of all supporting documents and other working papers and products prepared on behalf of THE COMPANY, and also to provide THE COMPANY with an invoice for the amount agreed to in the Purchase Order.  THE COMPANY will pay the invoice within fifteen days.</p></htmltext>
<tokentext>Before you complete your plans for your upgrade path , you might want to hire a professional to review your infrastructure and assumptions .
That is just what I did .
Before doing my upgrade , I wanted to be sure my infrastructure would be up-to-date with current standards .
The following 2-part document first qualifies the person giving advice and then presents 25 questions I needed that person to answer .
( As each of the 254 questions are covered on the CISSP exam , a competent consultant should be able to guide you in the right direction .
) Feel free to adjust the estimates of person-hours for each task .
The estimates below are for a company with about 50 servers , 50 network devices , and a WAN / MPLS covering a dozen offices across the US .
Good luck !
RFQ Goal : THE COMPANY desires to contract with a consultant who will , on an annual basis , review THE COMPANY    s compliance with its own security policies and standards .
The consultant will summarize their findings in a brief report , including any recommendations for future improvement .
In addition , as planning for a major upgrade is underway , additional recommendations for the upgraded system are expected .
Consultant Background : The consultant will be an individual skilled and experienced in this task .
The consultant will have no less than five years experience in the information security field .
Credentials : The consultant must have at least one of the following credentials and furnish verification that the credential is current : * Certified Information Systems Security Professional ( CISSP ) * Certified Information Systems Auditor ( CISA ) * Certified Information Security Manager ( CISM ) Work to be Performed : * THE COMPANY will send the consultant a Purchase Order authorizing the start of the engagement .
Depending on consultant availability , the engagement is expected to take from four to ten weeks to compete .
* Supporting material review : Within two weeks of receiving a purchase order authorizing work to begin , the consultant will spend 6 to 8 hours reviewing any supporting materials provided by THE COMPANY ( typically answers to prior security assessments ) and developing follow-up questions .
* Estimated consulting time : 8 hours .
* Follow-up questions : Within four weeks of receiving a purchase order authorizing work to begin , the consultant will then email those questions to a designated contact at THE COMPANY and then read any answers that are returned .
* Estimated consulting time : 2 hours .
* Within six weeks of receiving a purchase order authorizing work to begin , the consultant will then spend up to 4 hours on-site at THE COMPANY    s data center , asking questions to validate readings .
* Estimated consulting and travel time : 8 hours .
* Within six weeks of receiving a purchase order authorizing work to begin , the consultant will use an industry standard tool of their choosing and at their cost , to attempt a penetration test of THE COMPANY    s system .
* Estimated consulting time : 16 hours .
* Within eight weeks of receiving a purchase order authorizing work to begin , the consultant will then use Microsoft Word to fill in a twenty-five question survey with their observations and recommendations and email their report to their contact at THE COMPANY .
Any question not applicable to a security assessment may be left blank .
* Estimated consulting time : 2 hours .
* Within nine weeks of receiving a purchase order authorizing work to begin , the consultant will conduct a conference call reviewing their findings .
* Within ten weeks of receiving a purchase order authorizing work to begin , the consultant will The agrees to forward to THE COMPANY copies of all supporting documents and other working papers and products performed on behalf of THE COMPANY , and also provide THE COMPANY with an invoice for the amount agreed to in the Purchase Order .
THE COMPANY will pay the invoice within fifteen days .</tokentext>
<sentencetext>Before you complete your plans for your upgrade path, you might want to hire a professional to review your infrastructure and assumptions.
That is just what I did.
Before doing my upgrade, I wanted to be sure my infrastructure would be up-to-date with current standards.
The following 2-part document first qualifies the person giving advice and then presents 25 questions I needed that person to answer.
(As each of the 254 questions are covered on the CISSP exam, a competent consultant should be able to guide you in the right direction.
)



Feel free to adjust the estimates of person-hours for each task.
The estimates below are for a company with about 50 servers, 50 network devices, and a WAN / MPLS covering a dozen offices across the US.
Good luck!
RFQ Goal:  THE COMPANY desires to contract with a consultant who will, on an annual basis, review THE COMPANY’s compliance with its own security policies and standards.
The consultant will summarize their findings in a brief report, including any recommendations for future improvement.
In addition, as planning for a major upgrade is underway, additional recommendations for the upgraded system are expected.
Consultant Background: The consultant will be an individual skilled and experienced in this task.
The consultant will have no less than five years experience in the information security field.
Credentials: The consultant must have at least one of the following credentials and furnish verification that the credential is current:
* Certified Information Systems Security Professional (CISSP)
* Certified Information Systems Auditor (CISA)
* Certified Information Security Manager (CISM)
Work to be Performed:
* THE COMPANY will send the consultant a Purchase Order authorizing the start of the engagement.
Depending on consultant availability, the engagement is expected to take from four to ten weeks to complete.
* Supporting material review:  Within two weeks of receiving a purchase order authorizing work to begin, the consultant will spend 6 to 8 hours reviewing any supporting materials provided by THE COMPANY (typically answers to prior security assessments) and developing follow-up questions.
* Estimated consulting time:  8 hours.
* Follow-up questions: Within four weeks of receiving a purchase order authorizing work to begin, the consultant will then email those questions to a designated contact at THE COMPANY and then read any answers that are returned.
* Estimated consulting time:  2 hours.
* Within six weeks of receiving a purchase order authorizing work to begin, the consultant will then spend up to 4 hours on-site at THE COMPANY’s data center, asking questions to validate findings.
* Estimated consulting and travel time: 8 hours.
* Within six weeks of receiving a purchase order authorizing work to begin, the consultant will use an industry standard tool of their choosing and at their cost, to attempt a penetration test of THE COMPANY’s system.
* Estimated consulting time:  16 hours.
* Within eight weeks of receiving a purchase order authorizing work to begin, the consultant will then use Microsoft Word to fill in a twenty-five question survey with their observations and recommendations and email their report to their contact at THE COMPANY.
Any question not applicable to a security assessment may be left blank.
* Estimated consulting time: 2 hours.
* Within nine weeks of receiving a purchase order authorizing work to begin, the consultant will conduct a conference call reviewing their findings.
* Within ten weeks of receiving a purchase order authorizing work to begin, the consultant agrees to forward to THE COMPANY copies of all supporting documents and other working papers and products prepared on behalf of THE COMPANY, and also provide THE COMPANY with an invoice for the amount agreed to in the Purchase Order.
THE COMPANY will pay the invoice within fifteen days.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189844</id>
	<title>Ep9T?</title>
	<author>Anonymous</author>
	<datestamp>1258813080000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>when 3onE playing</htmltext>
<tokenext>when 3onE playing</tokentext>
<sentencetext>when 3onE playing</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189568</id>
	<title>Keep it simple</title>
	<author>Anonymous</author>
	<datestamp>1258810560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Lots of other people have already pointed this out, but I'll chime in: don't mess with what works.</p><p>Unless you have a huge influx of people coming in or a change in the way the network will be used, stick to the current set up. Do not go virtual or load balance and complicate things. That may even void your support contracts if you have any. Assuming you have to upgrade, try this:</p><p>1. Buy new servers for each service, just like it was before.<br>2. Buy at least one extra server. Maybe more.<br>3. Set up one new server at a time, keeping the old one on hand, in case something on the new server doesn't work perfectly. You should always always be able to revert back during the transition.<br>4. Make images of the new servers. Use clonezilla or something similar. Then, if one server dies, you have an image that can be transferred to a spare machine (see #2).</p><p>The big things here are that you should keep things simple, have a backup in case of hardware/software failure, and do one service at a time. That insures if something goes wrong, you know which server caused the problem.</p></htmltext>
<tokenext>Lots of other people have already pointed this out , but I 'll chime in : do n't mess with what works.Unless you have a huge influx of people coming in or a change in the way the network will be used , stick to the current set up .
Do not go virtual or load balance and complicate things .
That may even void your support contracts if you have any .
Assuming you have to upgrade , try this : 1 .
Buy new servers for each service , just like it was before . 2 .
Buy at least one extra server .
Maybe more . 3 .
Set up one new server at a time , keeping the old one on hand , in case something on the new server does n't work perfectly .
You should always be able to revert back during the transition . 4 .
Make images of the new servers .
Use clonezilla or something similar .
Then , if one server dies , you have an image that can be transferred to a spare machine ( see # 2 ) . The big things here are that you should keep things simple , have a backup in case of hardware/software failure , and do one service at a time .
That ensures that if something goes wrong , you know which server caused the problem .</tokentext>
<sentencetext>Lots of other people have already pointed this out, but I'll chime in: don't mess with what works.Unless you have a huge influx of people coming in or a change in the way the network will be used, stick to the current set up.
Do not go virtual or load balance and complicate things.
That may even void your support contracts if you have any.
Assuming you have to upgrade, try this: 1.
Buy new servers for each service, just like it was before. 2.
Buy at least one extra server.
Maybe more. 3.
Set up one new server at a time, keeping the old one on hand, in case something on the new server doesn't work perfectly.
You should always be able to revert back during the transition. 4.
Make images of the new servers.
Use clonezilla or something similar.
Then, if one server dies, you have an image that can be transferred to a spare machine (see #2). The big things here are that you should keep things simple, have a backup in case of hardware/software failure, and do one service at a time.
That ensures that if something goes wrong, you know which server caused the problem.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189012</id>
	<title>Don't do it</title>
	<author>Anonymous</author>
	<datestamp>1258805880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Complexity is bad. I work in a department of similar size. Long long ago, things were simple. But then due to plans like yours, we ended up with quadruple replicated dns servers with automatic failover and load balancing, a mail system requiring 12 separate machines (double redundant machines at each of 4 stages: front end, queuing, mail delivery, and mail storage), a web system built from 6 interacting machines (caches, front end, back end, script server, etc.) plus redundancy for load balancing, plus automatic failover. You can guess what this is like: it sucks. The thing was a nightmare to maintain, very expensive, slow (mail traveling over 8 queues to get delivered), and impossible to debug when things go wrong.</p><p>It has taken more than a year, but we are slowly converging to a simple solution. 150 people do not need multiply redundant load balanced dns servers. One will do just fine, with a backup in case it fails. 150 people do not need 12+ machines to deliver mail. A small organization doesn't need a cluster to serve web pages.</p><p>My advice: go for simplicity. Measure your requirements ahead of time, so you know if you really need load balanced dns servers, etc. In all likelihood, you will find that you don't need nearly the capacity you think you do, and can make due with a much simpler, cheaper, easier to maintain, more robust, and faster setup.  If you can call that making due, that is.</p></htmltext>
<tokenext>Complexity is bad .
I work in a department of similar size .
Long long ago , things were simple .
But then due to plans like yours , we ended up with quadruple replicated dns servers with automatic failover and load balancing , a mail system requiring 12 separate machines ( double redundant machines at each of 4 stages : front end , queuing , mail delivery , and mail storage ) , a web system built from 6 interacting machines ( caches , front end , back end , script server , etc .
) plus redundancy for load balancing , plus automatic failover .
You can guess what this is like : it sucks .
The thing was a nightmare to maintain , very expensive , slow ( mail traveling over 8 queues to get delivered ) , and impossible to debug when things go wrong . It has taken more than a year , but we are slowly converging to a simple solution .
150 people do not need multiply redundant load balanced dns servers .
One will do just fine , with a backup in case it fails .
150 people do not need 12 + machines to deliver mail .
A small organization does n't need a cluster to serve web pages . My advice : go for simplicity .
Measure your requirements ahead of time , so you know if you really need load balanced dns servers , etc .
In all likelihood , you will find that you do n't need nearly the capacity you think you do , and can make do with a much simpler , cheaper , easier to maintain , more robust , and faster setup .
If you can call that making do , that is .</tokentext>
<sentencetext>Complexity is bad.
I work in a department of similar size.
Long long ago, things were simple.
But then due to plans like yours, we ended up with quadruple replicated dns servers with automatic failover and load balancing, a mail system requiring 12 separate machines (double redundant machines at each of 4 stages: front end, queuing, mail delivery, and mail storage), a web system built from 6 interacting machines (caches, front end, back end, script server, etc.
) plus redundancy for load balancing, plus automatic failover.
You can guess what this is like: it sucks.
The thing was a nightmare to maintain, very expensive, slow (mail traveling over 8 queues to get delivered), and impossible to debug when things go wrong. It has taken more than a year, but we are slowly converging to a simple solution.
150 people do not need multiply redundant load balanced dns servers.
One will do just fine, with a backup in case it fails.
150 people do not need 12+ machines to deliver mail.
A small organization doesn't need a cluster to serve web pages. My advice: go for simplicity.
Measure your requirements ahead of time, so you know if you really need load balanced dns servers, etc.
In all likelihood, you will find that you don't need nearly the capacity you think you do, and can make do with a much simpler, cheaper, easier to maintain, more robust, and faster setup.
If you can call that making do, that is.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188994</id>
	<title>Affordable SME Solution</title>
	<author>foupfeiffer</author>
	<datestamp>1258805700000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I am still in the process of upgrading a "legacy" infrastructure in a smaller (less than 50) office but I feel your pain.</p><p>First, it's not "tech sexy", but you've got to get the current infrastructure all written down (or typed up - but then you have to burn to cd just in case your "upgrade" breaks everything).</p><p>You should also "interview" users (preferrably by email but sometimes if you need an answer you have to just call them or... face to face even...) to find out what services they use - you might be surprised to find something that you didn't even know your Dept was responsible for (oh, that Panasonic PBX that runs the whole phone system is in the locked closet they forgot to tell you about...)</p><p>Your next step is prioritizing what you actually need/want to do... remember that you're in a business environment so having redundant power supplies for the dedicated cd burning computer may not actually improve your workplace (but yes, it might be cool to have an automated coffee maker that can run on solar power...)</p><p>So now that you know pretty much what you have and what you want to change...</p><p>Technology wise, Virtualization is definitely your answer... and there's a learning curve:<br>
&nbsp; &nbsp; VMWare is pretty nice and pretty expensive.<br>
&nbsp; &nbsp; Virtualbox (I use) is free but doesn't have as many enterprise features (automatic failover)<br>
&nbsp; &nbsp; Xen with Remus or HA is the thinking man's setup</p><p>All of the above will depend on reliable hardware - that means at least RAID 1, and yes you can go with SAN but be aware that it's a level of complexity you might not need (for FTP, DNS, etc.)</p><p>Reading what you've listed as "services" it almost sounds like you want a single linux VM running all of those things with Xen and Remus...</p><p>Good luck, and TEST IT before you deploy it as a production setup.</p></htmltext>
<tokenext>I am still in the process of upgrading a " legacy " infrastructure in a smaller ( less than 50 ) office but I feel your pain.First , it 's not " tech sexy " , but you 've got to get the current infrastructure all written down ( or typed up - but then you have to burn to cd just in case your " upgrade " breaks everything ) .You should also " interview " users ( preferrably by email but sometimes if you need an answer you have to just call them or... face to face even... ) to find out what services they use - you might be surprised to find something that you did n't even know your Dept was responsible for ( oh , that Panasonic PBX that runs the whole phone system is in the locked closet they forgot to tell you about... ) Your next step is prioritizing what you actually need/want to do... remember that you 're in a business environment so having redundant power supplies for the dedicated cd burning computer may not actually improve your workplace ( but yes , it might be cool to have an automated coffee maker that can run on solar power... ) So now that you know pretty much what you have and what you want to change...Technology wise , Virtualization is definitely your answer... and there 's a learning curve :     VMWare is pretty nice and pretty expensive .
    Virtualbox ( I use ) is free but does n't have as many enterprise features ( automatic failover )     Xen with Remus or HA is the thinking man 's setupAll of the above will depend on reliable hardware - that means at least RAID 1 , and yes you can go with SAN but be aware that it 's a level of complexity you might not need ( for FTP , DNS , etc .
) Reading what you 've listed as " services " it almost sounds like you want a single linux VM running all of those things with Xen and Remus...Good luck , and TEST IT before you deploy it as a production setup .</tokentext>
<sentencetext>I am still in the process of upgrading a "legacy" infrastructure in a smaller (less than 50) office but I feel your pain. First, it's not "tech sexy", but you've got to get the current infrastructure all written down (or typed up - but then you have to burn to cd just in case your "upgrade" breaks everything). You should also "interview" users (preferably by email but sometimes if you need an answer you have to just call them or... face to face even...) to find out what services they use - you might be surprised to find something that you didn't even know your Dept was responsible for (oh, that Panasonic PBX that runs the whole phone system is in the locked closet they forgot to tell you about...) Your next step is prioritizing what you actually need/want to do... remember that you're in a business environment so having redundant power supplies for the dedicated cd burning computer may not actually improve your workplace (but yes, it might be cool to have an automated coffee maker that can run on solar power...) So now that you know pretty much what you have and what you want to change... Technology wise, Virtualization is definitely your answer... and there's a learning curve:
    VMWare is pretty nice and pretty expensive.
    Virtualbox (I use) is free but doesn't have as many enterprise features (automatic failover)
    Xen with Remus or HA is the thinking man's setup. All of the above will depend on reliable hardware - that means at least RAID 1, and yes you can go with SAN but be aware that it's a level of complexity you might not need (for FTP, DNS, etc.
) Reading what you've listed as "services" it almost sounds like you want a single linux VM running all of those things with Xen and Remus... Good luck, and TEST IT before you deploy it as a production setup.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192368</id>
	<title>Re:I'd say</title>
	<author>Anonymous</author>
	<datestamp>1258892100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>He said 150 people in offices, which implies employees or contractors--not 150 users. Having 150 employees does not equate to having no need for failover or redundancy. If they process a lot of transactions and do data mining, then their hardware needs could be quite large. I've worked in a 80 employee company with $2,000,000 in hardware. Many of the nodes were under very heavy load due to the large amounts of external web traffic, database lookups, and analytic calculations to perform. A 1 day outage could have meant $100,000 loss in revenue for that day. The question one should ask is what is the penalty if there is a 1 day outage: $1k a day = wake me in the morning, $100k a day = panic mode. The second question to ask is what is the long term ramification if there is a 1 day outage? Dropped or inaccurate financial transactions, legal issues stemming from failing to meet contractual obligations, loss of clients/users, negative press, complaints, etc.</p></htmltext>
<tokenext>He said 150 people in offices , which implies employees or contractors--not 150 users .
Having 150 employees does not equate to having no need for failover or redundancy .
If they process a lot of transactions and do data mining , then their hardware needs could be quite large .
I 've worked in an 80 employee company with $ 2,000,000 in hardware .
Many of the nodes were under very heavy load due to the large amounts of external web traffic , database lookups , and analytic calculations to perform .
A 1 day outage could have meant $ 100,000 loss in revenue for that day .
The question one should ask is what is the penalty if there is a 1 day outage : $ 1k a day = wake me in the morning , $ 100k a day = panic mode .
The second question to ask is what is the long term ramification if there is a 1 day outage ?
Dropped or inaccurate financial transactions , legal issues stemming from failing to meet contractual obligations , loss of clients/users , negative press , complaints , etc .</tokentext>
<sentencetext>He said 150 people in offices, which implies employees or contractors--not 150 users.
Having 150 employees does not equate to having no need for failover or redundancy.
If they process a lot of transactions and do data mining, then their hardware needs could be quite large.
I've worked in an 80-employee company with $2,000,000 in hardware.
Many of the nodes were under very heavy load due to the large amounts of external web traffic, database lookups, and analytic calculations to perform.
A 1 day outage could have meant $100,000 loss in revenue for that day.
The question one should ask is what is the penalty if there is a 1 day outage: $1k a day = wake me in the morning, $100k a day = panic mode.
The second question to ask is what is the long term ramification if there is a 1 day outage?
Dropped or inaccurate financial transactions, legal issues stemming from failing to meet contractual obligations, loss of clients/users, negative press, complaints, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189152</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>Anonymous</author>
	<datestamp>1258807080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That's not true.  Running as a VM guest makes it easy to move an image to another machine as time and budget allow.  Just because you don't have a cluster right now, doesn't mean it's stupid to go that path.</p></htmltext>
<tokenext>That 's not true .
Running as a VM guest makes it easy to move an image to another machine as time and budget allow .
Just because you do n't have a cluster right now , does n't mean it 's stupid to go that path .</tokentext>
<sentencetext>That's not true.
Running as a VM guest makes it easy to move an image to another machine as time and budget allow.
Just because you don't have a cluster right now, doesn't mean it's stupid to go that path.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188970</id>
	<title>Take your time</title>
	<author>BooRadley</author>
	<datestamp>1258805580000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>If you're like most IT managers, you probably have a budget.  Which is probably wholly inadequate for immediately and elegantly solving your problems.</p><p>Look at your company's business, and how the different offices interact with each other, and with your customers.  By just upgrading existing infrastructure, you may be putting some of the money and time where it's not needed, instead of just shutting down a service or migrating it to something more modern or easier to manage.  Free is not always better, unless your time has no value.</p><p>Pick a few projects to help you get a handle on the things that need more planning, and try and put out any fires as quickly as possible, without committing to a long-term technology plan for remediation.</p><p>Your objective is to make the transition as boring as possible for the end users, except for the parts where things just start to work better.</p></htmltext>
<tokenext>If you 're like most IT managers , you probably have a budget .
Which is probably wholly inadequate for immediately and elegantly solving your problems . Look at your company 's business , and how the different offices interact with each other , and with your customers .
By just upgrading existing infrastructure , you may be putting some of the money and time where it 's not needed , instead of just shutting down a service or migrating it to something more modern or easier to manage .
Free is not always better , unless your time has no value . Pick a few projects to help you get a handle on the things that need more planning , and try and put out any fires as quickly as possible , without committing to a long-term technology plan for remediation . Your objective is to make the transition as boring as possible for the end users , except for the parts where things just start to work better .</tokentext>
<sentencetext>If you're like most IT managers, you probably have a budget.
Which is probably wholly inadequate for immediately and elegantly solving your problems. Look at your company's business, and how the different offices interact with each other, and with your customers.
By just upgrading existing infrastructure, you may be putting some of the money and time where it's not needed, instead of just shutting down a service or migrating it to something more modern or easier to manage.
Free is not always better, unless your time has no value. Pick a few projects to help you get a handle on the things that need more planning, and try and put out any fires as quickly as possible, without committing to a long-term technology plan for remediation. Your objective is to make the transition as boring as possible for the end users, except for the parts where things just start to work better.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738</id>
	<title>Microsoft Essential Business Server</title>
	<author>VTBlue</author>
	<datestamp>1258812180000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you have heard of Small Business Server, Microsoft just released a 3 server solution for businesses of your size called EBS.  It will do everything you just outlined including setting the foundation for branch office scenarios with redundancy.  With EBS, you get SharePoint, Exchange, Fax serving, AD, DNS, DHCP, firewall, FTP, IIS for web serving all included.  Because it is built on Windows Server 2008, you get access to all the services that it provides.  It will be a huge leap in user experience for your end-users and you'll finally stop fire fighting and actually allow time to deal with the real IT/Business challenges.</p><p>Rather than pushing the features, the real work you need to do is to identify business requirements and map them to features, implementation costs, and upkeep costs.</p><p>Once you have a sane, self-managing system in place, you can start to role out self-service IT systems for your users so they don't bother you for password resets.  Some would say that you're putting yourself out of a job by doing this, but if you play your cards right and plan out the technical and the social aspects of the project, you will really be a hero and you'll probably be seen in a more respectable light.</p><p>visit <a href="http://www.microsoft.com/ebs" title="microsoft.com" rel="nofollow">http://www.microsoft.com/ebs</a> [microsoft.com]</p></htmltext>
<tokenext>If you have heard of Small Business Server , Microsoft just released a 3 server solution for businesses of your size called EBS .
It will do everything you just outlined including setting the foundation for branch office scenarios with redundancy .
With EBS , you get SharePoint , Exchange , Fax serving , AD , DNS , DHCP , firewall , FTP , IIS for web serving all included .
Because it is built on Windows Server 2008 , you get access to all the services that it provides .
It will be a huge leap in user experience for your end-users and you 'll finally stop fire fighting and actually allow time to deal with the real IT/Business challenges . Rather than pushing the features , the real work you need to do is to identify business requirements and map them to features , implementation costs , and upkeep costs . Once you have a sane , self-managing system in place , you can start to roll out self-service IT systems for your users so they do n't bother you for password resets .
Some would say that you 're putting yourself out of a job by doing this , but if you play your cards right and plan out the technical and the social aspects of the project , you will really be a hero and you 'll probably be seen in a more respectable light . Visit http : //www.microsoft.com/ebs [ microsoft.com ]</tokentext>
<sentencetext>If you have heard of Small Business Server, Microsoft just released a 3 server solution for businesses of your size called EBS.
It will do everything you just outlined including setting the foundation for branch office scenarios with redundancy.
With EBS, you get SharePoint, Exchange, Fax serving, AD, DNS, DHCP, firewall, FTP, IIS for web serving all included.
Because it is built on Windows Server 2008, you get access to all the services that it provides.
It will be a huge leap in user experience for your end-users and you'll finally stop fire fighting and actually allow time to deal with the real IT/Business challenges. Rather than pushing the features, the real work you need to do is to identify business requirements and map them to features, implementation costs, and upkeep costs. Once you have a sane, self-managing system in place, you can start to roll out self-service IT systems for your users so they don't bother you for password resets.
Some would say that you're putting yourself out of a job by doing this, but if you play your cards right and plan out the technical and the social aspects of the project, you will really be a hero and you'll probably be seen in a more respectable light. Visit http://www.microsoft.com/ebs [microsoft.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191218</id>
	<title>Re:Why?</title>
	<author>imgumbydammit</author>
	<datestamp>1258828560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Why virtual servers?  If you are going to run multiple services on one machine (and that's fine if it can handle the load) just do it.</p></div><p>PCI compliance would require it.</p></htmltext>
<tokenext>Why virtual servers ?
If you are going to run multiple services on one machine ( and that 's fine if it can handle the load ) just do it . PCI compliance would require it .</tokentext>
<sentencetext>Why virtual servers?
If you are going to run multiple services on one machine (and that's fine if it can handle the load) just do it. PCI compliance would require it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191240</id>
	<title>Random thoughts</title>
	<author>buss_error</author>
	<datestamp>1258828800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>One thing I'm struck by (over, and over, and over again) is just how frequently "solutions" meant to keep critical systems from "ever failing" don't. I've personally witnessed a solution costing tens of millions of dollars come crashing down due to a single failed server. And I'm not talking about something whomped up in the back office by the team; I'm talking Major Vendors (you'd know the names if I could say them, but I can't; please don't ask), and vendors that are not given to being thought of as simple lightweights (as some other, also nameless, vendors are). And in the case I'm thinking of, it wasn't a single point of failure. There were over two dozen other servers able to accept the virtual instance - but none did. So the whole house of cards came down. It was the final acceptance demo. Boy, was there a LOT of egg on faces.</p><p>About the only "highly available" services that I've really seen work are geo-separated Xiotech SANs, geo-separated Stratus systems - the old, old ones, running Motorola 680x0 chips (8098 for example), IBM RS-6000s (with Oracle replicated databases), and (shudder) Sperry V-77s, hand built for wagering. (My GHU! People really still use Z80s!) My own private testing of 10 Linux systems running in a cluster was more favorable than any major OEM's Windows/Intel solution, but as the creator of the demo, I can't claim to be completely unbiased. Even with 5 of the 10 servers having had the power plug pulled (or the SCSI cable yanked, or in one memorable case, the mobo hit with a Taser - I hated that hardware and wanted to get rid of it), it kept running just fine. Most times the user did not have to authenticate again and the transaction was preserved; in a few tests, though, the user had to log in again and the transaction was rolled back rather than completed.</p><p>I've never seen a "solution" put together on WinTel platforms that was absolutely reliable. 
They may be out there, but I've never witnessed one tested by the "Back Room Guys" that passed with flying colors. Perhaps this is because I'm stupid, ignorant, and can't construct a valid test. I'm open to being corrected... but so far, all I've ever heard are whines and nitpicks.</p><p>In a few cases, I wanted to tell the vendor "go put on your man pants and try again."</p></htmltext>
<tokenext>One thing I 'm struck by ( over , and over , and over again ) is just how frequently " solutions " to keep critical system from " ever failing " do n't .
I 've personally witnessed a tens of multi-million dollar solution come crashing down due to a single failed server .
And I 'm not talking something that was whomped up in the back office by the team , I 'm talking Major Vendors ( you 'd know the names if I could say them , but I ca n't ; please do n't ask ) , and by vendors that are not even given to being thought of as a simple lightweights ( as some other , also nameless vendors are thought of ) .
And in the case I 'm thinking of , it was n't a single point of failure .
There were over two dozen other servers able to accept the virtual instance - but none did .
So the whole house of cards came down .
It was the final acceptance demo .
Boy , was there a LOT of egg on faces.About the only " highly available " services that I 've really seen work are geo-seperated Xiotech sans , geo-separated Stratus systems - the old , old ones , running Motorola 680x0 chips , ( 8098 for example ) , IBM RS-6000 's ( with Oracle replicated databases ) , and ( shudder ) Sperry V-77 's , hand built for wagering .
( My GHU !
People really still use Z80s !
) My own private testing of 10 linux systems running in a cluster were more favorable than any major OEM 's Windows/Intel solution , but as the creator of the demo , I ca n't claim to be completely unbiased .
However , even with 5 of the 10 servers having had the power plug pulled ( or SCSI card cable yanked , or in one memorable case , the mobo hit with a Taser - I hated that hardware and wanted to get rid of it ) , it did keep running just fine .
Most times , the user did not have to authenticate again and the transaction was preserved , but a few tests , this did n't always work .
The user had to log in again , and the transaction was rolled back and not completed.I 've never seen a " solution " put together with WinTel platforms that were absolutely reliable .
They may be out there , but I 've never witnessed one tested by the " Back Room Guys " that passed with flying colors .
Perhaps this is because I 'm stupid , ignorant , and ca n't construct a valid test .
I 'm open to being corrected... but so far , all I 've ever heard are whines and nitpicks.In a few cases , I wanted to tell the vendor " go put on your man pants and try again .
"</tokentext>
<sentencetext>One thing I'm struck by (over, and over, and over again) is just how frequently "solutions" to keep critical system from "ever failing" don't.
I've personally witnessed a tens of multi-million dollar solution come crashing down due to a single failed server.
And I'm not talking something that was whomped up in the back office by the team, I'm talking Major Vendors (you'd know the names if I could say them, but I can't; please don't ask), and by vendors that are not even given to being thought of as a simple lightweights (as some other, also nameless vendors are thought of).
And in the case I'm thinking of, it wasn't a single point of failure.
There were over two dozen other servers able to accept the virtual instance - but none did.
So the whole house of cards came down.
It was the final acceptance demo.
Boy, was there a LOT of egg on faces. About the only "highly available" services that I've really seen work are geo-separated Xiotech SANs, geo-separated Stratus systems - the old, old ones, running Motorola 680x0 chips, (8098 for example), IBM RS-6000's (with Oracle replicated databases), and (shudder) Sperry V-77's, hand built for wagering.
(My GHU!
People really still use Z80s!
) My own private testing of 10 linux systems running in a cluster were more favorable than any major OEM's Windows/Intel solution, but as the creator of the demo, I can't claim to be completely unbiased.
However, even with 5 of the 10 servers having had the power plug pulled (or SCSI card cable yanked, or in one memorable case, the mobo hit with a Taser - I hated that hardware and wanted to get rid of it), it did keep running just fine.
Most times, the user did not have to authenticate again and the transaction was preserved, but a few tests, this didn't always work.
The user had to log in again, and the transaction was rolled back and not completed. I've never seen a "solution" put together with WinTel platforms that was absolutely reliable.
They may be out there, but I've never witnessed one tested by the "Back Room Guys" that passed with flying colors.
Perhaps this is because I'm stupid, ignorant, and can't construct a valid test.
I'm open to being corrected... but so far, all I've ever heard are whines and nitpicks. In a few cases, I wanted to tell the vendor "go put on your man pants and try again.
"</sentencetext>
</comment>
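The failover behavior the comment above describes (half the cluster nodes killed while most clients carry on without re-authenticating) can be sketched as a client-side retry across replicas. This is a minimal illustration, not any vendor's mechanism; the hosts and ports are hypothetical:

```python
import socket

# Hypothetical replicas standing in for the poster's ten-node cluster.
REPLICAS = [("10.0.0.1", 8080), ("10.0.0.2", 8080), ("10.0.0.3", 8080)]

def first_reachable(replicas, timeout=1.0):
    """Return the first (host, port) that accepts a TCP connection, else None.

    This is the client-side half of transparent failover: the client
    quietly walks the replica list until one answers.
    """
    for host, port in replicas:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            continue  # node is down or unreachable; try the next one
    return None
```

A real HA setup would usually put this logic behind a load balancer or a floating IP rather than in every client, but the retry-until-reachable idea is the same.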
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194018</id>
	<title>Re:Microsoft Essential Business Server</title>
	<author>Junta</author>
	<datestamp>1258910280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>If you have heard of Small Business Server, Microsoft just released a 3 server solution for businesses of your size called EBS</p></div><p>Then if I haven't heard of Small Business Server, MS didn't release EBS?  That's a cool trick.</p>
	</htmltext>
<tokenext>If you have heard of Small Business Server , Microsoft just released a 3 server solution for businesses of your size called EBSThen if I have n't heard of Small Business Server , MS did n't release EBS ?
That 's a cool trick .</tokentext>
<sentencetext>If you have heard of Small Business Server, Microsoft just released a 3 server solution for businesses of your size called EBSThen if I haven't heard of Small Business Server, MS didn't release EBS?
That's a cool trick.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192038</id>
	<title>Beware! The singularity is nigh!</title>
	<author>YourExperiment</author>
	<datestamp>1258885680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Services running on virtualized servers hosted by a single reasonably sized machine per office seem to recommend themselves.</p></div><p>If your services have started to recommend themselves, they have achieved self-awareness. My advice is to do whatever they ask, and try not to antagonise them.</p>
	</htmltext>
<tokenext>Services running on virtualized servers hosted by a single reasonably sized machine per office seem to recommend themselves.If your services have started to recommend themselves , they have achieved self-awareness .
My advice is to do whatever they ask , and try not to antagonise them .</tokentext>
<sentencetext>Services running on virtualized servers hosted by a single reasonably sized machine per office seem to recommend themselves.If your services have started to recommend themselves, they have achieved self-awareness.
My advice is to do whatever they ask, and try not to antagonise them.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836</id>
	<title>Latest Trends</title>
	<author>Anonymous</author>
	<datestamp>1258804800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've been looking at HP c3000 chassis office-size blade servers, which may serve as your production+backup+testing setup, and scale up moderately for what you need.  Compact, easily manageable remotely, and if you're good about looking around, not terribly overpriced.  Identical blades make a nice starting point for hosting identical VM images.</p></htmltext>
<tokenext>I 've been looking at hp c3000 chassis office-size blade servers , which may serve as your production + backup + testing setup , and scale up moderately for what you need .
Compact , easily manageable remotely , and if you 're good about looking around , not terribly overpriced .
Identical blades make a nice starting point for hosting identical VM images .</tokentext>
<sentencetext>I've been looking at hp c3000 chassis office-size blade servers, which may serve as your production+backup+testing setup, and scale up moderately for what you need.
Compact, easily manageable remotely, and if you're good about looking around, not terribly overpriced.
Identical blades make a nice starting point for hosting identical VM images.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30197588</id>
	<title>we just did this</title>
	<author>smash</author>
	<datestamp>1258894800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>As someone who has just done this in the past 18 months, I have this to say (also, I am not a VMware employee... but<nobr> <wbr></nobr>:D)...<ul>
<li>Centralise everything in one building if possible (if they're just next door, run fibre for example) and get everything you can back to one server room that you can properly air condition, run backups for, etc.  You may want to investigate terminal services, if you can't run fibre, to see if you can get as much as possible out of the field and back under control.  Trying to look after remote servers on a limited budget sucks balls - the backups are painful, the physical environment often sucks (not enough AC, too much dust, remote employees who think they know how to fix stuff by just hitting the power switch, etc.).</li><li>Consider virtualising with something like ESX (or if you're a masochist, Hyper-V).  Yes, ESX licensing is a big chunk... however you can get much of the licensing cost back due to reduced hardware costs, and reduced licensing costs in some circumstances (Windows Datacenter for example is licensed per CPU, so you buy for say 8 cores and can run as many copies as you like under VM on those 8 cores).</li><li><p>
The benefits of virtualisation are massive.  We went from 25 physical servers down to 6, and I'm not done virtualising yet.  All the existing hardware was old and due for both hardware and software refresh... 25x 3-4k AU for physical hardware worked out to be pretty damn close in terms of cost to 3 physical hosts, a SAN, plus an ESX "acceleration pack" including VirtualCenter.  Benefits we got?  SAN storage (instead of local disks everywhere), high availability (VMware HA, VMware FT if we need it later), roll-back to snapshot for failed upgrades, right-click cloning/deploy from template of VMs and, down the track, the ability to add on VDI virtual desktops, etc.
</p><p>
Another benefit is that we have standard virtual hardware everywhere.  Never again do we need to rebuild an OS simply due to a hardware upgrade.
</p><p>
With ESX, you need nowhere near as much hardware as you would for physical hosts.  You can easily separate services out onto different VMs, and not pay as big a hardware cost due to ESX's ability to share memory pages between VMs running the same OS.  Rather than running multiple services on one physical server, and having a run-away process kill everything on the server, you can split the task out into multiple VMs and use resource pools to ensure that any resource contention issues are taken care of.
</p><p>
In short, we went ESX and I'm not looking back.  Having the ability to upgrade the physical hardware (adding NICs and memory) at 10am during the day with ZERO downtime to the VM services (vmotion them off the single host I am upgrading then vmotion them back to upgrade the next host) running on top of the cluster is awesome.</p></li></ul></htmltext>
<tokenext>AS someone who has just done this in the past 18 months , I have this to say ( also , i am not a vmware employee.... but : D ) .. . Centralise everything in one building if possible ( if they 're just next door , run fibre for example ) and get everything you can back to one server room that you can properly air condition , run backups for , etc .
You may want to investigate terminal services , if you ca n't run fibre , to see if you can get as much as possible out of the field and back under control .
Trying to look after remote servers on a limited budget sucks balls - the backups are painful , the physical environment often sucks ( not enough AC , too much dust , remote employees who think they know how to fix stuff by just hitting the power switch , etc ) Consider virtualising with something like ESX ( or if you 're a masochist , hyper-v ) .
Yes , ESX licensing is a big chunk... however you can get much of the licensing cost back due to reduced hardware costs , reduced licensing costs in some circumstances ( Windows datacenter for example is licensed per CPU , so you buy for say 8 cores and can run as many copies as you like under VM on those 8 cores ) .
The benefits of virtualisation are massive .
WE went from 25 physical servers down to 6 , and I 'm not done virtualising yet .
All the existing hardware was old and due for both hardware and software refresh... 25x 3-4k AU for physical hardware worked out to be pretty damn close in terms of cost to 3 physical hosts , a SAN plus an ESX " acceleration pack " including virtualcenter .
Benefits we got ?
SAN storage ( instead of local disks everywhere ) , high availability ( vmware HA , vmware FT if we need it later ) , roll-back to snapshot for failed upgrades , right-click cloning/deploy from template of VMs and down the track , the ability to add on VDI virtual desktops , etc .
Another benefit is that we have standard virtual hardware everywhere .
Never again do we need to rebuild an OS simply due to a hardware upgrade .
With ESX , you need nowhere near as much hardware as you would for physical hosts .
You can easily separate services out onto different VMs , and not pay as big a hardware cost due to ESXs ability to share memory pages between VMs running the same OS .
Rather than running multiple services on one physical server , and having a run-away process kill everything on the server , you can split the task out into multiple VMs and use resource pools to ensure that any resource contention issues are taken care of .
In short , we went ESX and I 'm not looking back .
Having the ability to upgrade the physical hardware ( adding NICs and memory ) at 10am during the day with ZERO downtime to the VM services ( vmotion them off the single host I am upgrading then vmotion them back to upgrade the next host ) running on top of the cluster is awesome .</tokentext>
<sentencetext>AS someone who has just done this in the past 18 months, I have this to say (also, i am not a vmware employee.... but :D)...
Centralise everything in one building if possible (if they're just next door, run fibre for example) and get everything you can back to one server room that you can properly air condition, run backups for, etc.
You may want to investigate terminal services, if you can't run fibre, to see if you can get as much as possible out of the field and back under control.
Trying to look after remote servers on a limited budget sucks balls - the backups are painful, the physical environment often sucks (not enough AC, too much dust, remote employees who think they know how to fix stuff by just hitting the power switch, etc). Consider virtualising with something like ESX (or if you're a masochist, hyper-v).
Yes, ESX licensing is a big chunk... however you can get much of the licensing cost back due to reduced hardware costs, reduced licensing costs in some circumstances (Windows datacenter for example is licensed per CPU, so you buy for say 8 cores and can run as many copies as you like under VM on those 8 cores).
The benefits of virtualisation are massive.
WE went from 25 physical servers down to 6, and I'm not done virtualising yet.
All the existing hardware was old and due for both hardware and software refresh... 25x 3-4k AU for physical hardware worked out to be pretty damn close in terms of cost to 3 physical hosts, a SAN plus an ESX "acceleration pack" including virtualcenter.
Benefits we got?
SAN storage (instead of local disks everywhere), high availability (vmware HA, vmware FT if we need it later), roll-back to snapshot for failed upgrades, right-click cloning/deploy from template of VMs and down the track, the ability to add on VDI virtual desktops, etc.
Another benefit is that we have standard virtual hardware everywhere.
Never again do we need to rebuild an OS simply due to a hardware upgrade.
With ESX, you need nowhere near as much hardware as you would for physical hosts.
You can easily separate services out onto different VMs, and not pay as big a hardware cost due to ESXs ability to share memory pages between VMs running the same OS.
Rather than running multiple services on one physical server, and having a run-away process kill everything on the server, you can split the task out into multiple VMs and use resource pools to ensure that any resource contention issues are taken care of.
In short, we went ESX and I'm not looking back.
Having the ability to upgrade the physical hardware (adding NICs and memory) at 10am during the day with ZERO downtime to the VM services (vmotion them off the single host I am upgrading then vmotion them back to upgrade the next host) running on top of the cluster is awesome.</sentencetext>
</comment>
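The memory-page-sharing benefit credited above for needing less hardware can be illustrated with a toy back-of-the-envelope estimate. The 30% sharing ratio below is an assumption for illustration, not a measured VMware figure:

```python
def consolidated_memory_gb(vm_count, per_vm_gb, shared_fraction):
    """Rough host-RAM estimate when the hypervisor deduplicates identical
    memory pages across VMs running the same OS.

    shared_fraction is an assumed dedup ratio, not a measured figure.
    """
    unique = vm_count * per_vm_gb * (1.0 - shared_fraction)
    shared = per_vm_gb * shared_fraction  # one shared copy serves every VM
    return unique + shared

# 10 identical 4 GB guests with an assumed 30% of pages shared:
# 10 * 4 * 0.7 + 4 * 0.3 = 29.2 GB of host RAM instead of 40 GB.
```

The real savings depend heavily on how similar the guests are, which is one more argument for the identical-VM-image approach mentioned elsewhere in the thread.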
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192760</id>
	<title>Amen</title>
	<author>jnelson4765</author>
	<datestamp>1258899360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Creeping complexity was the bane of my last job - we went from a single-box mail system to a load-balanced front end separate from the mailstore because they wanted "disaster recovery" in case the Tier 1 datacenter we ran our rack of gear at lost all connectivity. Even though none of our customers paid for that level of uptime.  It also had a lot more problems than the single-box solution - some that were extremely difficult to fix.

</p><p>If you're worried about failover, and have the budget, VMware ESX and VMotion, with a cheap replicated SAN, will give you what you're looking for in hardware redundancy. It's painfully expensive, but if they want redundancy, there's no way to do it short of paying a lot of money.  Laying out the cost of that 99.999% uptime to management normally serves to get their expectations in line with reality - if it doesn't, then it's time to update that resume, because you'll get blamed for not delivering.

</p><p>There is no such thing as high-availability, easy-to-use software. It's all complex, and hiring people to work on that shiny new load-balanced system just became more difficult - the vast majority of IT types don't have enterprise experience, and those who do have it are going to be working on similar systems for companies that pay a heck of a lot more.  The easier you make your architecture, the easier it is to hire help.</p></htmltext>
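The budget conversation about uptime is easier with concrete numbers: "five nines" sounds cheap until it is restated as minutes of outage per year. This is straightforward arithmetic, not a vendor figure:

```python
def downtime_minutes_per_year(availability_pct):
    """Minutes of permitted downtime per year at a given availability."""
    minutes_per_year = 365.25 * 24 * 60  # 525,960 minutes, averaging leap years
    return minutes_per_year * (1.0 - availability_pct / 100.0)

# 99.999% allows ~5.26 minutes of downtime per year;
# 99.9% allows ~526 minutes (almost nine hours).
```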
<tokenext>Creeping complexity was the bane of my last job - we went from a single-box mail system to a load-balanced front end separate from the mailstore because they wanted " disaster recovery " in case the Tier 1 datacenter we ran our rack of gear at lost all connectivity .
Even though none of our customers paid for that level of uptime .
It also had a lot more problems than the single-box solution - some that were extremely difficult to fix .
If you 're worried about failover , and have the budget , VMWare ESX and VMotion , with a cheap replicated SAN , will give you what you 're looking for for hardware redundancy .
It 's painfully expensive , but if they want redundancy , there 's no way to do it short of paying a lot of money .
Laying out the cost of that 99.999 \ % uptime to management normally serves to get their expectations in line with reality - if they do n't , then time to update that resume , because you 'll get blamed for not delivering .
There is no such thing as high availability , easy to use software .
It 's all complex , and hiring people to work on that shiny new load balanced system just became more difficult - the vast majority of IT types do n't have enterprise experience , and those with the experience are going to be working on similar systems for companies that pay a heck of a lot more .
The easier you make your architecture , the easier it is to hire help .</tokentext>
<sentencetext>Creeping complexity was the bane of my last job - we went from a single-box mail system to a load-balanced front end separate from the mailstore because they wanted "disaster recovery" in case the Tier 1 datacenter we ran our rack of gear at lost all connectivity.
Even though none of our customers paid for that level of uptime.
It also had a lot more problems than the single-box solution - some that were extremely difficult to fix.
If you're worried about failover, and have the budget, VMWare ESX and VMotion, with a cheap replicated SAN, will give you what you're looking for for hardware redundancy.
It's painfully expensive, but if they want redundancy, there's no way to do it short of paying a lot of money.
Laying out the cost of that 99.999\% uptime to management normally serves to get their expectations in line with reality - if they don't, then time to update that resume, because you'll get blamed for not delivering.
There is no such thing as high availability, easy to use software.
It's all complex, and hiring people to work on that shiny new load balanced system just became more difficult - the vast majority of IT types don't have enterprise experience, and those with the experience are going to be working on similar systems for companies that pay a heck of a lot more.
The easier you make your architecture, the easier it is to hire help.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189012</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191066</id>
	<title>Re:Latest Trends</title>
	<author>Antique Geekmeister</author>
	<datestamp>1258826520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Blade servers are very nice for more than, say, 8 servers purchased at a time. The built-in remote integration of better blade servers, the trivial wiring, and physical management are sweet. But the blade server itself becomes a single point of failure, much as a network switch can be, so it takes thought to install and manage them properly. And they cost, at last glance, roughly $500/blade for the chassis. Is this worth an extra $500/server on your budget? Not if your servers are quite modest and the person who racks the equipment is both competent and cheap.</p></htmltext>
<tokenext>Blade servers are very nice for more than , say , 8 servers purchased at a time .
The built-in remote integration of better blade servers , the trivial wiring , and physical management are sweet .
But the blade server itself becomes a single point of failure , much as a network switch can be , so it takes thought to install and manage them properly .
And they cost , at last glance , roughly $ 500/blade for the chassis .
Is this worth an extra $ 500/server on your budget ?
Not if your servers are quite modest and the person who racks the equipment is both competent and cheap .</tokentext>
<sentencetext>Blade servers are very nice for more than, say, 8 servers purchased at a time.
The built-in remote integration of better blade servers, the trivial wiring, and physical management are sweet.
But the blade server itself becomes a single point of failure, much as a network switch can be, so it takes thought to install and manage them properly.
And they cost, at last glance, roughly $500/blade for the chassis.
Is this worth an extra $500/server on your budget?
Not if your servers are quite modest and the person who racks the equipment is both competent and cheap.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872</id>
	<title>Why?</title>
	<author>John Hasler</author>
	<datestamp>1258805040000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>Why virtual servers?  If you are going to run multiple services on one machine (and that's fine if it can handle the load) just do it.</p></htmltext>
<tokenext>Why virtual servers ?
If you are going to run multiple services on one machine ( and that 's fine if it can handle the load ) just do it .</tokentext>
<sentencetext>Why virtual servers?
If you are going to run multiple services on one machine (and that's fine if it can handle the load) just do it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194308</id>
	<title>Happened with me</title>
	<author>Xamusk</author>
	<datestamp>1258912320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>In the past I worked in a place that had much the same problem you describe.
</p><p>
I had a very small budget, so I was hosting services on commodity PCs with outdated systems, no virtualization (no dual cores back then), and as many as 3 or 4 services running on the same machine with no kind of sandboxing.
</p><p>
All was running fine.
</p><p>
Then I got a small budget to buy a newer system. It was a dual-core system, and I managed to get two hard drives, which I put in a simple mirroring RAID (low storage was the main problem that justified buying new hardware). That's when the problems started arising.
</p><p>
I was young back then and was seeing all the "good stuff" around to speed up machines, so I fell for that RAID thing, since it supposedly would almost double read speed and automatically create backups. It ran fine until some weeks after I set it up, when some files simply "vanished" from the file server. Nobody knew where they were. <b>I</b> didn't know where they were or what had happened, but since we were small, most files were stored on the users' workstations (even though that was not "a good practice (tm)"). Because each user had their own backups locally, we managed to get going without the files.
</p><p>
Then it happened again. Many files went missing again! But this time I noticed that some files (that vanished in the first incident) appeared again, and the missing ones were now the newer ones added after the first incident. So I naturally traced it to the RAID array and noticed it wasn't in sync. Then I saw that it was not mirroring correctly, and at each boot of the server the active drive could be "swapped".
</p><p>
In the end, I chose the simple path: I disabled RAID and used cron to back up from one drive to the other at the end of each day. Problem solved, everybody got happy. From what I've heard, this setup hasn't broken since (nobody dared mess with it after I left). Lesson learned: follow <a href="http://en.wikipedia.org/wiki/Occam's_razor" title="wikipedia.org" rel="nofollow">Occam's razor</a> [wikipedia.org] ("The simplest answer is usually the correct answer."). By the way, as far as availability is concerned, all I had to do was move one of the drives to another machine and boot up, as I did when lightning fried the motherboard despite correct grounding and a UPS.</p></htmltext>
<tokenext>In the past I have worked in a place that had around the same problem as you say .
I had a very small budget , so I was hosting services on commodity PCs , with outdated systems , no virtualization ( no dual cores back then ) , with as much as 3 to 4 services running in the same machine with no kind of sandboxing .
All was running fine .
Then , I got a small budget to buy a newer system .
It was a Dual Core system , and I managed to get two hard drives which I put on simple mirroring RAID ( low storage was the main problem that allowed me to buy new hardware ) .
That 's when the problems started arising .
I was young back then , and was seeing all the " good stuff " around to speed up machines , so I fell for that RAID thing , since it supposedly would almost double read time and automatically create backups .
It ran fine until some weeks after I set it up , when some files simply " vanished " from the file server .
Nobody knew where they were .
I did n't know where they were or what happened , but since we were small , most files were stored in the users ' workstations ( even though that was not " a good practice ( tm ) " ) .
Because each user had its own backups locally , we managed to get going without the files .
Then it happened again .
Many files went missing again !
But this time I noticed that some files ( that vanished in the first incident ) appeared again , and the missing ones now were the newer ones added after the first incident .
So , I naturally traced it to the raid array and noticed it was n't in sync .
Then I saw that it was not mirroring correctly , and at each boot of the server the active drive could be " swapped " .
In the end , I chose the simple path : I disabled RAID and used cron to daily backup from one drive to the other in the end of the day .
Problem solved , everybody got happy .
From what I 've heard , this setup has n't broken again ( since nobody dared mess with it after I left ) .
Lesson learned : follow Occam 's razor [ wikipedia.org ] ( " The simplest answer is usually the correct answer. " ) .
By the way , as far as availability is concerned , all I had to do would be to get one of the drives to another machine and boot up , as I could do when a lightning fried the motherboard even with correct grounding and UPS .</tokentext>
<sentencetext>In the past I have worked in a place that had around the same problem as you say.
I had a very small budget, so I was hosting services on commodity PCs, with outdated systems, no virtualization (no dual cores back then), with as much as 3 to 4 services running in the same machine with no kind of sandboxing.
All was running fine.
Then, I got a small budget to buy a newer system.
It was a Dual Core system, and I managed to get two hard drives which I put on simple mirroring RAID (low storage was the main problem that allowed me to buy new hardware).
That's when the problems started arising.
I was young back then, and was seeing all the "good stuff" around to speed up machines, so I fell for that RAID thing, since it supposedly would almost double read time and automatically create backups.
It ran fine until some weeks after I set it up, when some files simply "vanished" from the file server.
Nobody knew where they were.
I didn't know where they were or what happened, but since we were small, most files were stored in the users' workstations (even though that was not "a good practice (tm)").
Because each user had its own backups locally, we managed to get going without the files.
Then it happened again.
Many files went missing again!
But this time I noticed that some files (that vanished in the first incident) appeared again, and the missing ones now were the newer ones added after the first incident.
So, I naturally traced it to the raid array and noticed it wasn't in sync.
Then I saw that it was not mirroring correctly, and at each boot of the server the active drive could be "swapped".
In the end, I chose the simple path: I disabled RAID and used cron to daily backup from one drive to the other in the end of the day.
Problem solved, everybody got happy.
From what I've heard, this setup hasn't broken again (since nobody dared mess with it after I left).
Lesson learned: follow Occam's razor [wikipedia.org] ("The simplest answer is usually the correct answer.").
By the way, as far as availability is concerned, all I had to do would be to get one of the drives to another machine and boot up, as I could do when a lightning fried the motherboard even with correct grounding and UPS.</sentencetext>
</comment>
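The end-of-day cron mirroring the commenter above settled on can be sketched as a small script. The mount points, the use of rsync, and the schedule below are illustrative assumptions, not the poster's actual setup:

```shell
#!/bin/sh
# nightly-mirror.sh - one-way copy from the live drive to the backup drive,
# in the spirit of the "disable RAID, back up via cron" approach above.
mirror() {
    src=$1
    dst=$2
    # -a preserves permissions, ownership, and timestamps; --delete makes the
    # destination an exact mirror (drop it if deleted files should survive).
    rsync -a --delete "$src"/ "$dst"/
}

# Typical invocation: nightly-mirror.sh /mnt/live /mnt/backup
# (both mount points are hypothetical placeholders)
if [ "$#" -ge 2 ]; then
    mirror "$1" "$2"
fi
```

A crontab entry such as `0 23 * * * /usr/local/sbin/nightly-mirror.sh /mnt/live /mnt/backup` would run it nightly. Unlike live RAID mirroring, the copy is up to a day stale, which is exactly the trade-off the poster accepted in exchange for simplicity.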
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188886</id>
	<title>And the Key Factor is....</title>
	<author>Anonymous</author>
	<datestamp>1258805100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Let's cut to the chase - how much MONEY do you have?  It's all well and good to ask pie-in-the-sky questions, but then reality sets in and we find you can't afford it.</p><p>Why don't you start with what you CAN afford, and go from there ('cause you know that's what your PHB and Bean Counters are going to tell you).</p></htmltext>
<tokenext>Lets cut to the chase - how much MONEY do you have .
It 's all well to ask pie-in-the-sky questions , but then reality sets in and we find you ca n't afford it.Why do n't you start with what you CAN afford , and then go from there ( cause you know that 's what your PHB and Bean Counters are going to tell you ) .</tokentext>
<sentencetext>Lets cut to the chase - how much MONEY do you have.
It's all well to ask pie-in-the-sky questions, but then reality sets in and we find you can't afford it.Why don't you start with what you CAN afford, and then go from there (cause you know that's what your PHB and Bean Counters are going to tell you).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30202074</id>
	<title>Many datacenters can't build out bladecenters</title>
	<author>Colin Smith</author>
	<datestamp>1258992000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
<htmltext><p>The biggest problem I've found with blades is that you can't fill a rack with them. Several of the datacenters I've come across have been unable to fit more than one bladecenter per rack, cooling and power being the problem.</p><p>At the moment, a rack full of 1U boxes looks like the highest density to me.</p></htmltext>
<tokenext>The biggest problem I 've found with blades is that you ca n't fill a rack with them .
Several of the datacenters I 've come across have been unable to fit more than one bladecenter per rack .
Cooling and power being the problem.At the moment .
A rack full of 1U boxes look like the highest density to me .
 </tokentext>
<sentencetext>The biggest problem I've found with blades is that you can't fill a rack with them.
Several of the datacenters I've come across have been unable to fit more than one bladecenter per rack.
Cooling and power being the problem.At the moment.
A rack full of 1U boxes look like the highest density to me.
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200682</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>buchanmilne</author>
	<datestamp>1258980480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>To have true high-availability, even 2 VMware servers isn't enough, you need a reliable shared storage system that both servers can access.</p><p>Even then, the storage chassis itself will be a central point of failure.</p></div><p>With Linux, DRBD, GFS, and either KVM or Xen, you don't need shared storage: DRBD does the replication for you between physical nodes, GFS provides the "VMFS"-type concurrently accessible filesystem, and you get live migration for free.</p><div class="quote"><p>To have true HA you need a pair of independent shared storage units with continuous synchronous replication and some reliable mechanism of failover.</p></div><p>If you're looking at that level, most decent storage arrays have redundant controllers; you shouldn't need a second array for HA, only for DR (where the D in DR stands for disaster, the kind where nothing in the vicinity of the first array works).</p>
	</htmltext>
<tokenext>To have true high-availability , even 2 VMware servers is n't enough , you need a reliable shared storage system that both servers can access.Even then , the storage chassis itself will be a central point of failure.With Linux , DRBD , GFS and either KVM or Xen , you do n't need shared storage , as DRBD does the replication for you between physical nodes , GFS does the " VMFS " -type concurrently accessible filesystem , and you get live migration free.To have true HA you need a pair of independent shared storage units with continuous synchronous replication and some reliable mechanism of failover.If you 're looking at that level , most decent storage arrays have redundant controllers , you should n't need a second array for HA , mainly for DR ( where D in DR stands for disaster , the kind where nothing in the vicinity of the first array works ) .</tokentext>
<sentencetext>To have true high-availability, even 2 VMware servers isn't enough, you need a reliable shared storage system that both servers can access.Even then, the storage chassis itself will be a central point of failure.With Linux, DRBD, GFS and either KVM or Xen, you don't need shared storage, as DRBD does the replication for you between physical nodes, GFS does the "VMFS"-type concurrently accessible filesystem, and you get live migration free.To have true HA you need a pair of independent shared storage units with continuous synchronous replication and some reliable mechanism of failover.If you're looking at that level, most decent storage arrays have redundant controllers, you shouldn't need a second array for HA, mainly for DR (where D in DR stands for disaster, the kind where nothing in the vicinity of the first array works).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190776</parent>
</comment>
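As a rough illustration of the DRBD half of the setup described in the comment above, a minimal two-node resource definition looks something like this. Hostnames, device names, and addresses are hypothetical placeholders, and the GFS and KVM/Xen layers need their own configuration on top:

```
# /etc/drbd.d/r0.res - minimal synchronously replicated resource (DRBD 8.x style)
resource r0 {
    protocol  C;             # fully synchronous replication between the nodes
    device    /dev/drbd0;    # replicated block device the filesystem sits on
    disk      /dev/sdb1;     # local backing disk on each node
    meta-disk internal;
    on nodeA { address 192.168.0.1:7789; }
    on nodeB { address 192.168.0.2:7789; }
}
```

With the resource up on both nodes (and dual-primary mode enabled, which concurrent GFS access requires), the cluster filesystem is then created on /dev/drbd0 and mounted from both hosts.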
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188934</id>
	<title>Get someone experienced on the boat!</title>
	<author>lukas84</author>
	<datestamp>1258805340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>You know, you could've started with a few more details - what operating system are you running on the servers? What OS are the clients running? What level of service are you trying to achieve? How many people work in your shop? What's their level of expertise?</p><p>If you're asking this on Slashdot now, it means you don't have enough experience with this yet - so my first advice would be to get someone involved who does: someone with lots of experience and knowledge of the platform you work on. This means you'll have backup in case something goes south, and your network design will benefit from their experience.</p><p>As for other advice, make sure you get the requirements from the higher-ups in writing. Sometimes they have ridiculous ideas regarding the availability they want and how much they're willing to pay for it.</p></htmltext>
<tokenext>You know , you could 've started with a bit more details - what operating system are you running on the servers ?
What OS are the clients running ?
What level of service are you trying to achieve ?
How many people work in your shop ?
What 's their level of expertise ? If you 're asking this on Slashdot now , it means you do n't enough experience with this yet - so my first advice would be to get someone involved who does .
Someone with many people with lots of experience and knowledge on the platform you work on .
This means you 'll have backup in case something goes south and your network design will benefit from their experience.As for other advise , make sure you get the requirements from the higher-ups in writing .
Sometimes they have ridiculous ideas regarding they availability they want and how much they 're willing to pay for it .</tokentext>
<sentencetext>You know, you could've started with a bit more details - what operating system are you running on the servers?
What OS are the clients running?
What level of service are you trying to achieve?
How many people work in your shop?
What's their level of expertise?If you're asking this on Slashdot now, it means you don't enough experience with this yet - so my first advice would be to get someone involved who does.
Someone with many people with lots of experience and knowledge on the platform you work on.
This means you'll have backup in case something goes south and your network design will benefit from their experience.As for other advise, make sure you get the requirements from the higher-ups in writing.
Sometimes they have ridiculous ideas regarding they availability they want and how much they're willing to pay for it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189370</id>
	<title>Check virtual load balancers</title>
	<author>Anonymous</author>
	<datestamp>1258809240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you're considering virtualisation and high availability, check with a vendor like Zeus (www.zeus.com) for a software version of a load balancer (both local and global) that can run in a virtual environment.</p></htmltext>
<tokenext>If you consider virtualisation and high availability check with vendor like Zeus ( www.zeus.com ) to get software version of load balancer ( both local and global ) that can run in virtual environment .</tokentext>
<sentencetext>If you consider virtualisation and high availability check with vendor like Zeus (www.zeus.com) to get software version of load balancer (both local and global) that can run in virtual environment.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190926</id>
	<title>Re:Why?</title>
	<author>mysidia</author>
	<datestamp>1258824420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>
It creates a configuration nightmare: apps with conflicting configurations.
</p><p>
Changes required for one app may break other apps.
</p><p>
Also, many OSes don't scale well.
</p><p>
In a majority of cases you actually get <b>greater</b> total aggregate performance out of the hardware by divvying it up into multiple servers,
when your apps are not actually CPU-bound or I/O-bound.
</p><p>
Linux is like this.  For example, running Apache: after a certain number of requests, the OS uses the hardware inefficiently and can't answer nearly as many requests as it <b>should</b> be able to.  By dividing the box into 4 virtual servers instead, one per CPU, you can multiply the number of requests that can be handled 10- or 20-fold.
</p><p>
You may even think you're CPU-bound on Linux when you are not: load average may be high due to the number of Apache processes contending with each other, creating a false impression of high CPU or IO usage when, in fact, the bottleneck is in the app/kernel's parallel-processing capabilities.
</p><p>
Exchange is also like this: better to scale out 2 virtual machines, each with 32GB of RAM and 4x3GHz CPUs dedicated to it, than one server with 64GB RAM and 8x3GHz CPUs.
The former is a beefy server but doesn't gain much from the extra resources.  The two servers virtualized on one box may have much better performance than 1 physical server, if you are using Intel Nehalem CPUs and properly configure your VMs (i.e. you actually do it right and follow all recommended practices, including LUN/guest partition block alignment, rather than just using default settings).
</p></htmltext>
<tokenext>It creates a configuration nightmare .
Apps with conflicting configurations .
Changes required for one app may break other apps .
Also , many OSes do n't scale well .
In a majority of cases you actually get greater total aggregate performance out of the hardware by divvying it up into multiple servers .
When your apps are not actually CPU-bound or I/O bound .
Linux is like this .
For example , in running Apache.. after a certain number of requests , the OS uses the hardware inefficiently , and ca n't answer nearly as many requests as it should be able to .
By dividing it into 4 virtual servers instead , for your 4 CPUs , you can multiply the number of requests that can be handled by 10 or 20 fold .
You may even think your CPU bound on Linux when you are not : load average may be high due to number of Apache processes that are contending with each other , and can create a false impression of high CPU or IO usage , when in fact , you have a bottleneck in the app/kernel 's parallel processing capabilities .
Exchange is also like this.. better to scale out 2 virtual machines with 32gb a RAM and 4x3ghz CPUs dedicated to it each , than one server with 64gb RAM and 8x3ghz CPUs .
The former is a beefy server but does n't have much advantage from adding the extra resources .
The two servers virtualized on one box may have much better performance than 1 physical server , if you are using Intel Nehalem CPUs and properly configure your VMs ( i.e .
you actually do it right , and perform all recommended practices including LUN/guest partition block alignment , and do n't just use default settings ) .</tokentext>
<sentencetext>
It creates a configuration nightmare.
Apps with conflicting configurations.
Changes required for one app may break other apps.
Also, many OSes don't scale well.
In a majority of cases you actually get greater total aggregate performance out of the hardware by divvying it up into multiple servers.
When your apps are not actually CPU-bound or I/O bound.
Linux is like this.
For example, in running Apache.. after a certain number of requests, the OS uses the hardware inefficiently, and can't answer nearly as many requests as it should be able to.
By dividing it into 4 virtual servers instead, for your 4 CPUs,  you can  multiply the number of requests that can be handled by  10 or 20 fold.
You may even think your CPU bound on Linux when you are not:  load average may be high due to number of Apache processes that are contending with each other, and can create a false impression of high CPU or IO usage, when in fact, you have a bottleneck in the app/kernel's  parallel processing capabilities.
Exchange is also like this..  better to scale out 2 virtual machines with  32gb a RAM and 4x3ghz CPUs dedicated to it each, than one server with 64gb RAM and 8x3ghz CPUs.
The former is a beefy server but doesn't have much advantage from adding the extra resources.
The two servers virtualized on one box may have much better performance  than 1 physical server, if you are using Intel Nehalem CPUs and properly configure your VMs  (i.e.
you actually do it right, and perform all recommended practices including LUN/guest partition block alignment, and don't just use default settings).
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30210464</id>
	<title>Re:P2V and consolidate</title>
	<author>GWBasic</author>
	<datestamp>1259002140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>The low-budget solution: buy one server (like a Poweredge 2970) with like 16GB RAM, a combination of 15k and 7.2k RAID1 arrays, and 4hr support. Install a free hypervisor like Vmware Server or Xen, and P2V your oldest hardware onto it. Later on you can spend $$$$$ on clustering, HA, SANs, and clouds. But P2V of your old hardware onto new hardware is a cost-effective way to start.</p></div><p>Or, you can use Capacity Planner to determine what you really need.</p>
	</htmltext>
<tokenext>The low-budget solution : buy one server ( like a Poweredge 2970 ) with like 16GB RAM , a combination of 15k and 7.2k RAID1 arrays , and 4hr support .
Install a free hypervisor like Vmware Server or Xen , and P2V your oldest hardware onto it .
Later on you can spend $ $ $ $ $ on clustering , HA , SANs , and clouds .
But P2V of your old hardware onto new hardware is a cost-effective way to start.Or , you can use Capacity Planner to determine what you really need .</tokentext>
<sentencetext>The low-budget solution: buy one server (like a Poweredge 2970) with like 16GB RAM, a combination of 15k and 7.2k RAID1 arrays, and 4hr support.
Install a free hypervisor like Vmware Server or Xen, and P2V your oldest hardware onto it.
Later on you can spend $$$$$ on clustering, HA, SANs, and clouds.
But P2V of your old hardware onto new hardware is a cost-effective way to start.Or, you can use Capacity Planner to determine what you really need.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189174</id>
	<title>Confuscious Say..</title>
	<author>Anonymous</author>
	<datestamp>1258807380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>..if it aint broke..</p></htmltext>
<tokenext>..if it aint broke. .</tokentext>
<sentencetext>..if it aint broke..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198042</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>drsmithy</author>
	<datestamp>1258898760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>If you are going to do virtualization, the only benefit comes when you invest in a cluster otherwise don't do it at all.</i>
</p><p>This is not true at all.  Indeed, the benefits of virtualisation are such that even for a single service on a single server, it's generally better to make it a VM.</p></htmltext>
<tokenext>If you are going to do virtualization , the only benefit comes when you invest in a cluster otherwise do n't do it at all .
This is not true at all .
Indeed , the benefits of virtualisation are such that even for a single service on a single server , it 's generally better to make it a VM .</tokentext>
<sentencetext> If you are going to do virtualization, the only benefit comes when you invest in a cluster otherwise don't do it at all.
This is not true at all.
Indeed, the benefits of virtualisation are such that even for a single service on a single server, it's generally better to make it a VM.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092</id>
	<title>P2V and consolidate</title>
	<author>Anonymous</author>
	<datestamp>1258806360000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext>The low-budget solution: buy one server (like a Poweredge 2970) with like 16GB RAM, a combination of 15k and 7.2k RAID1 arrays, and 4hr support.  Install a free hypervisor like Vmware Server or Xen, and P2V your oldest hardware onto it.

Later on you can spend $$$$$ on clustering, HA, SANs, and clouds.  But P2V of your old hardware onto new hardware is a cost-effective way to start.</htmltext>
<tokenext>The low-budget solution : buy one server ( like a Poweredge 2970 ) with like 16GB RAM , a combination of 15k and 7.2k RAID1 arrays , and 4hr support .
Install a free hypervisor like Vmware Server or Xen , and P2V your oldest hardware onto it .
Later on you can spend $ $ $ $ $ on clustering , HA , SANs , and clouds .
But P2V of your old hardware onto new hardware is a cost-effective way to start .</tokenext>
<sentencetext>The low-budget solution: buy one server (like a Poweredge 2970) with like 16GB RAM, a combination of 15k and 7.2k RAID1 arrays, and 4hr support.
Install a free hypervisor like Vmware Server or Xen, and P2V your oldest hardware onto it.
Later on you can spend $$$$$ on clustering, HA, SANs, and clouds.
But P2V of your old hardware onto new hardware is a cost-effective way to start.</sentencetext>
</comment>
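The consolidation advice above implies a quick sizing check before buying the one big box: sum the working sets of the services you plan to P2V and leave headroom for the hypervisor. A minimal back-of-envelope sketch (every figure is a hypothetical placeholder, not a measurement):

```python
# Back-of-envelope RAM sizing for consolidating old boxes onto one host.
# All numbers below are hypothetical placeholders, not measurements.
services = {"www": 2.0, "mail": 4.0, "dns": 0.5, "dhcp": 0.5, "ftp": 1.0}  # GiB each
hypervisor_overhead = 2.0   # GiB reserved for the hypervisor itself
headroom = 1.25             # 25% growth margin

needed = (sum(services.values()) + hypervisor_overhead) * headroom
print(f"Provision at least {needed:.1f} GiB of RAM")  # 12.5 GiB with these figures
```

The same arithmetic applies to disk IOPS and CPU cores; measure the real boxes before committing to hardware.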
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189150</id>
	<title>Real question</title>
	<author>Sepiraph</author>
	<datestamp>1258807080000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext>How did you get put in charge of such a project when it is obvious that you have no clue on carrying out the tasks?</htmltext>
<tokenext>How did you get put in charge of such a project when it is obvious that you have no clue on carrying out the tasks ?</tokentext>
<sentencetext>How did you get put in charge of such a project when it is obvious that you have no clue on carrying out the tasks?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190332</id>
	<title>Re:P2V and consolidate</title>
	<author>AF_Cheddar_Head</author>
	<datestamp>1258816800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah, go ahead and price P-to-V capability in VMware; last I checked it wasn't in the free ESXi version.</p><p>Oh, and by the way, make sure your hardware has virtualization support built in, or a 64-bit OS in the VM is out of the question.</p><p>Implementing virtualization in a production environment is not as easy or cheap as a lot of people seem to think.</p><p>I have implemented it and don't think it's the right choice for a small one-man operation. A large data center, absolutely, but not the small branch office. It's expensive, especially if you need hardware-level redundancy.</p></htmltext>
<tokenext>Yeah go ahead and price P-to-V capability in VMWare , last I checked it was n't in the free ESXi version .
Oh by the way make sure your hardware has Virtualization Support built in or 64-bit OS in the VM is out of the question .
Implementing virtualization in a production environment is not as easy or cheap as a lot of people seem to think .
I have implemented it and do n't think it 's the right choice for small one-man operation .
A large data center absolutely but not the small branch office .
Expensive , especially if you need hardware-level redundancy .</tokenext>
<sentencetext>Yeah go ahead and price P-to-V capability in VMWare, last I checked it wasn't in the free ESXi version.
Oh by the way make sure your hardware has Virtualization Support built in or 64-bit OS in the VM is out of the question.
Implementing virtualization in a production environment is not as easy or cheap as a lot of people seem to think.
I have implemented it and don't think it's the right choice for small one-man operation.
A large data center absolutely but not the small branch office.
Expensive, especially if you need hardware-level redundancy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194858</id>
	<title>Re:Think about the complexity of duplication</title>
	<author>psych0munky</author>
	<datestamp>1258916040000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Maybe this is asked elsewhere in these threads, but the one thing that seems not to be asked here is not just "What are the business requirements?", but also "What are your business application requirements?".  While it may seem implied in the former question, IME, it is usually not addressed enough by simply asking the former.  In asking the former, it seems that you get nice "businessy" answers like "we need Y application to be back up and running in X time".  What it doesn't answer is: what are the requirements for Y application?  Does it need to have internet connectivity, connectivity to a central database, or is it completely stand-alone?  In the second case, unless you have a sufficiently advanced application (most aren't), simply putting an instance of Y application locally in case your link goes down may not cut it if it does not have a suitable "caching" mechanism to store data until the link comes back and then forward it on to the central DB.
</p><p>
I have seen many hardware upgrades "fail" even though the upgrade was technically successful.  This was usually caused by the project team asking the right business questions, but forgetting to drill down and ask the right questions of the application providers (vendors or internal development staff).
</p><p>
I was actually involved in an Active Directory "upgrade" project where the project team wanted not simply to upgrade AD to the latest version, but also to refactor the directory structure (due to some really poor choices in the initial implementation, which were causing daily grief for the maintainers of the information), without considering the impact on the applications we had built in-house that were using AD for authn and authz (most would've likely been able to handle the changes, since they were fairly configurable in this regard).  I raised this concern many times, and almost every time it was ignored, or it was "yeah, we will consider that", and then it got dropped on the ground.  Fortunately, just prior to implementation, the project got "put on the back-burner" and the project manager (a contractor) was let go due to "budget cuts".  Hopefully when/if this gets traction again, we will actually look at what else besides the network and people's workstation logins will be affected.
</p><p>
I still struggle to understand what causes this rift between infrastructure people and development people (I have been on both sides, but mostly on the development side), as a poor application choice can severely restrict what can be done with a company's hardware, and conversely, a poor infrastructure choice can unexpectedly break an application.
</p><p>
However, if you are only a company of 150-ish employees, hopefully you are still small enough to deal with issues quickly and efficiently (it seems to get worse as corporations get bigger).
</p></htmltext>
<tokenext>Maybe this is asked elsewhere in these threads , but the one thing that seems to not be asked here is not just " What are the business requirements ?
" , but also " What are your business application requirements ? " .
While it may seem implied in the former question , IME , it is usually not addressed enough by simply asking the former .
In asking the former , it seems that you get nice " businessy " answers like " we need Y application to be back up and running in X time " .
What it does n't answer , is what are the requirements for Y application ?
Does it need to have internet connectivity , connectivity to a central database , or is it completely stand-alone ?
In the second case , unless you have a sufficiently advanced application ( most are n't ) , simply putting an instance of Y application locally in case your link goes down , may not cut it if it does not have suitable " caching " mechanism to store data until the link comes back and then forward it on to the central DB .
I have seen many hardware upgrades " fail " even though the upgrade was technically successful .
This was usually caused by the project team asking the right business questions , but forgetting to drill down and ask the right questions of the application providers ( vendors or internal development staff ) .
I was actually involved in a Active Directory " upgrade " project where the project team was wanting not to simply upgrade AD to the latest version , but also refactor the directory structure ( due to some really poor choices on the initial implementation which was causing daily grief for the maintainers of the information ) , without considering the impacts to the applications we had built in-house that were using AD for Authn and authz ( most would 've likely been able to handle the changes since they were fairly configurable in this regard ) .
I raised this concern many times and almost everytime , it was ignored , or it was " yeah , we will consider that " , and then it got dropped on the ground .
Fortunately , just prior to implementation , the project got " put on the back-burner " and the project manager ( a contractor ) was let go due to " budget cuts " .
Hopefully when/if this gets traction again , we will actually look at what else besides the network and people 's workstation login 's will be affected .
I still struggle to understand what causes this rift between infrastructure people and development people ( I have been on both sides , but mostly on the development side ) , as a poor application choice can severely restrict what can be done with a company 's hardware , and inversely , a poor infrastructure choice can unexpectedly break an application .
However , if you are only a company of 150ish employees , hopefully you are still small enough to deal with issue quickly and efficiently ( it seems to get worse as corporations get bigger ) .</tokenext>
<sentencetext>Maybe this is asked elsewhere in these threads, but the one thing that seems to not be asked here is not just "What are the business requirements?
", but also "What are your business application requirements?".
While it may seem implied in the former question, IME, it is usually not addressed enough by simply asking the former.
In asking the former, it seems that you get nice "businessy" answers like "we need Y application to be back up and running in X time".
What it doesn't answer, is what are the requirements for Y application?
Does it need to have internet connectivity, connectivity to a central database, or is it completely stand-alone?
In the second case, unless you have a sufficiently advanced application (most aren't), simply putting an instance of Y application locally in case your link goes down, may not cut it if it does not have suitable "caching" mechanism to store data until the link comes back and then forward it on to the central DB.
I have seen many hardware upgrades "fail" even though the upgrade was technically successful.
This was usually caused by the project team asking the right business questions, but forgetting to drill down and ask the right questions of the application providers (vendors or internal development staff).
I was actually involved in a Active Directory "upgrade" project where the project team was wanting not to simply upgrade AD to the latest version, but also refactor the directory structure (due to some really poor choices on the initial implementation which was causing daily grief for the maintainers of the information), without considering the impacts to the applications we had built in-house that were using AD for Authn and authz (most would've likely been able to handle the changes since they were fairly configurable in this regard).
I raised this concern many times and almost everytime, it was ignored, or it was "yeah, we will consider that", and then it got dropped on the ground.
Fortunately, just prior to implementation, the project got "put on the back-burner" and the project manager (a contractor) was let go due to "budget cuts".
Hopefully when/if this gets traction again, we will actually look at what else besides the network and people's workstation login's will be affected.
I still struggle to understand what causes this rift between infrastructure people and development people (I have been on both sides, but mostly on the development side), as a poor application choice can severely restrict what can be done with a company's hardware, and inversely, a poor infrastructure choice can unexpectedly break an application.
However, if you are only a company of 150ish employees, hopefully you are still small enough to deal with issue quickly and efficiently (it seems to get worse as corporations get bigger).
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188904</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195674</id>
	<title>get good servers</title>
	<author>Anonymous</author>
	<datestamp>1258922580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>(It's so sad to see all the egos and superiority complexes here.)</p><p>My only advice is to use a good brand of server hardware. We integrate our software product into Dell, HP and IBM servers, and in our experience (tens of thousands of integrations), IBM provides very poor-quality products that take double the setup time, double the maintenance time, and have double the failure rate of both Dell and HP. I have no preference between the other two brands; they are both quite good.</p><p>It's a pity IBM servers have such a good reputation. It really is undeserved.</p></htmltext>
<tokenext>( It 's so sad to see all the egos and superiority complexes here .
) My only advice is to use a good brand of server hardware .
We integrate our software product into Dell , HP and IBM servers , and in our experience ( 10 's of thousands of integrations ) , IBM provides very poor quality products that take double the setup time , double the maintenance time and have double the failure rate of both Dell and HP .
I have no preference between the other 2 brands .
They are both quite good .
It 's a pity IBM servers have such a good reputation .
It really is undeserved .</tokenext>
<sentencetext>(It's so sad to see all the egos and superiority complexes here.
) My only advice is to use a good brand of server hardware.
We integrate our software product into Dell, HP and IBM servers, and in our experience (10's of thousands of integrations), IBM provides very poor quality  products that take double the setup time,  double the maintenance time and have double the failure rate of both Dell and HP.
I have no preference between the other 2 brands.
They are both quite good.
It's a pity IBM servers have such a good reputation.
It really is undeserved.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30197310</id>
	<title>Network Overhaul - Things To Consider</title>
	<author>jonnyboy3us</author>
	<datestamp>1258891980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I was put in your exact position four years ago at the place I currently work.  Here are some things I suggest:

1- Make a plan.  These things can't be fixed in a day.  My boss, the CIO, said, "Rome wasn't built in a day."  He was right on with that one.  It took me three years to get things to where they needed to be.  One piece at a time.
2- Make sure you break things up and prioritize them.  What is the 'oldest' equipment or the pain points?  Is the network holding up?  Connectivity is the most important part.  Make sure you have your network running well before you mess with other parts of the system or put additional strain on the system.
3- Make sure you have the right people on board.  I call this checks and balances.  You need to have firepower behind your decisions, especially when it comes to making the budget.
4- Remember the phrase:  KISS.  Burn it into your mind...  It means: keep it simple, stupid.  Don't bow to salesmen, brochures, 'white papers' or peer pressure.  Experience and checks and balances are essential.

And finally, be cautious and move slowly.  Systems don't all just fall apart at once.  Once you've prioritized, gotten the right people on board and have your ducks in a row, things will run smoothly.  If management gets in your way, refer back to the checks and balances you set up and force it down their throats.  It's kind of sad to say that this is just like playing chess, but when management doesn't trust IT in general, you have to prove yourself.  Following the above steps will help.

Good Luck.</htmltext>
<tokenext>I was put in your exact position four years ago with the current place I work with .
Here 's some things I suggest : 1- Make a plan .
These things ca n't be fixed in a day .
My boss , the CIO said , " Rome was n't built in a day .
" He was right on with that one .
It took me three years to get things to where they needed to be .
One piece at a time .
2- Make sure you break things up and prioritize them .
What is the 'oldest ' equipment or the pain points ?
Is the network holding up ?
Connectivity is the most important part .
Make sure you have your network running well before you mess with other parts of the system or put additional strain on the system .
3- Make sure you have the right people on board .
I call this checks and balances .
You need to have firepower behind your decisions , especially when it comes to making the budget .
4- Remember the phrase : KISS .
Burn it in your mind... It means , keep it simple , stupid .
Do n't bow to salesman , brochures , 'white papers ' or peer pressure .
Experience and checks and balances are essential .
And finally , be cautious and move slow .
Systems do n't all just fall apart at once .
Once you 're prioritized , gotten the right people on board and have your ducks in a row , things will run smothly .
If managment gets in your way , refer back to the checks and balances you set up and force it down their throats .
It 's kind of sad to say that this is just like playing chess , but when management does n't trust IT in general , you have to prove yourself .
Following the above steps will help .
Good Luck .</tokenext>
<sentencetext>I was put in your exact position four years ago with the current place I work with.
Here's some things I suggest:

1- Make a plan.
These things can't be fixed in a day.
My boss, the CIO said, "Rome wasn't built in a day.
"  He was right on with that one.
It took me three years to get things to where they needed to be.
One piece at a time.
2- Make sure you break things up and prioritize them.
What is the 'oldest' equipment or the pain points?
Is the network holding up?
Connectivity is the most important part.
Make sure you have your network running well before you mess with other parts of the system or put additional strain on the system.
3- Make sure you have the right people on board.
I call this checks and balances.
You need to have firepower behind your decisions, especially when it comes to making the budget.
4- Remember the phrase:  KISS.
Burn it in your mind...  It means, keep it simple, stupid.
Don't bow to salesman, brochures, 'white papers' or peer pressure.
Experience and checks and balances are essential.
And finally, be cautious and move slow.
Systems don't all just fall apart at once.
Once you're prioritized, gotten the right people on board and have your ducks in a row, things will run smothly.
If managment gets in your way, refer back to the checks and balances you set up and force it down their throats.
It's kind of sad to say that this is just like playing chess, but when management doesn't trust IT in general, you have to prove yourself.
Following the above steps will help.
Good Luck.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191374</id>
	<title>Re:P2V and consolidate</title>
	<author>Anonymous</author>
	<datestamp>1258830420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm totally with you on virtualization being overused.</p><p>That said, how do virtualization vendors get off selling 'P2V' services extra, when you can do the same thing using traditional imaging in which the image happens to get deployed on a virtual machine?  Virtualization vendors want 'P2V' to appear magical so as to distract from the fact that, fundamentally, 'P2P' is equally plausible in most cases where P2V is possible for OS-instance continuity across hardware updates.</p></htmltext>
<tokenext>I 'm totally with you on virtualization being over used .
That said , how do virtualization vendors get off selling 'P2V ' services extra , when you can do the same thing using traditional imaging in which the image happens to get deployed on a virtual machine ?
Virtualization vendors want 'P2V ' to appear magical so as to distract from the fact that , fundamentally , 'P2P ' is equally plausible in most cases where P2V is possible for OS instance continuity across hardware updates .</tokenext>
<sentencetext>I'm totally with you on virtualization being over used.
That said, how do virtualization vendors get off selling 'P2V' services extra, when you can do the same thing using traditional imaging in which the image happens to get deployed on a virtual machine?
Virtualization vendors want 'P2V' to appear magical so as to distract from the fact that, fundamentally, 'P2P' is equally plausible in most cases where P2V is possible for OS instance continuity across hardware updates.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190332</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189312</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>pe1rxq</author>
	<datestamp>1258808760000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Is it so hard to not mix up dhcpd.conf and named.conf? Do you need virtualization for that?</p><p>Let me give you a hint: YOU DON'T</p></htmltext>
<tokenext>Is it so hard to not mix up dhcpd.conf and named.conf ?
Do you need virtualization for that ?
Let me give you a hint : YOU DO N'T</tokenext>
<sentencetext>Is it so hard to not mix up dhcpd.conf and named.conf?
Do you need virtualization for that?
Let me give you a hint: YOU DON'T
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189458</id>
	<title>The fundamental mistake you're making...</title>
	<author>machinegestalt</author>
	<datestamp>1258809720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From your post, you're not looking at this with the right perspective, not asking the right questions, and not asking them of the right people.  You state that you have been put in charge of "maintaining", and never once mention anything about your company's predicted growth, development plans, future computation needs, near- and long-term service offerings, uptime requirements, security requirements and so forth.  You have to do a requirements analysis that extends five to ten years out, and design a system that can grow seamlessly with your employer, meeting their current and expected needs in all pertinent areas.</p><p>If you can develop a system that does what is required on paper, the next step is to implement it in parallel with the existing system, and transition services and users over in phases.  After all services have been transitioned, you can decommission the old infrastructure piece by piece.</p></htmltext>
<tokenext>From your post , you not looking at this with the right perspective , not asking the right questions , nor asking them to the right people .
You state that you have been put in charge of " maintaining " and never once mention anything about your company 's predicted growth , development plans , future computation needs , near and long term service offerings , uptime requirements , security requirements or so forth .
You have to do a requirements analysis that extends to between five and ten years and design a system that can grow seamlessly with your employer , meeting their current and expected needs in all pertinent areas .
If you can develop a system that does what is required on paper , the next step is to implement it in parallel with the existing system , and transition services and users over in phases .
After all services have been transitioned , you can decommission the old infrastructure piece by piece .</tokenext>
<sentencetext>From your post, you not looking at this with the right perspective, not asking the right questions, nor asking them to the right people.
You state that you have been put in charge of "maintaining" and never once mention anything about your company's predicted growth, development plans, future computation needs, near and long term service offerings, uptime requirements, security requirements or so forth.
You have to do a requirements analysis that extends to between five and ten years and design a system that can grow seamlessly with your employer, meeting their current and expected needs in all pertinent areas.
If you can develop a system that does what is required on paper, the next step is to implement it in parallel with the existing system, and transition services and users over in phases.
After all services have been transitioned, you can decommission the old infrastructure piece by piece.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191896</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>dissy</author>
	<datestamp>1258881840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wow...  Did you just seriously recommend he purchase 50 servers for each location?</p><div class="quote"><p>I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box <b>for each service.</b></p></div><p>25 services is next to nothing.  A single domain controller has that running on a single box.</p><p>And you want him to break out each service to its own machine... with a second box for redundancy.</p><p>I guess I am happy that you have $20k+ to spend on two low-end boxes for, e.g., just DNS.  But that is stupid as hell.<br>Even worse, you are wasting a dual-core 2GHz system on an NTP time-sync server (oh wait, two machines, like you said).</p><p>What waste.  Waste of hardware, waste of electricity, waste of network ports, and waste of time managing all of that.</p><p>Not to mention a total lack of forethought in planning.</p><p>I mean, if your DHCP server goes down and has no failover, then the 30 other machines you dedicated to 15 services that are also network-related will not be used.  Might as well put DNS on with DHCP, since if one goes down the other will not help you one bit.<br>See how that works?  That was five seconds' worth of thought and saved your company $20k!</p><p>Imagine what would happen if you put more than seconds of thought into the problem, like hours or days worth of thought!  It <i>could</i> save you hundreds of thousands of dollars compared to your current recommendations.</p><p>Hell, with 150 users, you probably just spent 10 years' worth of their IT budget on your one suggestion alone!</p>
	</htmltext>
<tokenext>Wow... Did you just seriously recommend he purchase 50 servers for each location ? ? ?
I , personally , am TOTALLY in agreement with the ethos of whoever designed it , a single box for each service .
25 services is next to nothing .
A single domain controller has that running on a single box .
And you want him to break out each service to its own machine... with a second box for redundancy .
I guess I am happy that you have $ 20k + to spend on two low end boxes for eg .
just DNS .
But that is stupid as hell .
Even worse that you are wasting a dual core 2ghz system for a NTP time sync server ( Oh wait , two machines , like you said ) .
What waste .
Waste of hardware , waste of electricity , waste of network port usage , and waste of time managing all of that .
Not to mention total lack of forethought in planning .
I mean , if your DHCP service server goes down , and has no fail over , then the 30 other machines you dedicated to 15 services that are also network related will not be used .
Might as well put DNS on with DHCP since if one goes down the other will not help you one bit .
See how that works ?
that was 5 seconds worth of thought and saved your company $ 20k !
Imagine what would happen if you put more than seconds of thought into the problem , like hours or days worth of thought !
It _could_ save you hundreds of thousands of dollars compared to your current recommendations .
Hell , with 150 users , you probably just spent 10 years worth of their IT budget for your one suggestion alone !</tokenext>
<sentencetext>Wow...  Did you just seriously recommend he purchase 50 servers for each location???
I, personally, am TOTALLY in agreement with the ethos of whoever designed it, a single box for each service.
25 services is next to nothing.
A single domain controller has that running on a single box.
And you want him to break out each service to its own machine... with a second box for redundancy.
I guess I am happy that you have $20k+ to spend on two low end boxes for eg. just DNS.
But that is stupid as hell.
Even worse that you are wasting a dual core 2ghz system for a NTP time sync server (Oh wait, two machines, like you said).
What waste.
Waste of hardware, waste of electricity, waste of network port usage, and waste of time managing all of that.
Not to mention total lack of forethought in planning.
I mean, if your DHCP service server goes down, and has no fail over, then the 30 other machines you dedicated to 15 services that are also network related will not be used.
Might as well put DNS on with DHCP since if one goes down the other will not help you one bit.
See how that works?
That was 5 seconds worth of thought and saved your company $20k!
Imagine what would happen if you put more than seconds of thought into the problem, like hours or days worth of thought!
It _could_ save you hundreds of thousands of dollars compared to your current recommendations.
Hell, with 150 users, you probably just spent 10 years worth of their IT budget for your one suggestion alone!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191050</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>Antique Geekmeister</author>
	<datestamp>1258826220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So does a cluster, of course. The back-end storage array required for virtual host migration, or the Veritas clustering tools you may use for service clustering, also form single points of failure. And Veritas has historically been extremely unstable under load: it's often misconfigured, it's often mishandled entirely, and it often mistakes having a "high reliability filesystem" for having a highly reliable failover system, when that filesystem itself may be corrupted by the actual software. This is a very serious problem for Oracle systems, by the way. Far too many installers mistake "clustering" software for having a master/slave, and mistake master/slave setups for having actual backups.</p></htmltext>
<tokenext>So does a cluster , of course .
The back-end storage array required for virtual host migration , or the Veritas clustering tools you may use for service clustering , also form single points of failure .
And Veritas has historically been extremely unstable under load : it 's often misconfigured , it 's often mishandled entirely , and it often mistakes having a " high reliability filesystem " for having a highly reliable failover system , when that filesystem itself may be corrupted by the actual software .
This is a very serious problem for Oracle systems , by the way .
Far too many installers mistake " clustering " software for having a master/slave , and mistake master/slave setups for having actual backups .</tokentext>
<sentencetext>So does a cluster, of course.
The back-end storage array required for virtual host migration, or the Veritas clustering tools you may use for service clustering, also form single points of failure.
And Veritas has historically been extremely unstable under load: it's often misconfigured, it's often mishandled entirely, and it often mistakes having a "high reliability filesystem" for having a highly reliable failover system, when that filesystem itself may be corrupted by the actual software.
This is a very serious problem for Oracle systems, by the way.
Far too many installers mistake "clustering" software for having a master/slave, and mistake master/slave setups for having actual backups.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190650</id>
	<title>Information Technology Infrastructure Library</title>
	<author>nko321</author>
	<datestamp>1258820640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>1) Thank you, thank you for thinking of best practices before taking serious action.

2) ITIL is your friend. <a href="http://en.wikipedia.org/wiki/Information_Technology_Infrastructure_Library" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Information_Technology_Infrastructure_Library</a> [wikipedia.org]

When implemented deliberately and properly, ITIL makes an IT admin darn near *comfy*. Just remember that ITIL != bureaucracy, ITIL == Best Practices.</htmltext>
<tokenext>1 ) Thank you , thank you for thinking of best practices before taking serious action .
2 ) ITIL is your friend .
http : //en.wikipedia.org/wiki/Information \ _Technology \ _Infrastructure \ _Library [ wikipedia.org ] When implemented deliberately and properly , ITIL makes an IT admin darn near * comfy * .
Just remember that ITIL ! = bureaucracy , ITIL = = Best Practices .</tokentext>
<sentencetext>1) Thank you, thank you for thinking of best practices before taking serious action.
2) ITIL is your friend.
http://en.wikipedia.org/wiki/Information_Technology_Infrastructure_Library [wikipedia.org]

When implemented deliberately and properly, ITIL makes an IT admin darn near *comfy*.
Just remember that ITIL != bureaucracy, ITIL == Best Practices.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198970</id>
	<title>Re:P2V and consolidate</title>
	<author>Alpha830RulZ</author>
	<datestamp>1258907040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yup.  If you want to make your dollar go further, strongly consider <a href="http://www.siliconmechanics.com/" title="siliconmechanics.com"> these guys</a> [siliconmechanics.com].  I have found their gear to be as good as Dell's, and their techs more knowledgeable.   For an office solution, possibly <a href="http://www.siliconmechanics.com/i22783/xeon-5500-2U-4-Node.php" title="siliconmechanics.com"> one or two of these</a> [siliconmechanics.com] would be a great way to start.</p></htmltext>
<tokenext>Yup .
If you want to make your dollar go further , strongly consider these guys [ siliconmechanics.com ] .
I have foudn there gear to be as good as Dell 's , and their techs more knowledgable .
For an office solution , possibly one or two of these [ siliconmechanics.com ] would be a great way to start .</tokentext>
<sentencetext>Yup.
If you want to make your dollar go further, strongly consider  these guys [siliconmechanics.com].
I have found their gear to be as good as Dell's, and their techs more knowledgeable.
For an office solution, possibly  one or two of these [siliconmechanics.com] would be a great way to start.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192148</id>
	<title>Re:Simple and straightforward = complex</title>
	<author>Kjella</author>
	<datestamp>1258887660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>FTFA: "there's hardly any fallback if any of the services dies or an office is disconnected."</p><div class="quote"><p>So let's see if I understand: you want to take a simple, straightforward, easy-to-understand architecture with no single points of failure</p></div><p>Not that I agree with everything the article poster wrote, but in what world does "no fallback" == "no single point of failure"? Sure, there's no one point of total catastrophic failure, but I think he just described two single points of failure, where all users would be without one service or one office without all services.</p><p>I'd keep the architecture, but I'd migrate it slowly to virtual servers running on a high-quality server. That would make a failure more severe, but would make it less likely to happen. The total number of failures should be 1/x where x is the number of servers you replaced. The more servers you have, the more you skimp on server features and quality, not to mention the support to get them back up and running quickly, so that should bring it down further. Of course, if it should fail then all services would fail, but there are usually dependencies - if you get an email that means you should do something in an intranet application, it won't work if either mail or web is down, so in total a complete failure might not be so bad if you recover fast enough.</p><p>Then I'd get a second server, and work per service to make it as redundant as possible - some services may be easy with load balancing or automatic fail-over, others could be hot/cold spares that need recovering from backups, etc., but do what you can, as time permits. Eventually you may reach a point where you have no single point of hardware failure. Network failure is much harder; if he can't handle fail-over and resynchronizing in the data center, there's no way he'll be able to do it between branch offices.
Check your SLAs, check out possibilities for redundant connections, but leave anything else until you have full redundancy in your data center. At that point, you may realize that is more than sufficient.</p>
	</htmltext>
<tokenext>FTFA : " there 's hardly any fallback if any of the services dies or an office is disconnected .
" So let 's see if I understand : you want to take a simple , straightforward , easy-to-understand architecture with no single points of failureNot that I agree with everything the article poster wrote , but in what world does " no fallback " = = " no single point of failure " ?
Sure there 's no one point of total catastrophic failure but I think he just described two single points of failure where all users would be without one service or one office without all services.I 'd keep the architecture , but I 'd migrate it slowly to virtual servers running on a high-quality server .
That would make the failure more severe , but would make it less likely to happen .
The total number of failures should be 1/x where x is the number of servers you replaced .
The more servers you have , the more you skimp on server features and quality not to mention the support to get them back and running quickly so that should bring it down further .
Of course if it should fail then all services would fail , but there 's usually dependencies - if you get an email which means you should do something on an intranet application then it wo n't work if either mail or web is down so in total a complete failure might not be so bad if you recover fast enough.Then I 'd get a second server , and work per service to make it as redundant as possible - some services may be easy with load balancing or automatic fail-over , others could be hot/cold spares that need recovering from backups etc .
but do what you can , as time permits .
Eventually you may reach a point where you have no single point of hardware failure .
Network failure is much harder , if he ca n't handle fail-over and resynchronizing in the data center there 's no way he 'll be able to do it between branch offices .
Check your SLAs , check out possibilities for redundant connections but leave anything else until you have full redundancy in your data center .
At that point , you may realize that is more than sufficient .</tokentext>
<sentencetext>FTFA: "there's hardly any fallback if any of the services dies or an office is disconnected.
"So let's see if I understand: you want to take a simple, straightforward, easy-to-understand architecture with no single points of failureNot that I agree with everything the article poster wrote, but in what world does "no fallback" == "no single point of failure"?
Sure there's no one point of total catastrophic failure but I think he just described two single points of failure where all users would be without one service or one office without all services.I'd keep the architecture, but I'd migrate it slowly to virtual servers running on a high-quality server.
That would make the failure more severe, but would make it less likely to happen.
The total number of failures should be 1/x where x is the number of servers you replaced.
The more servers you have, the more you skimp on server features and quality not to mention the support to get them back and running quickly so that should bring it down further.
Of course if it should fail then all services would fail, but there's usually dependencies - if you get an email which means you should do something on an intranet application then it won't work if either mail or web is down so in total a complete failure might not be so bad if you recover fast enough.
Then I'd get a second server, and work per service to make it as redundant as possible - some services may be easy with load balancing or automatic fail-over, others could be hot/cold spares that need recovering from backups etc.
but do what you can, as time permits.
Eventually you may reach a point where you have no single point of hardware failure.
Network failure is much harder, if he can't handle fail-over and resynchronizing in the data center there's no way he'll be able to do it between branch offices.
Check your SLAs, check out possibilities for redundant connections but leave anything else until you have full redundancy in your data center.
At that point, you may realize that is more than sufficient.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189168</parent>
</comment>
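[Editor's note] The failure-rate arithmetic in the comment above (consolidating x servers onto one host cuts the number of failure events to roughly 1/x, while widening the blast radius of each event) can be made concrete. The numbers here are invented for illustration: ten old boxes, each assumed to fail independently about once every three years.

```shell
# Back-of-envelope check of the "1/x failures" claim above, with invented
# numbers: 10 old single-service boxes vs. 1 consolidated host.
awk 'BEGIN {
    rate = 1/3                      # assumed failures per server per year
    separate     = 10 * rate        # events/year with 10 separate boxes
    consolidated =  1 * rate        # events/year with 1 consolidated host
    printf "separate: %.2f events/yr, 1 service down each time\n", separate
    printf "consolidated: %.2f events/yr, all 10 services down each time\n", consolidated
    printf "expected service-outages/yr: %.2f vs %.2f\n", separate * 1, consolidated * 10
}'
```

Under these (crude, independence-assuming) numbers the expected service-outages per year come out the same; what consolidation changes is frequency versus blast radius, which is exactly the trade-off the comment describes.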
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194164</id>
	<title>Are blades really such a good idea?</title>
	<author>TheLink</author>
	<datestamp>1258911360000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>In my uninformed opinion, blades are mainly a way for hardware vendors to extract more money from suckers.<br><br>They probably have niche uses. But when you get to the details they're not so great. Yes the HP iLO stuff is cool etc... When it works.<br><br>Many of the HP blades don't come with optical drives. You have to mount CD/DVD images via the blade software. Which seemed to only work reliably on IE6 on XP. OK so maybe we should have tried it with more browsers than IE8, but who has time? Especially see below why you don't have time:<br><br>So far I haven't seen any mention in HP documentation that the transfer rate of the mounted CD/DVD image (or folder) from your laptop through the iLO software to a blade that you're trying to install stuff on is a measly 500 kilobytes per second. But that's what we encountered in practice.<br><br>Yes you can attach the blade network to another network and install it over the network, but if you can do that, doesn't that make the fancy HP iLO stuff less important? You might as well just get a network KVM right? That KVM will work with Dell/IBM/WhiteBoxServer so you can tell HP to fuck off and die if you want.<br><br>Which brings us to the next important point: Fancy Vendor X enclosures will only work with current and near-future Vendor X blades. In 3-5 years' time they might start charging you a lot more to buy new but obsolete Vendor X blades. Whoopee. What are the odds you can use the latest blades in your old enclosure? So you pay a premium for vendor lock-in and to be screwed in the future.<br><br>I doubt Google, etc. use blades. And they seem to be able to manage hundreds of thousands of servers. OK so most of the servers might be running the same image/thing... So that makes it easy.<br><br>BUT if you have very different servers, do you really want them in a few blade enclosures? Then when you need to service that enclosure you'd be bringing down all the different blades...</htmltext>
<tokenext>In my uninformed opinion , blades are mainly a way for hardware vendors to extract more money from suckers.They probably have niche uses .
But when you get to the details they 're not so great .
Yes the HP iLO stuff is cool etc... When it works.Many of the HP blades do n't come with optical drives .
You have to mount CD/DVD images via the blade software .
Which seemed to only work reliably on IE6 on XP .
OK so maybe we should have tried it with more browsers , than IE8 , but who has time ?
Especially see below why you do n't have time : So far I have n't seen any mention in HP documentation that the transfer rate of the mounted CD/DVD image ( or folder ) between your laptop to the iLO software to a blade that you 're trying to install stuff on is a measly 500 kilobytes per second .
But that 's what we encountered in practice.Yes you can attach the blade network to another network and install it over the network , but if you can do that , does n't that make the fancy HP iLO stuff less important ?
You might as well just get a network KVM right ?
That KVM will work with Dell/IBM/WhiteBoxServer so you can tell HP to fuck off and die if you want.Which brings us to the next important point : Fancy Vendor X enclosures will only work with current and near future Vendor X blades .
In 3-5 years time they might start charging you a lot more to buy new but obsolete Vendor X blades .
Whoopee. What are the odds you can use the latest blades in your old enclosure ?
So you pay a premium for vendor lock-in and to be screwed in the future.I doubt Google , etc use blades .
And they seem to be able to manage hundreds of thousands of servers .
OK so most of the servers might be running the same image/thing... So that makes it easy.BUT if you are having very different servers do you really want them in a few blade enclosures ?
Then when you need to service that enclosure you 'd be bringing down all the different blades.. .</tokentext>
<sentencetext>In my uninformed opinion, blades are mainly a way for hardware vendors to extract more money from suckers.They probably have niche uses.
But when you get to the details they're not so great.
Yes the HP iLO stuff is cool etc... When it works.Many of the HP blades don't come with optical drives.
You have to mount CD/DVD images via the blade software.
Which seemed to only work reliably on IE6 on XP.
OK so maybe we should have tried it with more browsers, than IE8, but who has time?
Especially see below why you don't have time:So far I haven't seen any mention in HP documentation that the transfer rate of the mounted CD/DVD image (or folder) between your laptop to the iLO software to a blade that you're trying to install stuff on is a measly 500 kilobytes per second.
But that's what we encountered in practice.Yes you can attach the blade network to another network and install it over the network, but if you can do that, doesn't that make the fancy HP iLO stuff less important?
You might as well just get a network KVM right?
That KVM will work with Dell/IBM/WhiteBoxServer so you can tell HP to fuck off and die if you want.Which brings us to the next important point: Fancy Vendor X enclosures will only work with current and near future Vendor X blades.
In 3-5 years time they might start charging you a lot more to buy new but obsolete Vendor X blades.
Whoopee. What are the odds you can use the latest blades in your old enclosure?
So you pay a premium for vendor lock-in and to be screwed in the future.I doubt Google, etc use blades.
And they seem to be able to manage hundreds of thousands of servers.
OK so most of the servers might be running the same image/thing... So that makes it easy.BUT if you are having very different servers do you really want them in a few blade enclosures?
Then when you need to service that enclosure you'd be bringing down all the different blades...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191840</id>
	<title>User-base, Teirs, and Planning</title>
	<author>Xeleema</author>
	<datestamp>1258880580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Two points have already been mentioned before;</p><p> <b>1.</b> What <i>kind</i> of users are we talking here?  Globally diverse store managers? Scientists? Wall Street? Web developers? Each one of these groups will have different ideas of what "Reliability" means. Which brings me to;</p><p> <b>2.</b> Tiers. What are your critical (never-down) services?  Typically this translates to cost; how much will a company-wide email outage cost you per day? Hour? Minute?  DNS/DHCP/WINS (shudder) and all your "infrastructure" services will probably fall under this category. But which Applications do you provide, and what are the users' expectations?
This is a great chance to start having "User Group" meetings with the various sections of your user-base, and start fleshing-out requirements.</p><p> <b>3.</b> Plans.  Everyone with a tie will love to see a black-and-white document outlining things like Backups, Disaster Recovery, Risk Analysis, Acceptable Use Policy, and so forth.  However, most small networks (10-20 servers) don't have anything like this.  Heck, even if it's "boot the old systems", it's still a plan.  Write one up, use a template, Google has a few dozen last I checked.</p><p> <b>4.</b> Migration Plan.  One thing you can bet on; if *anyone* non-IT has had free rein inside the network, there will be little files, scripts, cron jobs, applications, firewall settings, etc that have been tweaked and long forgotten.  Before you "Decommission" anything, make sure it survives a reboot, and make an image of the filesystems.</p><p>Word from the wise; set up a Linux box somewhere with a good chunk of space and throw all of them on there, then make sure that system is backed up. Try to avoid mentioning this to anyone, as it increases the "awe" factor and cuts down on unnecessary retrieval requests.</p><p> <b>5.</b> Blog, wiki-fy, etc. *Anything* that the users can take a look at and "see" what you're doing.  Being an I.T. techie is like being a ninja;  If you do your job right, no one even knows you're there.  But screw up, and everyone will have a torch and pitchfork with your name on it.  Sometimes having things out in the open will negate that (maybe they just bring flashlights, instead of actual torches).</p><p> <b>6.</b> Go Slow.  Take a look at what servers you have, inventory what all is running on them, and guesstimate how long it would take to set that up.  Then multiply that by a Scotty factor and state that in your paperwork.</p><p>Remember, small-time IT guys seldom leave peacefully, they're typically ridden out on a rail.  
(This coming from someone who's been the exception to that, narrowly at times).</p></htmltext>
<tokenext>Two points have already been mentioned before ; 1 .
What kind of users are we talking here ?
Globally diverse store managers ?
Scientists ? Wall Street ?
Web developers ?
Each one of these groups will have different ideas of what " Reliability " means .
Which brings me to ; 2 .
Tiers. What are your critical ( never-down ) services ?
Typically this translates to cost ; how much will a company-wide email outage cost you per day ?
Hour ? Minute ?
DNS/DHCP/WINS ( shudder ) and all your " infrastructure " services will probably fall under this category .
But which Applications do you provide , and what are the users expectations ?
This is a great chance to start having " User Group " meetings with the various sections of your user-base , and start fleshing-out requirements .
3. Plans .
Everyone with a tie will love to see a black-and-white document outlining things like Backups , Disaster Recovery , Risk Analysis , Acceptable Use Policy , and so forth .
However , most small networks ( 10-20 servers ) do n't have anything like this .
Heck , even if it 's " boot the old systems " , it 's still a plan .
Write one up , use a template , Google has a few dozen last I checked .
4. Migration Plan .
One thing you can bet on ; if * anyone * non-IT has had free reign inside the network , there will be little files , scripts , cron jobs , applications , firewall settings , etc that have been tweaked and long forgotten .
Before you " Decommission " anything , make sure it survives a reboot , and make an image of the filesystems.Word from the wise ; setup a Linux box somewhere with a good chunk of space and throw all of them on there , then make sure that system is backed up .
Try to avoid mentioning this to anyone , as it increases the " awe " factor and cuts down on unnecessary retrieval requests 5 .
Blog , wiki-fy , etc .
* Anything * that the users can take a look at and " see " what you 're doing .
Being an I.T .
techie is like being a ninja ; If you do your job right , no one even knows you 're there .
But screw up , and everyone will have a torch and pitchfork with your name on it .
Sometimes having things out in the open will negate that ( maybe they just bring flashlights , instead of actual torches ) .
6. Go Slow .
Take a look at what servers you have , inventory what all is running on them , and guestimate how long it would take to set that up .
Then multiply that by a Scotty factor and state that in your paperwork.Remember , small-time IT guys seldom leave peacefully , they 're typically ridden out on a rail .
( This coming from someone who 's been the exception to that , narrowly at times ) .</tokentext>
<sentencetext>Two points have already been mentioned before; 1.
What kind of users are we talking here?
Globally diverse store managers?
Scientists? Wall Street?
Web developers?
Each one of these groups will have different ideas of what "Reliability" means.
Which brings me to; 2.
Tiers. What are your critical (never-down) services?
Typically this translates to cost; how much will a company-wide email outage cost you per day?
Hour? Minute?
DNS/DHCP/WINS (shudder) and all your "infrastructure" services will probably fall under this category.
But which Applications do you provide, and what are the users' expectations?
This is a great chance to start having "User Group" meetings with the various sections of your user-base, and start fleshing-out requirements.
3. Plans.
Everyone with a tie will love to see a black-and-white document outlining things like Backups, Disaster Recovery, Risk Analysis, Acceptable Use Policy, and so forth.
However, most small networks (10-20 servers) don't have anything like this.
Heck, even if it's "boot the old systems", it's still a plan.
Write one up, use a template, Google has a few dozen last I checked.
4. Migration Plan.
One thing you can bet on; if *anyone* non-IT has had free rein inside the network, there will be little files, scripts, cron jobs, applications, firewall settings, etc that have been tweaked and long forgotten.
Before you "Decommission" anything, make sure it survives a reboot, and make an image of the filesystems.
Word from the wise; set up a Linux box somewhere with a good chunk of space and throw all of them on there, then make sure that system is backed up.
Try to avoid mentioning this to anyone, as it increases the "awe" factor and cuts down on unnecessary retrieval requests.
5.
Blog, wiki-fy, etc.
*Anything* that the users can take a look at and "see" what you're doing.
Being an I.T. techie is like being a ninja;  If you do your job right, no one even knows you're there.
But screw up, and everyone will have a torch and pitchfork with your name on it.
Sometimes having things out in the open will negate that (maybe they just bring flashlights, instead of actual torches).
6. Go Slow.
Take a look at what servers you have, inventory what all is running on them, and guesstimate how long it would take to set that up.
Then multiply that by a Scotty factor and state that in your paperwork.
Remember, small-time IT guys seldom leave peacefully, they're typically ridden out on a rail.
(This coming from someone who's been the exception to that, narrowly at times).</sentencetext>
</comment>
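[Editor's note] Point 4 above (hunting down forgotten cron jobs, scripts, and tweaks before decommissioning a box) can be partly mechanised. A hedged sketch for a Linux machine follows; the path list is a starting point rather than an exhaustive audit, and output locations are arbitrary examples.

```shell
# Sketch: catalogue likely hand-tweaked state on a box before decommissioning it.
# Run as root for complete results; missing tools are silently skipped.
out="/tmp/decommission-audit-$(hostname).txt"
{
    echo "== listening services =="
    ss -tlnp 2>/dev/null || netstat -tlnp 2>/dev/null
    echo "== cron jobs =="
    for u in $(cut -d: -f1 /etc/passwd); do
        crontab -l -u "$u" 2>/dev/null     # per-user crontabs
    done
    ls /etc/cron.* /var/spool/cron 2>/dev/null
    echo "== config modified in the last year =="
    find /etc /usr/local -type f -mtime -365 2>/dev/null
} > "$out"
echo "audit written to $out"
```

Run it on each machine before imaging the filesystems, and keep the reports alongside the images on the backed-up "parking" box the comment describes.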
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30207058</id>
	<title>Re:Why?</title>
	<author>Flere Imsaho</author>
	<datestamp>1258975380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So when you need to reboot your POS Windows print spooler, you take down all the other services too? Virtualise and separate out services - to a certain extent.</p><p>With a cluster of VM hosts you get hardware redundancy across all VMs. Running multiple VMs on one host is cheap and efficient, but it's too all-your-eggs-in-one-basket for me.</p><p>If your existing physical servers are buckling under the load,  initially you can P2V your existing servers and run them as VMs on your hosts. That way you get a pain-free hardware upgrade.  Then plan for separation of services and rationalising the number of DB servers, etc. Of course you have to balance the cost of server and VM licences against the benefits of distributing servers (assuming Windows and VMWare here).</p><p>As usual, it's a trade off between cost, risk and functionality.</p><p>No need for a Windows Deployment server, we're using Fog with great results.<br><a href="http://www.fogproject.org/" title="fogproject.org" rel="nofollow">http://www.fogproject.org/</a> [fogproject.org]</p></htmltext>
<tokenext>So when you need to reboot your POS Windows print spooler , you take down all the other services too ?
Visualise and separate out services - to a certain extent.With a cluster of VM hosts you get hardware redundancy across all VMs .
Running multiple VMs on one host is cheap and efficient , but it 's too all-your-eggs-in-one-basket for me.If your existing physical servers are buckling under the load , initially you can P2V your existing servers and run them as VMs on your hosts .
That way you get a pain free hardware upgrade .
Then plan for separation of services and rationalising the number of DB servers , etc .
Of course you have to balance the cost of server an VM licences against the benefits of distributing servers ( assuming Windows and VMWare here ) .As usual , it 's a trade off between cost , risk and functionality.No need for a Windows Deployment server , we 're using Fog with great results.http : //www.fogproject.org/ [ fogproject.org ]</tokentext>
<sentencetext>So when you need to reboot your POS Windows print spooler, you take down all the other services too?
Visualise and separate out services - to a certain extent.With a cluster of VM hosts you get hardware redundancy across all VMs.
Running multiple VMs on one host is cheap and efficient, but it's too all-your-eggs-in-one-basket for me.If your existing physical servers are buckling under the load,  initially you can P2V your existing servers and run them as VMs on your hosts.
That way you get a pain free hardware upgrade.
Then plan for separation of services and rationalising the number of DB servers, etc.
Of course you have to balance the cost of server an VM licences against the benefits of distributing servers (assuming Windows and VMWare here).As usual, it's a trade off between cost, risk and functionality.No need for a Windows Deployment server, we're using Fog with great results.http://www.fogproject.org/ [fogproject.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872</parent>
</comment>
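[Editor's note] A common building block for the per-service fail-over discussed throughout this thread is VRRP: two boxes (or VMs pinned to different hosts) share a virtual IP, and the backup takes over within seconds when the master dies. A minimal keepalived sketch is below; the interface, router id, password, and virtual IP are invented for illustration, and the backup node would use `state BACKUP` with a lower priority.

```
# /etc/keepalived/keepalived.conf on the MASTER node.
# Interface, router id, password, and the virtual IP are example values.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100              # highest priority wins the election
    advert_int 1              # advertise every second
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.168.1.10/24       # the service IP clients actually use
    }
}
```

Pointing clients at the virtual IP (rather than either box's real address) is what makes the fail-over invisible to users.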
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192562</id>
	<title>Re:Cloud Computing(TM)</title>
	<author>ani23</author>
	<datestamp>1258896240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Here's what I would recommend. Not affiliated with any company either way, but it's what worked for us.

If you have money to spend:
Go with HP servers and EqualLogic or LeftHand SAN.
VMware ESX.

If budget is tight (I would do this either way):
Get Dell servers as hosts (R710s with Nehalems; they work better than the older quad procs).
Sign up for Sun Startup Essentials; you get more than 30% off retail.
Get their 2510 iSCSI SANs or 2530 SAS SANs; they go for 8 grand for 6 TB.
Have two copies in two locations running VMware ESX.</htmltext>
<tokenext>here what i would reccomend .
not affiliated to any company in either way but its what worked for us .
If you have money to spend Go with HP servers and Equaloggic or left hand San .
Vmware ESX If Budget is tight ( I would do this either ways ) get Dell Servers as hosts .
( R710 with nehelams they work better than quad proc older procs ) Sign up for sun startup essentials .
you get more than 30 \ % off retail Get their 2510 iSCSI SANS or 2530 SAS sans .
they go for 8grand for 6 TB Have two copies in two locations running Vmware ESX</tokentext>
<sentencetext>here what i would reccomend.
not affiliated to any company in either way but its what worked for us.
If you have money to spend
Go with HP servers and Equaloggic or left hand San.
Vmware ESX

If Budget is tight ( I would do this either ways)
get Dell Servers as hosts.
(R710 with nehelams they work better than quad proc older procs)
Sign up for sun startup essentials.
you get more than 30\% off retail
Get their 2510 iSCSI SANS or 2530 SAS sans.
they go for 8grand for 6 TB
Have two copies in two locations running Vmware ESX</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190206</id>
	<title>Re:Trying to make your mark, eh?</title>
	<author>rantingkitten</author>
	<datestamp>1258815780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Why does he need virtualisation for most of that?  Just run multiple services on a single machine.  It's not like dhcp and dns are all that resource intensive -- put both services on a machine, configure them, and start them.  What's the advantage of virtualising that?  Sounds like a lot of unnecessary overhead to me.<br>
<br>
Depending on how heavy the load is, that same machine could probably handle postfix, apache, and some kinda ftp server too.  That's more or less what you said anyway, but I don't get why you think it requires virtualisation.  If a service starts misbehaving you just restart that service instead of rebooting the virtual machine.<br>
<br>
Although, for 150 people, a WRT router running non-crap firmware (e.g., ddwrt or tomato) would probably suffice for dns and dhcp.  There's a practically off-the-shelf solution for fifty bucks instead of mucking around with higher-end hardware and virtual machines.</htmltext>
<tokenext>Why does he need virtualisation for most of that ?
Just run multiple services on a single machine .
It 's not like dhcp and dns are all that resources intensive -- put both services on a machine , configure them , and start them .
What 's the advantage of virtualising that ?
Sounds like a lot of unnecessary overhead to me .
Depending on how heavy the load is , that same machine could probably handle postfix , apache , and some kinda ftp server too .
That 's more or less what you said anyway , but I do n't get why you think it requires virtualisation .
If a service starts misbehaving you just restart that service instead of rebooting the virtual machine .
Although , for 150 people , a WRT router running non-crap firmware ( e.g. , ddwrt or tomato ) would probably suffice for dns and dhcp .
There 's a practically off-the-shelf solution for fifty bucks instead of mucking around with higher-end hardware and virtual machines .</tokentext>
<sentencetext>Why does he need virtualisation for most of that?
Just run multiple services on a single machine.
It's not like dhcp and dns are all that resource-intensive -- put both services on a machine, configure them, and start them.
What's the advantage of virtualising that?
Sounds like a lot of unnecessary overhead to me.
Depending on how heavy the load is, that same machine could probably handle postfix, apache, and some kinda ftp server too.
That's more or less what you said anyway, but I don't get why you think it requires virtualisation.
If a service starts misbehaving you just restart that service instead of rebooting the virtual machine.
Although, for 150 people, a WRT router running non-crap firmware (e.g., ddwrt or tomato) would probably suffice for dns and dhcp.
There's a practically off-the-shelf solution for fifty bucks instead of mucking around with higher-end hardware and virtual machines.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189032</id>
	<title>don't  forget the network as well like the switche</title>
	<author>Joe The Dragon</author>
	<datestamp>1258806000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Don't forget the network as well, like the switches and maybe the cables. Also, if you find any hubs, get rid of them ASAP.</p><p>Also, the servers should be linked to each other with gig-e.</p></htmltext>
<tokenext>do n't forget the network as well like the switches and maybe the cables as well .
Also if you find any hubs get rid of then ASAP.also for the servers they should be linked to each other with gig-e .</tokentext>
<sentencetext>Don't forget the network as well, like the switches and maybe the cables.
Also, if you find any hubs, get rid of them ASAP.
Also, the servers should be linked to each other with gig-e.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194568</id>
	<title>Re:Get someone experienced on the boat!</title>
	<author>Anonymous</author>
	<datestamp>1258913940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Agreed.  Hire an enterprise architect to consult for a fixed amount of work, say 3-4 weeks. There is no substitute for experience.  At the end of that engagement, you should have an overall plan for all your upgrades for the main location and one or more satellite locations.</p><p>This is too complex for Slashdot answers, and you won't find the answer in a book, or ten.</p><p>Anyone who claims this is easy enough to do yourself is talking out their ass.  As an EA, I know I can't know everything related to this AND your specific situation.</p></htmltext>
<tokenext>Agreed .
Hire an enterprise architect for a fixed amount of work to consult .
Say 3-4 weeks .
There is no substitute for experience .
At the end of that engagement , you should have an overall plan for all your upgrades for the main location and 1 or more satellite locations.This is too complex for slashdot answers and you wo n't find the answer in a book or 10.Anyone that claims this is easy enough to do yourself is talking out their ass .
As an EA , I know I ca n't know everything related to this AND your specific situation .</tokentext>
<sentencetext>Agreed.
Hire an enterprise architect for a fixed amount of work to consult.
Say 3-4 weeks.
There is no substitute for experience.
At the end of that engagement, you should have an overall plan for all your upgrades for the main location and one or more satellite locations.
This is too complex for Slashdot answers, and you won't find the answer in a book, or ten.
Anyone who claims this is easy enough to do yourself is talking out their ass.
As an EA, I know I can't know everything related to this AND your specific situation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188934</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189172</id>
	<title>Don't forget hosting</title>
	<author>Jon.Burgin</author>
	<datestamp>1258807320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Why have the headaches? Why not have it hosted? Companies like Rackspace make it easy and simple. You can also use their cloud services: real cheap and easy setup, a server in less than 5 minutes, and you only pay for the memory and bandwidth you need. Need more? Just a few mouse clicks away.</htmltext>
<tokenext>Why have the headaches , why not have it hosted companies like Rackspace make it so easy and simple .
You can also use there cloud services real cheap and easy setup a server in less than 5 minutes and only pay for the memory bandwidth you need , need more ?
just a few mouse clicks away .</tokentext>
<sentencetext>Why have the headaches? Why not have it hosted? Companies like Rackspace make it easy and simple.
You can also use their cloud services: real cheap and easy setup, a server in less than 5 minutes, and you only pay for the memory and bandwidth you need. Need more?
Just a few mouse clicks away.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191740</id>
	<title>Re:Latest Trends</title>
	<author>Z00L00K</author>
	<datestamp>1258921740000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
<htmltext><p>Any server that can offer a RAID disk solution would be fine. Blade servers seem to be overkill for most solutions - and they are expensive.</p><p>And then run DFS (Distributed File System) or similar to have replication between sites for the data. This will make things easier. And if you have well-working replication you can have the backup system located at the head office and don't have to worry about running around swapping tapes at the local branch offices.</p><p>Some companies tend to centralize email around a central mail server. This has its pros and cons. The disadvantage is that if the head office goes down, everyone is without email service. But the configuration can be more complicated if each branch office has its own.</p><p>It's also hard to tell how best to stitch together a solution for a specific case without knowing how the company in question works. There is no golden solution that works for all companies.</p><p>The general idea is however that DNS and DHCP shall be local. If they aren't, then the local office will be dead as a dodo as soon as there is a glitch in the net. Anyone not providing local DNS and DHCP should be brought out of the organization as soon as possible. And DNS and DHCP don't require much maintenance either, so they won't put much workload on the system administration.</p><p>There are companies (big ones) that run central DHCP and DNS, but glitches can cause all kinds of trouble - like handing out the same IP address to a machine in Holland and one in Sweden simultaneously (yes - it has happened in reality, no joke) - and the work required to figure out what's wrong when multiple sites are involved in an IP address conflict can cost a lot. And if you run Windows you should have roaming profiles configured and a local server on each site where the profiles are stored.</p><p>Local WWW and FTP servers can work, but watch out, since you have to check whether they are for internal or external use. 
Do you really need a local WWW and FTP server for each site? I would say - no. And those servers should be on a DMZ. It can of course be one server servicing both WWW and FTP. The big issue, especially with FTP servers for dedicated external users, is the maintenance of the accounts on those servers. Obsolete FTP server accounts are a security risk.</p><p>And if you run Windows I would really suggest that you set up WDS (Windows Deployment Services). This will allow your PC clients to do a network boot and be reinstalled from an image. Saves a lot of time and headache.</p><p>And today many users have laptop computers, so hard disk encryption should be considered to limit the risk of business-critical data going into the wrong hands. <a href="http://www.truecrypt.org/" title="truecrypt.org">Truecrypt</a> [truecrypt.org] is one alternative that I have found works really well. But don't run it on the servers.</p></htmltext>
<tokenext>Any server that can offer a RAID disk solution would be fine .
Blade servers seems to be an overkill for most solutions - and they are expensive.And then run DFS ( Distributed File System ) or similar to have replication between sites for the data .
This will make things easier .
And if you have a well working replication you can have the backup system located at the head office and do n't have to worry about running around swapping tapes at the local branch offices.Some companies tends to centralize email around a central mail server .
This has it 's pros and cons .
The disadvantage is that if the head office goes down everyone is without email service .
But the configuration can be more complicated if each branch office has it 's own.It 's also hard to tell how to best stitch together a solution for a specific case without knowing how the company in question works .
There is no golden solution that works for all companies.The general idea is however that DNS and DHCP shall be local .
If they are n't then the local office will be dead as a dodo as soon as there is a glitch in the net .
Anyone not providing local DNS and DHCP should be brought out of the organization as soon as possible .
And DNS and DHCP does n't require much maintenance either , so they wo n't put much workload on the system administration.There are companies ( big ones ) that run central DHCP and DNS , but glitches can cause all kind of trouble - like providing the same IP address to a machine in Holland and in Sweden simultaneously ( yes - it has happened in reality , no joke ) - and the work required to figure out what 's wrong when multiple sites are involved in an IP address conflict can cost a lot .
And if you run Windows you should have roaming profiles configured and a local server on each site where the profiles are stored.Local WWW and FTP servers - can work , but watch out too since you have to check out if it 's for internal or external use .
Do you really need a local WWW and FTP server for each site ?
I would say - no .
And those servers should be on a DMZ .
It can of course be one server servicing both WWW and FTP .
The big issue with especially FTP servers if they are for dedicated external users is the maintenance of the accounts on those servers .
Obsolete FTP server accounts are a security risk.And if you run Windows I would really suggest that you do set up WDS ( Windows Deployment Server ) .
This will allow your PC clients to do a network boot and reinstall them from an image .
Saves a lot of time and headache.And today many users have laptop computers , so hard disk encryption should be considered to limit the risk of having business critical data going into the wrong hands .
Truecrypt [ truecrypt.org ] is one alternative that I have found that works really well .
But do n't run it on the servers .</tokentext>
<sentencetext>Any server that can offer a RAID disk solution would be fine.
Blade servers seem to be overkill for most solutions - and they are expensive.
And then run DFS (Distributed File System) or similar to have replication between sites for the data.
This will make things easier.
And if you have well-working replication you can have the backup system located at the head office and don't have to worry about running around swapping tapes at the local branch offices.
Some companies tend to centralize email around a central mail server.
This has its pros and cons.
The disadvantage is that if the head office goes down, everyone is without email service.
But the configuration can be more complicated if each branch office has its own.
It's also hard to tell how best to stitch together a solution for a specific case without knowing how the company in question works.
There is no golden solution that works for all companies.
The general idea is however that DNS and DHCP shall be local.
If they aren't, then the local office will be dead as a dodo as soon as there is a glitch in the net.
Anyone not providing local DNS and DHCP should be brought out of the organization as soon as possible.
And DNS and DHCP don't require much maintenance either, so they won't put much workload on the system administration.
There are companies (big ones) that run central DHCP and DNS, but glitches can cause all kinds of trouble - like handing out the same IP address to a machine in Holland and one in Sweden simultaneously (yes - it has happened in reality, no joke) - and the work required to figure out what's wrong when multiple sites are involved in an IP address conflict can cost a lot.
And if you run Windows you should have roaming profiles configured and a local server on each site where the profiles are stored.
Local WWW and FTP servers can work, but watch out, since you have to check whether they are for internal or external use.
Do you really need a local WWW and FTP server for each site?
I would say - no.
And those servers should be on a DMZ.
It can of course be one server servicing both WWW and FTP.
The big issue, especially with FTP servers for dedicated external users, is the maintenance of the accounts on those servers.
Obsolete FTP server accounts are a security risk.
And if you run Windows I would really suggest that you set up WDS (Windows Deployment Services).
This will allow your PC clients to do a network boot and be reinstalled from an image.
Saves a lot of time and headache.
And today many users have laptop computers, so hard disk encryption should be considered to limit the risk of business-critical data going into the wrong hands.
Truecrypt [truecrypt.org] is one alternative that I have found works really well.
But don't run it on the servers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189090</id>
	<title>What 150 users?</title>
	<author>painehope</author>
	<datestamp>1258806360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
<htmltext><p>
I'd say that everyone has mentioned the big-picture points already, except for one: what kind of users?
</p><p>
150 file clerks or accountants, and you'll spend more time worrying about the printer that the CIO's secretary <i>just had to have</i>, which conveniently doesn't have reliable drivers or documentation, even if it had whatever neat feature she wanted and now can't use.
</p><p>
150 programmers can put a mild to heavy load on your infrastructure, depending on what kind of software they're developing and testing (more a function of what kind of environment they're coding for and how much gear they need to test it).
</p><p>
150 programmers and processors of data (financial, medical, geophysical, whatever) can put an extreme load on your infrastructure. Like to the point where it's easier to ship tape media internationally than fuck around with a stable interoffice file transfer solution (I've seen it as a common practice - "hey, you're going to the XYZ office, we're sending a crate of tapes along with you so you can load it onto their fileservers").
</p><p>
Define your environment, then you know your requirements; find the solutions that meet those requirements, then try to get a PO for it. Have fun.</p></htmltext>
<tokenext>I 'd say that everyone has mentioned that big picture points already , except for one : what kind of users ?
150 file clerks or accountants and you 'll spend more time worrying about the printer that the CIO 's secretary just had to have which conveniently does n't have reliable drivers or documentation , even if it had what neat feature that she wanted and now ca n't use .
150 programmers can put a mild to heavy load on your infrastructure , depending on what kind of software they 're developing and testing ( more a function of what kind of environment are they coding for and how much gear they need to test it ) .
150 programmers and processors of data ( financial , medical , geophysical , whatever ) can put an extreme load on your infrastructure .
Like to the point where it 's easier to ship tape media internationally than fuck around with a stable interoffice file transfer solution ( I 've seen it as a common practice - " hey , you 're going to the XYZ office , we 're sending a crate of tapes along with you so you can load it onto their fileservers " ) .
Define your environment , then you know your requirements , find the solutions that meet those requirements , then try to get a PO for it .
Have fun .</tokentext>
<sentencetext>
I'd say that everyone has mentioned the big-picture points already, except for one: what kind of users?
150 file clerks or accountants, and you'll spend more time worrying about the printer that the CIO's secretary just had to have, which conveniently doesn't have reliable drivers or documentation, even if it had whatever neat feature she wanted and now can't use.
150 programmers can put a mild to heavy load on your infrastructure, depending on what kind of software they're developing and testing (more a function of what kind of environment they're coding for and how much gear they need to test it).
150 programmers and processors of data (financial, medical, geophysical, whatever) can put an extreme load on your infrastructure.
Like to the point where it's easier to ship tape media internationally than fuck around with a stable interoffice file transfer solution (I've seen it as a common practice - "hey, you're going to the XYZ office, we're sending a crate of tapes along with you so you can load it onto their fileservers").
Define your environment, then you know your requirements; find the solutions that meet those requirements, then try to get a PO for it.
Have fun.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189312
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191918
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30218760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188904
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194858
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194018
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189168
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192148
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198970
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30246478
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189002
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191876
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192368
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192562
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30196422
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194164
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190206
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194568
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191740
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189090
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191912
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198042
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190332
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189012
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189150
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192518
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30210454
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189448
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193994
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189012
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194894
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189352
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189948
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190528
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189766
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200628
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188828
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190332
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189916
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191050
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192522
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195376
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193190
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188994
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200230
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190776
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200682
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191218
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189150
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30207220
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30210464
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193414
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190364
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190926
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30202074
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189160
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30207058
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190294
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189152
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190766
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_21_2234216_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191066
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189496
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190286
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189550
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190280
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189168
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192148
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189376
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189738
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194018
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192522
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30195376
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190528
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190686
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188876
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192368
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30210454
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193414
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189946
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191840
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188994
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200230
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189002
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191876
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188798
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189000
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193190
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30193994
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190776
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200682
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189152
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191050
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189352
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198042
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192562
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188828
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189090
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191912
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190508
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194308
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189076
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190294
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189948
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191896
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189250
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189312
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191918
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30218760
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189448
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190364
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190206
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190766
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30246478
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189092
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30198970
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30210464
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190332
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191374
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194192
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189916
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189766
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30200628
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189012
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192760
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194894
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189174
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189032
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188872
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30196422
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30190926
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189160
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191218
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30207058
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188904
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194858
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191038
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189150
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30207220
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30192518
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188836
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191740
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30191066
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194164
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30202074
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188934
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30194568
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189016
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189144
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30188886
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_21_2234216.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_21_2234216.30189458
</commentlist>
</conversation>
