<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_09_1953241</id>
	<title>How Do You Evaluate a Data Center?</title>
	<author>ScuttleMonkey</author>
	<datestamp>1257758820000</datestamp>
	<htmltext>mpapet writes to ask about the ins and outs of datacenter evaluation.  Beyond the simpler questions of physical access control, connectivity, power redundancy/capacity, and SLA review, what other questions are important to ask when evaluating a data center? What data centers have people been happy with? What horror stories have people lived through with those that didn't make the cut?</htmltext>
<tokentext>mpapet writes to ask about the ins and outs of datacenter evaluation .
Beyond the simpler questions of physical access control , connectivity , and power redundancy/capacity and SLA review , what other questions are important to ask when evaluating a data center ?
What data centers have people been happy with ?
What horror stories have people lived through with those that did n't make the cut ?</tokentext>
<sentencetext>mpapet writes to ask about the ins and outs of datacenter evaluation.
Beyond the simpler questions of physical access control, connectivity, power redundancy/capacity, and SLA review, what other questions are important to ask when evaluating a data center?
What data centers have people been happy with?
What horror stories have people lived through with those that didn't make the cut?</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041526</id>
	<title>sell:shoes,handbags,T-shirt,Jeans,sunglass</title>
	<author>Anonymous</author>
	<datestamp>1257779820000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>In order to meet the Thanksgiving holiday, this site hereby release Thanksgiving gift, that is, gift, our web site is <a href="http://www.coolforsale.com/" title="coolforsale.com" rel="nofollow">http://www.coolforsale.com/</a> [coolforsale.com]   nike air max jordan shoes, coach,gucci,lv,dg,ed hardy handbags, Polo/Ed Hardy/Lacoste/Ca/A&amp;F<nobr> <wbr></nobr>,T-shirt welcome new and old customers come to order.</htmltext>
<tokentext>In order to meet the Thanksgiving holiday , this site hereby release Thanksgiving gift , that is , gift , our web site is http : //www.coolforsale.com/ [ coolforsale.com ] nike air max jordan shoes , coach,gucci,lv,dg,ed hardy handbags , Polo/Ed Hardy/Lacoste/Ca/A&amp;F ,T-shirt welcome new and old customers come to order .</tokentext>
<sentencetext>In order to meet the Thanksgiving holiday, this site hereby release Thanksgiving gift, that is, gift, our web site is http://www.coolforsale.com/ [coolforsale.com]   nike air max jordan shoes, coach,gucci,lv,dg,ed hardy handbags, Polo/Ed Hardy/Lacoste/Ca/A&amp;F ,T-shirt welcome new and old customers come to order.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038744</id>
	<title>Re:Just off the top of my head</title>
	<author>Triela</author>
	<datestamp>1257763380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext>Once you have assessed these technical points to your satisfaction, I think customer support's ability to communicate issues to you as they arise is the final bridge.  Every datacenter will at the very least experience minor problems from time to time, and if you're not able to speak directly with the techs working the problems or if first-line customer support does not have ready access to the details of the resolution process, it sure is frustrating to be left in the dark in the meantime.</htmltext>
<tokentext>Once you have assessed these technical points to your satisfaction , I think customer support 's ability to communicate issues to you as they arise is the final bridge .
Every datacenter will at the very least experience minor problems from time to time , and if you 're not able to speak directly with the techs working the problems or if first-line customer support does not have ready access to the details of the resolution process , it sure is frustrating to be left in the dark in the meantime .</tokentext>
<sentencetext>Once you have assessed these technical points to your satisfaction, I think customer support's ability to communicate issues to you as they arise is the final bridge.
Every datacenter will at the very least experience minor problems from time to time, and if you're not able to speak directly with the techs working the problems or if first-line customer support does not have ready access to the details of the resolution process, it sure is frustrating to be left in the dark in the meantime.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039432</id>
	<title>PUE - Power Usage Effectiveness</title>
	<author>SuperQ</author>
	<datestamp>1257766500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Many good comments, but nobody is asking what PUE a datacenter gets.  Bad PUE turns into lower rack deliverable power and more expensive power when you do get it.   I would have a hard time picking a datacenter that didn't have tight closed loop hot aisle cooling.</p></htmltext>
<tokentext>Many good comments , but nobody is asking what PUE a datacenter gets .
Bad PUE turns into lower rack deliverable power and more expensive power when you do get it .
I would have a hard time picking a datacenter that did n't have tight closed loop hot isle cooling .</tokentext>
<sentencetext>Many good comments, but nobody is asking what PUE a datacenter gets.
Bad PUE turns into lower rack deliverable power and more expensive power when you do get it.
I would have a hard time picking a datacenter that didn't have tight closed loop hot aisle cooling.</sentencetext>
</comment>
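Since PUE comes up in the comment above without a definition: Power Usage Effectiveness is total facility power divided by the power that actually reaches the IT equipment, so 1.0 is the ideal and anything above it is cooling/distribution overhead. A minimal sketch, with made-up kW figures for illustration:

```python
# PUE = total facility power / IT equipment power (1.0 is the ideal).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1500 kW at the utility meter, 1000 kW reaching the racks.
print(pue(1500.0, 1000.0))  # 1.5
```

The lower the PUE, the more of the power you are billed for is deliverable at the rack, which is exactly the cost effect the comment describes.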
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040598</id>
	<title>Why do you need a data center?</title>
	<author>cryfreedomlove</author>
	<datestamp>1257772740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Seriously.  Use AWS for your custom apps.  Outsource your email and other G&amp;A aspects of running your company.  Data centers are for dinosaurs.  All of the cool kids are in the cloud.</htmltext>
<tokentext>Seriously .
Use AWS for your custom apps .
Outsource your email and other G&amp;A aspects of running your company .
Data centers are for dinosaurs .
All of the cool kids are in the cloud .</tokentext>
<sentencetext>Seriously.
Use AWS for your custom apps.
Outsource your email and other G&amp;A aspects of running your company.
Data centers are for dinosaurs.
All of the cool kids are in the cloud.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038920</id>
	<title>Real Simple</title>
	<author>Anonymous</author>
	<datestamp>1257764100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Power from the ceiling, data under the floor. </p><p>
The reason is data centre floods don't occur very often but when they do the d.c can tolerate the data cable being in water but when the power gets in contact with water circuit breakers trip and they don't work again until they are dry. </p><p>
I encountered it when the AC water feed burst and co-incidentally the drain for the data centre had been blocked. If your power and data are through the floor then I would suggest that you invest in a good wet and dry vacuum cleaner. I do have other suggestions but this seems such a basic thing to me.</p></htmltext>
<tokentext>Power from the ceiling , data under the floor .
The reason is data centre floods do n't occur very often but when they do the d.c can tolerate the data cable being in water but when the power gets in contact with water circuit breakers trip and they do n't work again until they are dry .
I encountered it when the AC water feed burst and co-incidentally the drain for the data centre had been blocked .
If your power and data are through the floor then I would suggest that you invest in a good wet and dry vacuum cleaner .
I do have other suggestions but this seems such a basic thing to me .</tokentext>
<sentencetext>Power from the ceiling, data under the floor.
The reason is data centre floods don't occur very often but when they do the d.c can tolerate the data cable being in water but when the power gets in contact with water circuit breakers trip and they don't work again until they are dry.
I encountered it when the AC water feed burst and co-incidentally the drain for the data centre had been blocked.
If your power and data are through the floor then I would suggest that you invest in a good wet and dry vacuum cleaner.
I do have other suggestions but this seems such a basic thing to me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039258</id>
	<title>And location factors too</title>
	<author>Anonymous</author>
	<datestamp>1257765600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Is it near a hazardous materials facility? Under an airport approach or departure path? A rail line? A major highway possibly having traffic hauling hazardous goods? Gas, oil or chemical pipelines? Flood plain? etc.</p></htmltext>
<tokentext>Is it near a hazardous materials facility ?
Under an airport approach or departure path ?
A rail line ?
A major highway possibly having traffic hauling hazardous goods ?
Gas , oil or chemical pipelines ?
Flood plane ?
etc .</tokentext>
<sentencetext>Is it near a hazardous materials facility?
Under an airport approach or departure path?
A rail line?
A major highway possibly having traffic hauling hazardous goods?
Gas, oil or chemical pipelines?
Flood plain?
etc.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039136</id>
	<title>I'm going to turn this around.</title>
	<author>NoNsense</author>
	<datestamp>1257765120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>I am the Director of Operations for our DC.  When we give tours, I explain the following (pseudo order of the tour):</p><p>- Begin with the history of the building, when it was built (1995), why it was built (result of Andrew in 1992), and how it is constructed (twin T, poured tilt wall).</p><p>Infrastructure:<br>- Take you through the gen room, show you it is internal to the building, show you the roofing structure from the inside, explain the N+1 redundancy, the hours on the gens, when they are ready for maintenance, how they are maintained, by whom (the vendor), how the diesel is stored, supplied, duration of fuel at max and current loads.  Explain conduct before a hurricane or lockdown, how we go off grid 24 hours ahead of a storm, mention our various contracts for after storm refill and our straining / refill schedule.<br>- Take you to the switch gear room, explain the dual feeds from the power company, how the switch gear works, show you the three main bus breakers, show you the numerous other breakers for various sub panels, etc.  Explain and show you the spare breakers we have in case replacement is needed.<br>- Take you to the cooling tower area, explain the piping, the amount of water flowing, the number of pumps, how many are needed, the switching schedule, explain the N+1 capacity and overall capability of the towers, explain maintenance, show you the replacement pumps in stock, explain the concept of condensed water cooling if needed.<br>- Take you through the UPS and battery rooms, explain the needed kW capacity, what the UPSs back up and what they do not.  Show the various distribution breakers out to floor, their capacity, the static switches, bypass, explain the battery capacity, type of cells, number of cells, number of strings, last time the jars were replaced and how they are maintained.  Explain max capacity of the load vs time.  
Answer questions relevant to switching from utility-&gt;UPS-&gt;generator and back.</p><p>Raised floor:<br>- Take a walk on the raised floor, explain connectivity, vendors, path diversity we have, how the circuits are protected.  Show them network gear, dual everything, how we protect from a LAN or WAN outage, and specific network devices we have for DDoS, Load Balancing, Distribution, Aggregation.  Explain how telco and others deliver DS0 to OC-12 capacity, offer information on cross connections regarding copper, fiber, coax.  Explain our offerings (dedicated servers up to 5K sq ft cages) and ask what they are interested in.<br>- Explain below the floor, size of raise, that power and network is delivered under, what are on level one trays, level two trays, and the piping for cooling.  Show the PDU units and how they relate to the breakers in the previous rooms.  Show them the cooling panel and leads out to CRAC units, explain the cooling capacity, plans for future cooling, explain hot/cold aisle fundamentals, and temperature goals.  At this point, there are usually more questions about vented tiles, power types available and overall floor density in watts/sq ft.<br>- Explain the fire detection / mitigation system, monitoring of PDUs, CRAC units, and FM200.  Explain the maintenance of the fire system, show them the fire marshal inspection logs and the panels that alert the police and fire departments (both on floor and in our security office in front).<br>- While finishing the walk on the floor, show cameras, explain process to bring in and remove equipment, tell them the retention on the video, explain the rounds the guards make, the access list updates and changes.</p><p>NOC:<br>- At this point we're back to the front of the building, go into the NOC, explain what we are monitoring (connectivity, weather, scheduled jobs, etc).  
Introduce NOC and security staff, explain they will always get a person if they call, submit a test ticket from an e-mail on my phone, they will see the alerts light up and the pager for the NOC will signal.  The final steps are to introduce them to security and then I'll lead the customer(s) to the conference room so they can continue the conversation.</p></htmltext>
<tokentext>I am the Director of Operations for our DC .
When we give tours , I explain the following ( pseudo order of the tour ) : - Begin with the history of the building , when it was built ( 1995 ) , why it was build ( result of Andrew in 1992 ) , and how it is constructed ( twin T , poured tilt wall ) .Infastructure : - Take you through the gen room , show you it is internal to the building , show you the roofing structure from the inside , explain the N + 1 redundancy , the hours on the gens , when they are ready for maintenance , how they are maintained , by whom ( the vendor ) , how the diesel is stored , supplied , duration of fuel at max and current loads .
Explain conduct before a hurricane or lockdown , how we go off grid 24hours ahead of a storm , mention our various contracts for after storm refill and our straining / refill schedule.- Take you to the switch gear room , explain the dual feeds from the power company , how the switch gear works , show you the three main bus breakers , show you the numerous other breakers for various sub panels , etc .
Explain and show you the spare breakers we have in case replacement is needed.- Take you to the cooling tower area , explain the piping , the amount of water flowing , the number of pumps , how many are needed , the switching schedule , explain the N + 1 capacity and overall capability of the towers , explain maintenance , show you the replacement pumps in stock , explain the concept of condensed water cooling if needed.- Take you through the UPS and battery rooms , explain the needed KW capacity , what the UPSs back up and what they do not .
Show the various distribution breakers out to floor , their capacity , the static switches , bypass , explain the battery capacity , type of cells , number of cells , number of strings , last time the jars were replaced and how they are maintained .
Explain max capacity of the load vs time .
Answer questions relevant to switching from utility- &gt; UPS- &gt; generator and back.Raised floor : - Take walk on raised floor , explain connectivity , vendors , path diversity we have , how the circuits are protected .
Show them network gear , dual everything , how we protect from a LAN or WAN outage , and specific network devices we have for DDoS , Load Balancing , Distribution , Aggregation .
Explain how telco and others deliver DS0 to OC-12 capacity , offer information on cross connections regarding copper , fiber , coax .
Explain our offerings ( dedicated servers up to 5K sq ft cages ) and ask what they are interested in.- Explain below the floor , size of raise , that power and network is delivered under , what are on level one trays , level two trays , and the piping for cooling .
Show the PDU units and how they related to the breakers in the previous rooms .
Show them the cooling panel and leads out to CRAC units , explain the cooling capacity , plans for future cooling , explain hot/cold aisle fundamentals , and temperature goals .
At this point , there are usually more questions about vented tiles , power types available and overall floor density in watts/sq ft.- Explain the fire detection / mitigation system , monitoring of PDU 's , CRAC units , and FM200 .
Explain the maintenance of the fire system , show them the fire marshal inspection logs and the panels that alert the police and fire departments ( both on floor and in our security office in front ) .- While finishing the walk on the floor , show cameras , explain process to bring in and remove equipment , tell them the retention on the video , explain the rounds the guards make , the access list updates and changes.NOC : - At this point we 're back to the front of the building , go into the NOC , explain what we are monitoring ( connectivity , weather , scheduled jobs , etc ) .
Introduce NOC and security staff , explain they will always get a person if they call , submit a test ticket from a e-mail on my phone , they will see the alerts light up and the pager for the NOC will signal .
The final steps are to introduce them to security and then I 'll lead the customer ( s ) to the conference room so they can continue the conversation</tokentext>
<sentencetext>I am the Director of Operations for our DC.
When we give tours, I explain the following (pseudo order of the tour):- Begin with the history of the building, when it was built (1995), why it was built (result of Andrew in 1992), and how it is constructed (twin T, poured tilt wall).Infrastructure:- Take you through the gen room, show you it is internal to the building, show you the roofing structure from the inside, explain the N+1 redundancy, the hours on the gens, when they are ready for maintenance, how they are maintained, by whom (the vendor), how the diesel is stored, supplied, duration of fuel at max and current loads.
Explain conduct before a hurricane or lockdown, how we go off grid 24 hours ahead of a storm, mention our various contracts for after storm refill and our straining / refill schedule.- Take you to the switch gear room, explain the dual feeds from the power company, how the switch gear works, show you the three main bus breakers, show you the numerous other breakers for various sub panels, etc.
Explain and show you the spare breakers we have in case replacement is needed.- Take you to the cooling tower area, explain the piping, the amount of water flowing, the number of pumps, how many are needed, the switching schedule, explain the N+1 capacity and overall capability of the towers, explain maintenance, show you the replacement pumps in stock, explain the concept of condensed water cooling if needed.- Take you through the UPS and battery rooms, explain the needed KW capacity, what the UPSs back up and what they do not.
Show the various distribution breakers out to floor, their capacity, the static switches, bypass, explain the battery capacity, type of cells, number of cells, number of strings, last time the jars were replaced and how they are maintained.
Explain max capacity of the load vs time.
Answer questions relevant to switching from utility-&gt;UPS-&gt;generator and back.Raised floor:- Take walk on raised floor, explain connectivity, vendors, path diversity we have, how the circuits are protected.
Show them network gear, dual everything, how we protect from a LAN or WAN outage, and specific network devices we have for DDoS, Load Balancing, Distribution, Aggregation.
Explain how telco and others deliver DS0 to OC-12 capacity, offer information on cross connections regarding copper, fiber, coax.
Explain our offerings (dedicated servers up to 5K sq ft cages) and ask what they are interested in.- Explain below the floor, size of raise, that power and network is delivered under, what are on level one trays, level two trays, and the piping for cooling.
Show the PDU units and how they relate to the breakers in the previous rooms.
Show them the cooling panel and leads out to CRAC units, explain the cooling capacity, plans for future cooling, explain hot/cold aisle fundamentals, and temperature goals.
At this point, there are usually more questions about vented tiles, power types available and overall floor density in watts/sq ft.- Explain the fire detection / mitigation system, monitoring of PDU's, CRAC units, and FM200.
Explain the maintenance of the fire system, show them the fire marshal inspection logs and the panels that alert the police and fire departments (both on floor and in our security office in front).- While finishing the walk on the floor, show cameras, explain process to bring in and remove equipment, tell them the retention on the video, explain the rounds the guards make, the access list updates and changes.NOC:- At this point we're back to the front of the building, go into the NOC, explain what we are monitoring (connectivity, weather, scheduled jobs, etc).
Introduce NOC and security staff, explain they will always get a person if they call, submit a test ticket from an e-mail on my phone, they will see the alerts light up and the pager for the NOC will signal.
The final steps are to introduce them to security and then I'll lead the customer(s) to the conference room so they can continue the conversation</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039618</id>
	<title>Re:Get this out of the way</title>
	<author>Anonymous</author>
	<datestamp>1257767520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I prefer calling them Congress-Hertz</p></htmltext>
<tokentext>I prefer calling them Congress-Hertz</tokentext>
<sentencetext>I prefer calling them Congress-Hertz</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038548</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668</id>
	<title>i ran a junky data center</title>
	<author>Anonymous</author>
	<datestamp>1257763020000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>I ran a data center long, long ago.  My sales guy knew it wasn't going to pan out and threw me to the wolves.  He asked me to start the tour, and then he took a long lunch to miss it.</p><p>The guys I gave the tour to seemed very intelligent.  They only spent about 60 seconds on our data center.  The instant they saw the carpet, their eyebrows were up.  When I didn't lie to them that there was no diesel generator on the other side of the (secretly dead) batteries, they did exactly what they should have and stormed out without saying thanks.</p></htmltext>
<tokentext>I ran a data center long , long ago .
My sales guy knew it was n't going to pan out and threw me to the wolves .
He asked me to start the tour , and then he took a long lunch to miss it.The guys I gave the tour to seemed very intelligent .
They only spent about 60 seconds on our data center .
The instant they saw the carpet , their eyebrows were up .
When I did n't lie to them that there was no diesel generator on the other side of the ( secretly dead ) batteries , they did exactly what they should have and stormed out without saying thanks .</tokentext>
<sentencetext>I ran a data center long, long ago.
My sales guy knew it wasn't going to pan out and threw me to the wolves.
He asked me to start the tour, and then he took a long lunch to miss it.The guys I gave the tour to seemed very intelligent.
They only spent about 60 seconds on our data center.
The instant they saw the carpet, their eyebrows were up.
When I didn't lie to them that there was no diesel generator on the other side of the (secretly dead) batteries, they did exactly what they should have and stormed out without saying thanks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044178</id>
	<title>Re:an outside air duct</title>
	<author>Anonymous</author>
	<datestamp>1257858420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>outside air through a duct &amp; a fan? that can be dangerous. was it filtered for dust and humidity? if not, it might be a serious danger to your equipment.</p></htmltext>
<tokentext>outside air through a duct &amp; a fan ?
that can be dangerous .
was it filtered for dust and humidity ?
if not , it might be a serious danger to your equipment .</tokentext>
<sentencetext>outside air through a duct &amp; a fan?
that can be dangerous.
was it filtered for dust and humidity?
if not, it might be a serious danger to your equipment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043078</id>
	<title>Working Conditions</title>
	<author>Deal-a-Neil</author>
	<datestamp>1257885180000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>Is there a good desk working area?  Is there a landline/PBX for you to make calls from?  Is there decent mobile phone reception in the work area and by your cabinet?  Can you eat food or bring drinks into the work area or around your cabinet?  Is it in a shady neighborhood, where you might feel a little intimidated bringing in tens of thousands of dollars of emergency IT equipment @ 3 AM?  In the event that your credentials aren't working (i.e. hand scanner, ID card swipe), can they let you in remotely, or is it manned 24/7?  Is it carrier neutral and are there other backbone providers that you can connect with?  Do they charge for running cables between cabinets, especially in cases where the cabinets are not adjacent?  What is the max amperage that they'll provide per cabinet?  Do the rack cabinet doors remove easily?  Are there chairs available, and damn it, are they comfortable?</p></htmltext>
<tokentext>Is there a good desk working area ?
Is there a landline/PBX for you to make calls from ?
Is there decent mobile phone reception in the work area and by your cabinet ?
Can you eat food or bring drinks into the work area or around your cabinet ?
Is it in a shady neighborhood , where you might feel a little intimidated bringing in tens of thousands of dollars of emergency IT equipment @ 3 AM ?
In the event that your credentials are n't working ( i.e .
hand scanner , ID card swipe ) , can they let you in remotely , or is it manned 24/7 ?
Is it carrier neutral and are there other backbone providers that you can connect with ?
Do they charge for running cables between cabinets , especially in cases where the cabinets are not adjacent ?
What is the max amperage that they 'll provide per cabinet ?
Do the rack cabinet doors remove easily ?
Are there chairs available , and damn it , are they comfortable ?</tokentext>
<sentencetext>Is there a good desk working area?
Is there a landline/PBX for you to make calls from?
Is there decent mobile phone reception in the work area and by your cabinet?
Can you eat food or bring drinks into the work area or around your cabinet?
Is it in a shady neighborhood, where you might feel a little intimidated bringing in tens of thousands of dollars of emergency IT equipment @ 3 AM?
In the event that your credentials aren't working (i.e.
hand scanner, ID card swipe), can they let you in remotely, or is it manned 24/7?
Is it carrier neutral and are there other backbone providers that you can connect with?
Do they charge for running cables between cabinets, especially in cases where the cabinets are not adjacent?
What is the max amperage that they'll provide per cabinet?
Do the rack cabinet doors remove easily?
Are there chairs available, and damn it, are they comfortable?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039172</id>
	<title>Re:Just off the top of my head</title>
	<author>mcrbids</author>
	<datestamp>1257765300000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>As you indicate, these are hardly simple questions!</p><p>While I would not endorse them today, for years I hosted at GNI, part of 365 Main. Things generally worked well, even if their staff were terse and often unfriendly, so I had no particular complaints until they had a power problem that cost us about 2000 in direct cost and about two business days to finally, fully resolve.  The amount of terse double-speak that came out of them left a very bad taste in my mouth and I left as soon as I could.  Stay clear of 365 Main!</p><p>Our new colo is Herakles Data in Sacramento. There, too, things have pretty much 'just worked', but they're so much nicer to deal with!  And when the inevitable downtime did happen (a 'brownout' on the part of one of their redundant Cisco routers) they were quick to explain exactly what happened and even sent us forms in case we wanted to make a claim against our SLA! (I didn't bother just because I appreciated the respect they afforded me)</p><p>And it goes further - when I asked their sales guy about the best way to get a server rack for the development, they GAVE me one that they had replaced because of size limits for FREE!  On paper, both colos are similar, with full redundant everything, plenty of certification and nice, glossy promo materials.</p><p>In practice, they are like night and day.</p></htmltext>
<tokentext>As you indicate , these are hardly simple questions ! While I would not endorse them today , for years I hosted at GNI , part of 365 Main .
Things generally worked well , even if their staff were terse and often unfriendly , so I had no particular complaints until they had a power prolem that cost us about 2000 in direct cost and about two business days to finally , fully resolve .
The amount of terse double-speak that came out of them left a very bad taste in my mouth and I 've left as soon as I could .
Stay clear of 365 Main ! Our new colo is Herakles Data in Sacramento .
There , too , things have pretty much 'just worked ' , but they so much nicer to deal with !
And when the inevitable downtime did happen ( a 'brownout ' on the part of one of their redundant Cisco routers ) they were quick to explain exactly what happened and even sent us forms in case we wanted to make a claim against our SLA !
( I did n't bother just because I appreciated the respect they afforded me ) And it goes further - when I asked their sales guy about the best way to get a server ack for the development , they GAVE me one that they had replaced because of size limits for FREE !
On paper , both colos are similar , with full redundant everything , plenty of certification and nice , glossy promo materials . In practice , they are like night and day .</tokentext>
<sentencetext>As you indicate, these are hardly simple questions! While I would not endorse them today, for years I hosted at GNI, part of 365 Main.
Things generally worked well, even if their staff were terse and often unfriendly, so I had no particular complaints until they had a power problem that cost us about 2000 in direct costs and about two business days to finally, fully resolve.
The amount of terse double-speak that came out of them left a very bad taste in my mouth and I left as soon as I could.
Stay clear of 365 Main! Our new colo is Herakles Data in Sacramento.
There, too, things have pretty much 'just worked', but they are so much nicer to deal with!
And when the inevitable downtime did happen (a 'brownout' on the part of one of their redundant Cisco routers) they were quick to explain exactly what happened and even sent us forms in case we wanted to make a claim against our SLA!
(I didn't bother just because I appreciated the respect they afforded me.) And it goes further - when I asked their sales guy about the best way to get a server rack for development, they GAVE me one that they had replaced because of size limits for FREE!
On paper, both colos are similar, with full redundant everything, plenty of certification and nice, glossy promo materials. In practice, they are like night and day.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045738</id>
	<title>Re:Just off the top of my head</title>
	<author>Critical Facilities</author>
	<datestamp>1257869400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'm definitely not arguing with you here,  and I agree with all the supplemental points you added.  But,  in the interest of clarifying my points:<p><div class="quote"><p>Power quality: never seen a big datacenter without a Liebert, or at least a UPS in every rack. Power does not have to be conditioned except between the UPS and the machines/devices. A whole-data-center power conditioner is often more efficient, but unnecessary for the little guys. Either way - check.</p></div><p>I would argue that incoming power quality from the utility is still a key factor worth looking into.  While it's true that your UPS(s) are going to correct any power quality problems,  this also means that your UPS(s) have to work harder to correct the problem(s), and particularly in the case of low <a href="http://en.wikipedia.org/wiki/Power\_factor" title="wikipedia.org">Power Factor</a> [wikipedia.org],  you will get less actual power available (as you will begin to encounter current limitations on the "line" side of your UPS(s) before you've reached your designed capacity on the load side).  For example, a 100 kVA UPS at 0.8 power factor can deliver at most 80 kW of real power.<br> <br>Also,  when I refer to power quality,  I'm also referring to the frequency of power outages,  spikes,  and brownouts on any of your utility feeds.  This will be a good indication of how hard your UPS is being hit,  and how much risk your equipment is potentially being exposed to.  In the case of a conventional UPS with batteries,  when there are frequent brownouts or spikes,  it is often advisable to put a flywheel UPS in series before the standard UPS to absorb the small "hits" in order to extend the life of your batteries.</p><p><div class="quote"><p>- Fire suppression is usually part of your building codes, and a given, as are the routine checks (at least annually) required by law.</p></div><p>I would agree,  except that there are tests that are not always required by law that should be performed.  
In the case of pre-action sprinklers,  <a href="http://www.firesprinkler.org/techservices/mic/articles/article3.html" title="firesprinkler.org">MIC Testing</a> [firesprinkler.org] is something that should absolutely be looked at.  Also,  in the case of any gas suppression systems (e.g. FM-200),  routine checks for any potential leaks could prevent a very,  very expensive discharge.</p></div>
	</htmltext>
<tokenext>I 'm definitely not arguing with you here , and I agree with all the supplemental points you added .
But , in the interest of clarifying my points : Power quality : never seen a big datacenter without a Liebert , or at least UPS in every rack .
Power does not have to be conditioned except between the UPS and the machines/devices .
A whole data center power conditioner is often more efficient , but unnecessary for the little guys .
Either way - check . I would argue that incoming Power Quality from utility is still a key factor worth looking into .
While it 's true that your UPS ( s ) are going to correct any power quality problems , this also means that your UPS ( s ) have to work harder to correct the problem ( s ) , and particularly in the case of low Power Factor [ wikipedia.org ] , you will get less actual power available ( as you will begin to encounter current limitations on the " line " side of your UPS ( s ) before you 've reached your designed capacity on the load side .
Also , when I refer to Power Quality , I 'm also referring to the frequency of power outages , spikes , and brown outs on any of your utility feeds .
This will be a good indication of how much your UPS is being hit , and how much risk your equipment is potentially being exposed to .
In the case of a conventional UPS with batteries , when there are frequent brown outs or spikes , it is often advisable to put a Flywheel UPS in series before the Standard UPS to absorb the small " hits " in order to extend the life of your batteries . - Fire suppression is usually part of your building codes , and a given , as are the routine checks ( at least annually ) required by law . I would agree , except that there are tests that are not always required by law that should be performed .
In the case of Pre-Action Sprinklers , MIC Testing [ firesprinkler.org ] is something that should absolutely be looked at .
Also , in the case of any Gas Suppression Systems ( i.e .
FM-200 ) , routine checks for any potential leaks could prevent a very , very expensive discharge .</tokentext>
<sentencetext>I'm definitely not arguing with you here,  and I agree with all the supplemental points you added.
But,  in the interest of clarifying my points:Power quality: never seen a big datacenter without a Liebert, or at least UPS in every rack.
Power does not have to be conditioned except between the UPS and the machines/devices.
A whole data center power conditioner is often more efficient, but unnecessary for the little guys.
Either way - check. I would argue that incoming Power Quality from utility is still a key factor worth looking into.
While it's true that your UPS(s) are going to correct any power quality problems,  this also means that your UPS(s) have to work harder to correct the problem(s), and particularly in the case of low Power Factor [wikipedia.org],  you will get less actual power available (as you will begin to encounter current limitations on the "line" side of your UPS(s) before you've reached your designed capacity on the load side.
Also,  when I refer to Power Quality,  I'm also referring to the frequency of power outages,  spikes,  and brown outs on any of your utility feeds.
This will be a good indication of how much your UPS is being hit,  and how much risk your equipment is potentially being exposed to.
In the case of a conventional UPS with batteries,  when there are frequent brown outs or spikes,  it is often advisable to put a Flywheel UPS in series before the Standard UPS to absorb the small "hits" in order to extend the life of your batteries. - Fire suppression is usually part of your building codes, and a given, as are the routine checks (at least annually) required by law. I would agree,  except that there are tests that are not always required by law that should be performed.
In the case of Pre-Action Sprinklers,  MIC Testing [firesprinkler.org] is something that should absolutely be looked at.
Also,  in the case of any Gas Suppression Systems (i.e.
FM-200),  routine checks for any potential leaks could prevent a very,  very expensive discharge.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043214</id>
	<title>White Mountain datacenter</title>
	<author>G3ckoG33k</author>
	<datestamp>1257844140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>White Mountain datacenter in downtown Stockholm, Sweden. It is located in a bunker 30 meters under solid bedrock. It was a Cold War bunker that was converted into this datacenter and is said to be able to withstand a hydrogen bomb blast.

<a href="http://www.youtube.com/watch?v=qwlATf9xse4" title="youtube.com">http://www.youtube.com/watch?v=qwlATf9xse4</a> [youtube.com]</htmltext>
<tokenext>White Mountain datacenter in downtown Stockholm , Sweden .
It is located in a bunker 30 meters under solid bedrock .
It was a cold war bunker that was converted into this datacenter and is said to be able to withstand a Hydrogen bomb blast .
http : //www.youtube.com/watch ? v = qwlATf9xse4 [ youtube.com ]</tokentext>
<sentencetext>White Mountain datacenter in downtown Stockholm, Sweden.
It is located in a bunker 30 meters under solid bedrock.
It was a cold war bunker that was converted into this datacenter and is said to be able to withstand a Hydrogen bomb blast.
http://www.youtube.com/watch?v=qwlATf9xse4 [youtube.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040274</id>
	<title>Some more points to consider...</title>
	<author>GetSteved</author>
	<datestamp>1257771000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Full Disclosure:  I work for a (Great!) data center provider (<a href="http://www.viawest.com/" title="viawest.com" rel="nofollow">ViaWest</a> [viawest.com] ).</p><p><b>Infrastructure:</b><br>- What is the UPS run-time?<br>- What is the generator startup time?<br>- What is the genset capacity in relation to UPS demand?  (i.e.  is the UPS demand larger than the genset capacity? You'd be surprised!)<br>- Does the provider have multiple refueling contracts?<br>- Are the refueling contracts high priority?<br>- Can the provider detail green initiatives to improve PUE?<br>- Does the provider have sufficient capital resources to expand the data center?<br>- How much investment has the company made - this year - in the data center?<br>- Is the data center in a flood plain? Check <a href="http://msc.fema.gov/" title="fema.gov" rel="nofollow">http://msc.fema.gov/</a> [fema.gov]</p><p><b>Compliance:</b><br>- Is the data center SAS-70 Type II audited?  Type II means they're serious about it.<br>- Are the results of the audit available for review?<br>- Is a list of control objectives available?<br>- How does the provider assist with customer audits?  (i.e.  PCI auditor requests for info)<br>- Can the provider demonstrate servicing other companies where compliance is a requirement?<br>- Will there be additional charges for audit-related work or requests?</p><p><b>Network / Remote Hands:</b><br>- Does the provider offer managed hosting / hybrid hosting options?<br>- What is the expertise level of the NOC staff?<br>- How are remote hands charged?<br>- What is the response time for a remote hands event?<br>- What monitoring options are available?</p><p><b>Corporate:</b><br>- Does the company have a documented business continuity plan?<br>- Are the company financials available for review?</p></htmltext>
<tokenext>Full Disclosure : I work for a ( Great !
) data center provider ( ViaWest [ viawest.com ] ) .Infrastructure : - What is the UPS run-time ? - What is the generator startup time ? - What is the genset capacity in relation to UPS demand ?
( i.e. is the UPS demand larger than the genset capacity - you 'd be surprised !
) - Does the provider have multiple refueling contracts ? - Are the refueling contracts high priority ? - Can the provider detail out green initiatives to improve PUE ? - Does the provider have sufficient capital resources to expand the data center ? - How much investment has the company made - this year - into the data center ? - Is the data center in a flood plain ?
Check http : //msc.fema.gov/ [ fema.gov ] Compliance : - Is the data center SAS-70 type II audited ?
Type II means they 're serious about it . - Are the results of the audit available for review ? - Is a list of control objectives available ? - How does the provider assist with customer audits ?
( i.e. PCI auditor requests for info ) - Can the provider demonstrate servicing other companies where compliance is a requirement ? - Will there be additional charges for audit related work or requests ? Network Remote Hands- Does the provider offer managed hosting / hybrid hosting options- What is the expertise level of the NOC staff ? - How are remote hands charged ? - What is the response time for a remote hands event ? - What monitoring options are available ? Corporate- Does the company have a business continuity plan documented ? - Are the company financials available for review ?</tokentext>
<sentencetext>Full Disclosure:  I work for a (Great!
) data center provider (ViaWest [viawest.com] ).Infrastructure:- What is the UPS run-time?- What is the generator startup time?- What is the genset capacity in relation to UPS demand?
(i.e.  is the UPS demand larger than the genset capacity - you'd be surprised!
)- Does the provider have multiple refueling contracts?- Are the refueling contracts high priority?- Can the provider detail out green initiatives to improve PUE?- Does the provider have sufficient capital resources to expand the data center?- How much investment has the company made - this year - into the data center?- Is the data center in a flood plain?
Check http://msc.fema.gov/ [fema.gov]Compliance:- Is the data center SAS-70 type II audited?
Type II means they're serious about it. - Are the results of the audit available for review?- Is a list of control objectives available?- How does the provider assist with customer audits?
(i.e.  PCI auditor requests for info)- Can the provider demonstrate servicing other companies where compliance is a requirement?- Will there be additional charges for audit related work or requests?Network Remote Hands- Does the provider offer managed hosting / hybrid hosting options- What is the expertise level of the NOC staff?- How are remote hands charged?- What is the response time for a remote hands event?- What monitoring options are available?Corporate- Does the company have a business continuity plan documented?- Are the company financials available for review?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044384</id>
	<title>Re:Just off the top of my head</title>
	<author>Sandbags</author>
	<datestamp>1257860700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Medical data mostly, but naturally that also includes billing records, credit cards and bank transaction data, and lots of other information that should not be leaked.  Of course, the $150M in equipment is something to be concerned about as well, especially when you have a few hundred employees and contractors with datacenter access on a regular basis.</p><p>We're not a "typical business" because of the data, and also because of who owns many of the servers in our datacenters (only about 2/3rds actually belong to us directly, and half of those go with a contract if we lose it to a competitor).  We have numerous different government (state and federal) security standards to adhere to.  We go through an external security audit from one group or another every few weeks ensuring we're sticking to the standards for their servers and their data.</p></htmltext>
<tokenext>Medical data mostly , but naturally that also includes billing records , credit cards and bank transaction data , and lots of other information that should not be leaked .
Of course , the $ 150M in equipment is something to be concerned about as well , especially when you have a few hundred employees and contractors with datacenter access on a regular basis . We 're not a " typical business " because of the data , and also because of who owns many of the servers in our datacenters ( only about 2/3rds actually belong to us directly , and half of those go with a contract if we lose it to a competitor ) .
We have numerous different government ( state and federal ) security standards to adhere to .
We go through an external security audit from one group or another every few weeks ensuring we 're sticking to the standards for their servers and their data .</tokentext>
<sentencetext>Medical data mostly, but naturally that also includes billing records, credit cards and bank transaction data, and lots of other information that should not be leaked.
Of course, the $150M in equipment is something to be concerned about as well, especially when you have a few hundred employees and contractors with datacenter access on a regular basis. We're not a "typical business" because of the data, and also because of who owns many of the servers in our datacenters (only about 2/3rds actually belong to us directly, and half of those go with a contract if we lose it to a competitor).
We have numerous different government (state and federal) security standards to adhere to.
We go through an external security audit from one group or another every few weeks ensuring we're sticking to the standards for their servers and their data.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042182</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041244</id>
	<title>Re:Just off the top of my head</title>
	<author>linuxwrangler</author>
	<datestamp>1257777420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>- Raised floor is certainly important, and a given.  Check</p></div><p>Not so fast. I've been very happy at the Switch SuperNAP, which is on concrete with all cabling run overhead. And for very good reason. The typical (though changing) datacenter has mixed hot and cold air - typically cold air pumped up from the bottom (?!? kind of fighting nature, there) and then allowed to rise into the ceiling. The alternative at Switch is strict hot/cold-aisle isolation. Cold air drops down, as per nature, on the cold (intake) side and is contained on the hot side, where it rises and is pulled from the building.</p><p>Additionally, they can handle extremely heavy equipment loads that could be difficult on raised floors.</p><p><div class="quote"><p>- Cooling capacity is hard to judge, should be scalable.  Redundancy is often overlooked but is often even more important than capacity...  Check</p></div><p>Indeed. Many centers talk about redundant power, but that is useless if the cooling goes out. High-density centers can go well past 100F in just a few minutes without cooling, but I've seen SLAs that allow a several-hour cooling outage. Not to be a Switch shill (though we are happy), but they have developed a setup that allows them to run indefinitely, albeit less efficiently, using air exchange only if their cooling water supply is cut off.</p><p><div class="quote"><p>- Power quality:  never seen a big datacenter without a Liebert, or at least UPS in every rack.</p></div><p>I'm happy with the arrangement at Switch. Three full power buses. Sure, they are all generator-backed (with aggressive testing cycles), UPS-backed, yada, yada, as with other centers I use. But they also insist on customers taking power from two buses, so if, in spite of generator and UPS protection, a power bus fails, you will still have power. And since they only run at 2/3 power per bus, they can reroute power and continue providing dual redundant power if one bus is down.</p></div>
	</htmltext>
<tokenext>- Raised floor is certainly important , and a given .
CheckNot so fast .
I 've been very happy at the Switch SuperNAP which is on concrete with all cabling run overhead .
And for very good reason .
The typical ( though changing ) datacenter has mixed hot and cold air - typically cold air pumped up from the bottom ( ? ! ?
kind of fighting nature , there ) then allowed to rise into the ceiling .
The alternative at Switch is strict hot/cold-aisle isolation .
Cold air drops down as per nature on the cold ( intake ) side and is contained on the hot-side where it rises and is pulled from the building.Additionally , they can handle extremely heavy equipment loads that could be difficult on raised floors.- Cooling capacity is hard to judge , should be scalable .
Redundancy is often overlooked but is often even more important than capacity ... Check . Indeed , many centers talk about redundant power but that is useless if the cooling goes out .
High-density centers can go well past 100F in just a few minutes without cooling but I 've seen SLAs that allow a several-hour cooling outage .
Not to be a Switch shill ( though we are happy ) , they have developed a setup that allows them to run indefinitely , albeit less efficiently , using air-exchange only if their cooling water supply is cut off.- Power quality : never seen a big datacenter without a Liebert , or at least UPS in every rack.I 'm happy with the arrangement at Switch .
Three full power buses .
Sure they are all generator backed ( with aggressive testing cycles ) UPS backed , yada , yada as with other centers I use .
But they also insist on customers taking power from two buses so if , in spite of generator and UPS protection , a power bus fails you will still have power .
And since they only run at 2/3 power/bus , they can reroute power and continue providing dual redundant power if one bus is down .</tokentext>
<sentencetext>- Raised floor is certainly important, and a given.
CheckNot so fast.
I've been very happy at the Switch SuperNAP which is on concrete with all cabling run overhead.
And for very good reason.
The typical (though changing) datacenter has mixed hot and cold air - typically cold air pumped up from the bottom (?!?
kind of fighting nature, there) then allowed to rise into the ceiling.
The alternative at Switch is strict hot/cold-aisle isolation.
Cold air drops down as per nature on the cold (intake) side and is contained on the hot-side where it rises and is pulled from the building.Additionally, they can handle extremely heavy equipment loads that could be difficult on raised floors.- Cooling capacity is hard to judge, should be scalable.
Redundancy is often overlooked but is often even more important than capacity...  Check. Indeed, many centers talk about redundant power but that is useless if the cooling goes out.
High-density centers can go well past 100F in just a few minutes without cooling but I've seen SLAs that allow a several-hour cooling outage.
Not to be a Switch shill (though we are happy), they have developed a setup that allows them to run indefinitely, albeit less efficiently, using air-exchange only if their cooling water supply is cut off.- Power quality:  never seen a big datacenter without a Liebert, or at least UPS in every rack.I'm happy with the arrangement at Switch.
Three full power buses.
Sure they are all generator backed (with aggressive testing cycles) UPS backed, yada, yada as with other centers I use.
But they also insist on customers taking power from two buses so if, in spite of generator and UPS protection, a power bus fails you will still have power.
And since they only run at 2/3 power/bus, they can reroute power and continue providing dual redundant power if one bus is down.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043358</id>
	<title>Don't  forget to read the small print</title>
	<author>Anonymous</author>
	<datestamp>1257846420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>What kind of contract must you sign? Is it in legalese or in understandable English? Will the data center only be liable for broken hardware in case of a failure or will they also compensate for your loss of revenue and blemished reputation?</p></htmltext>
<tokenext>What kind of contract must you sign ?
Is it in legalese or in understandable English ?
Will the data center only be liable for broken hardware in case of a failure or will they also compensate for your loss of revenue and blemished reputation ?</tokentext>
<sentencetext>What kind of contract must you sign?
Is it in legalese or in understandable English?
Will the data center only be liable for broken hardware in case of a failure or will they also compensate for your loss of revenue and blemished reputation?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038</id>
	<title>an outside air duct</title>
	<author>spywhere</author>
	<datestamp>1257764580000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>When I worked at a corporate office in Maryland, they used the building's air conditioning to cool the server room.<br>
This worked well until the outside temperature got down to about 15 degrees Fahrenheit, but then it failed miserably: the outdoor condensers no longer functioned, the AC shut down, and the entire IT department went into a panic.<br>
The first time this happened, I (a lowly Help Desk tech) suggested to the CIO that he run a duct into the room from the outside: a simple fan would bring in enough sub-freezing air to cool the servers.<br>
The <i>second</i> time it happened, the look on his face told me he hadn't taken my suggestion seriously enough.<br>
The <b>third</b> time, he flipped a switch and the fan cooled his server room just fine.</htmltext>
<tokenext>When I worked at a corporate office in Maryland , they used the building 's air conditioning to cool the server room .
This worked well until the outside temperature got down to about 15 degrees Fahrenheit , but then it failed miserably : the outdoor condensers no longer functioned , the AC shut down , and the entire IT department went into a panic .
The first time this happened , I ( a lowly Help Desk tech ) suggested to the CIO that he run a duct into the room from the outside : a simple fan would bring in enough sub-freezing air to cool the servers .
The second time it happened , the look on his face told me he had n't taken my suggestion seriously enough .
The third time , he flipped a switch and the fan cooled his server room just fine .</tokentext>
<sentencetext>When I worked at a corporate office in Maryland, they used the building's air conditioning to cool the server room.
This worked well until the outside temperature got down to about 15 degrees Fahrenheit, but then it failed miserably: the outdoor condensers no longer functioned, the AC shut down, and the entire IT department went into a panic.
The first time this happened, I (a lowly Help Desk tech) suggested to the CIO that he run a duct into the room from the outside: a simple fan would bring in enough sub-freezing air to cool the servers.
The second time it happened, the look on his face told me he hadn't taken my suggestion seriously enough.
The third time, he flipped a switch and the fan cooled his server room just fine.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041848</id>
	<title>other stuff</title>
	<author>mlg9000</author>
	<datestamp>1257782940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Besides the obvious backup power, cooling, environmental stuff mentioned above....</p><p>1. Expandability: how much rack/cage space is available nearby?  Get a right of refusal on any empty space near your stuff if you can.<br>2. Max power per rack: can you get enough for SANs, blades, etc.?<br>3. Remote hands availability/skills/costs (really want to make a trip to the datacenter to replace dead hard drives? No. Do the employees know enough to do *limited* work for you?)<br>4. 24/7 access, near employees if a physical presence is required<br>5. What sort of racks do they have?  Can you buy your own?<br>6. Storage space.  Need 100 servers shipped but don't have racks yet?  Need to keep spare hardware on hand?  Do they have room to keep that for you?<br>7. How many/what carriers do they offer?  How is access delivered?  What does their network look like?<br>8. Do they have spare tools/network cables/misc parts?  Can they order stuff for you, or is there some place nearby you can pick things up if needed in a hurry?<br>9. How many employees can you get access for?<br>10. Do they have a crash cart?  Comfortable place to work?  Wifi or other forms of internet access available?</p></htmltext>
<tokenext>Besides the obvious backup power , cooling , environmental stuff mentioned above....1 .
Expandability.. how much rack/cage space is available nearby ?
Get a right of refusal on any empty space near your stuff if you can.2 .
Power max per rack , can you get enough for SAN 's , blades etc3 .
Remote hands availability/skills/costs ( really want to make a trip to the datacenter to replace dead hard drives ?
No. Do the employees know enough to do * limited * work for you ) 4 .
24/7 access , near employees if a physical presence is required5 .
What sort of racks do they have ?
Can you buy your own ? 6 .
Storage space .
Need 100 servers shipped but do n't have racks yet ?
Need to keep spare hardware on hand ?
Do they have room to keep that for you ? 7 .
How many/what carriers do they offer ?
How is access delivered ?
What does their network look like ? 8 .
Do they have spare tools/network cables/misc parts .
Can they order stuff for you , or is there some place nearby you can pick things up if needed in a hurry ? 9 .
How many employees can you get access for ? 10 .
Do they have a crash cart ?
Comfortable place to work ?
Wifi or other forms of internet access available ?</tokentext>
<sentencetext>Besides the obvious backup power, cooling, environmental stuff mentioned above....1.
Expandability.. how much rack/cage space is available nearby?
Get a right of refusal on any empty space near your stuff if you can.2.
Power max per rack, can you get enough for SAN's, blades etc3.
Remote hands availability/skills/costs (really want to make a trip to the datacenter to replace dead hard drives?
No. Do the employees know enough to do *limited* work for you)4.
24/7 access, near employees if a physical presence is required5.
What sort of racks do they have?
Can you buy your own?6.
Storage space.
Need 100 servers shipped but don't have racks yet?
Need to keep spare hardware on hand?
Do they have room to keep that for you?7.
How many/what carriers do they offer?
How is access delivered?
What does their network look like?8.
Do they have spare tools/network cables/misc parts.
Can they order stuff for you, or is there some place nearby you can pick things up if needed in a hurry?9.
How many employees can you get access for?10.
Do they have a crash cart?
Comfortable place to work?
Wifi or other forms of internet access available?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038776</id>
	<title>Word of mouth</title>
	<author>tomhudson</author>
	<datestamp>1257763500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Find someone you trust who's already a customer.  Word of mouth beats any number of white papers or studies or guarantees.
</p></htmltext>
<tokentext>Find someone you trust who 's already a customer .
Word of mouth beats any number of white papers or studies or guarantees .</tokentext>
<sentencetext>Find someone you trust who's already a customer.
Word of mouth beats any number of white papers or studies or guarantees.
</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040718</id>
	<title>My only qualification for a datacenter</title>
	<author>Anonymous</author>
	<datestamp>1257773520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My only qualification for a datacenter:  Will it blend?</p></htmltext>
<tokentext>My only qualification for a datacenter : Will it blend ?</tokentext>
<sentencetext>My only qualification for a datacenter:  Will it blend?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040480</id>
	<title>Re:Just off the top of my head</title>
	<author>whoever57</author>
	<datestamp>1257772080000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><blockquote><div><p>Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors.</p></div></blockquote><p>

Hogwash.</p></div></blockquote><p>

Yeah, what do I know about the subject? I'm just quoting from a recent talk given by Subodh Bapat, Vice President, <b>Energy Efficiency and Distinguished Engineer</b>, Sun Microsystems.<br> <br>
Oh, and there are <a href="http://www.greentechmedia.com/articles/read/concrete-floors-equal-data-center-power-savings-6099/" title="greentechmedia.com"> some articles about this</a> [greentechmedia.com] <br> <br>
But please, continue to refute my statement with clear, unsupported, single-word denials. They carry so much weight in an argument.</p>
	</htmltext>
<tokentext>Newer datacenters do n't have raised floors because it is more energy efficient to have concrete floors .
Hogwash .
Yeah , what do I know about the subject ?
I 'm just quoting from a recent talk given by Subodh Bapat , Vice President , Energy Efficiency and Distinguished Engineer , Sun Microsystems .
Oh , and there are some articles about this [ greentechmedia.com ] .
But please , continue to refute my statement with clear , unsupported , single-word denials .
They carry so much weight in an argument .</tokentext>
<sentencetext>Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors.
Hogwash.

Yeah, what do I know about the subject?
I'm just quoting from a recent talk given by Subodh Bapat, Vice President, Energy Efficiency and Distinguished Engineer, Sun Microsystems.
Oh, and there are some articles about this [greentechmedia.com].
But please, continue to refute my statement with clear, unsupported, single-word denials.
They carry so much weight in an argument.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040902</id>
	<title>Re:Personnel</title>
	<author>Anonymous</author>
	<datestamp>1257774660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>lighter for joint to deal with management, orange juice to protect against colds, alcohol to protect against freezing computer room.</p></htmltext>
<tokentext>lighter for joint to deal with management , orange juice to protect against colds , alcohol to protect against freezing computer room .</tokentext>
<sentencetext>lighter for joint to deal with management, orange juice to protect against colds, alcohol to protect against freezing computer room.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039058</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30046930</id>
	<title>Even the best fail</title>
	<author>ebvigmo</author>
	<datestamp>1257874200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just wanted to comment that the data center I hosted at for 7 years (Chicago Equinix via Internap) had all the right stuff.  But when the power to the facility went out after a fire broke out at a local power station, one of the UPS generators failed when the wires shorted out.  The other generator (it was an N+1 facility) couldn't handle the load and the whole facility was down. Good thing for us it was on a Friday night which was a slow time for our ECommerce site (ArtSelect.com, now shut down by art.com who bought us).</p></htmltext>
<tokentext>Just wanted to comment that the data center I hosted at for 7 years ( Chicago Equinix via Internap ) had all the right stuff .
But when the power to the facility went out after a fire broke out at a local power station , one of the UPS generators failed when the wires shorted out .
The other generator ( it was an N + 1 facility ) could n't handle the load and the whole facility was down .
Good thing for us it was on a Friday night which was a slow time for our ECommerce site ( ArtSelect.com , now shut down by art.com who bought us ) .</tokentext>
<sentencetext>Just wanted to comment that the data center I hosted at for 7 years (Chicago Equinix via Internap) had all the right stuff.
But when the power to the facility went out after a fire broke out at a local power station, one of the UPS generators failed when the wires shorted out.
The other generator (it was an N+1 facility) couldn't handle the load and the whole facility was down.
Good thing for us it was on a Friday night which was a slow time for our ECommerce site (ArtSelect.com, now shut down by art.com who bought us).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039004</id>
	<title>You missed a few</title>
	<author>Anonymous</author>
	<datestamp>1257764460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>You forgot a few:</p><p>- Enough qualified *on site* staff 24x7 to deal with all clients including yourself</p><p>- 24x7 phone support, with people who understand English and have immediate access to the techies</p><p>- Company financial records and history (You don't want someone almost broke or a new startup with no backing)</p><p>- These days availability of virtualisation solution and supporting hardware (depending on your application, if virtualisation is an option)</p><p>Oh and your emphasis on maintenance records may be a little misplaced. They can be faked. They also may not be available due to security concerns (of their other clients). *IF* you can get hold of them they should be complete. Hardware service level should be part of the agreement and service schedule should be part of that.</p></htmltext>
<tokentext>You forgot a few :
- Enough qualified * on site * staff 24x7 to deal with all clients including yourself
- 24x7 phone support , with people who understand English and have immediate access to the techies
- Company financial records and history ( You do n't want someone almost broke or a new startup with no backing )
- These days availability of virtualisation solution and supporting hardware ( depending on your application , if virtualisation is an option )
Oh and your emphasis on maintenance records may be a little misplaced .
They can be faked .
They also may not be available due to security concerns ( of their other clients ) .
* IF * you can get hold of them they should be complete .
Hardware service level should be part of the agreement and service schedule should be part of that .</tokentext>
<sentencetext>You forgot a few:
- Enough qualified *on site* staff 24x7 to deal with all clients including yourself
- 24x7 phone support, with people who understand English and have immediate access to the techies
- Company financial records and history (You don't want someone almost broke or a new startup with no backing)
- These days availability of virtualisation solution and supporting hardware (depending on your application, if virtualisation is an option)
Oh and your emphasis on maintenance records may be a little misplaced.
They can be faked.
They also may not be available due to security concerns (of their other clients).
*IF* you can get hold of them they should be complete.
Hardware service level should be part of the agreement and service schedule should be part of that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041092</id>
	<title>Re:Personnel</title>
	<author>Anonymous</author>
	<datestamp>1257776040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>HAHAHA This is so right, i've toured dozens of datacenters and this is what I always find:<br>- A fat, hairy, untanned guy with a beard who's like the server crypt keeper<br>- An indian guy named sameer or some shit like that, he's always got a phd from some ivy league school<br>- A pale white guy with long hair in a pony tail who seems to know every IP &amp; port off the top of his head</p></htmltext>
<tokentext>HAHAHA This is so right , i 've toured dozens of datacenters and this is what I always find :
- A fat , hairy , untanned guy with a beard who 's like the server crypt keeper
- An indian guy named sameer or some shit like that , he 's always got a phd from some ivy league school
- A pale white guy with long hair in a pony tail who seems to know every IP &amp; port off the top of his head</tokentext>
<sentencetext>HAHAHA This is so right, i've toured dozens of datacenters and this is what I always find:
- A fat, hairy, untanned guy with a beard who's like the server crypt keeper
- An indian guy named sameer or some shit like that, he's always got a phd from some ivy league school
- A pale white guy with long hair in a pony tail who seems to know every IP &amp; port off the top of his head</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039058</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045966</id>
	<title>Cooling failure modes</title>
	<author>Nicolas MONNET</author>
	<datestamp>1257870480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This summer Equinix Paris had a major failure of their cooling system. Of course, they had a backup, but as was expected, the backup was identical to the primary system, and therefore failed identically. Temperature rose above 55C AFAICT. We didn't experience hardware failure since all our servers shut down automagically at 45C. We also had all our systems clustered over a gigabit MAN to another DC, so we suffered only a minor outage.</p><p>Shit happens. You always have to keep that in mind. But two things could have made the whole situation better.</p><p>- They didn't alert us of the failure. We could have avoided the outage completely had we been called when the cooling system failed, instead of finding out an hour later. We could have supervised the switch over, and there would have been no loss of service at all.</p><p>- Datacenters should not get that hot, ever. They should be built so that they don't get much hotter than outside temp; it's not that hard, thanks to a high technology device called a fucking WINDOW. Too hot? AC down? You open the god damn windows! I believe the latest DCs are built that way, with passive cooling doing most of the job.</p><p>You don't need high end AC with temperature controlled down to the degree. It's a complete waste of money. Instead you should have reliable AC, with graceful failure modes.</p></htmltext>
<tokentext>This summer Equinix Paris had a major failure of their cooling system .
Of course , they had a backup , but as was expected , the backup was identical to the primary system , and therefore failed identically .
Temperature rose above 55C AFAICT .
We did n't experience hardware failure since all our servers shut down automagically at 45C .
We also had all our systems clustered over a gigabit MAN to another DC , so we suffered only a minor outage .
Shit happens .
You always have to keep that in mind .
But two things could have made the whole situation better .
- They did n't alert us of the failure .
We could have avoided the outage completely had we been called when the cooling system failed , instead of finding out an hour later .
We could have supervised the switch over , and there would have been no loss of service at all .
- Datacenters should not get that hot , ever .
They should be built so that they do n't get much hotter than outside temp ; it 's not that hard , thanks to a high technology device called a fucking WINDOW .
Too hot ?
AC down ?
You open the god damn windows !
I believe the latest DCs are built that way , with passive cooling doing most of the job .
You do n't need high end AC with temperature controlled down to the degree .
It 's a complete waste of money .
Instead you should have reliable AC , with graceful failure modes .</tokentext>
<sentencetext>This summer Equinix Paris had a major failure of their cooling system.
Of course, they had a backup, but as was expected, the backup was identical to the primary system, and therefore failed identically.
Temperature rose above 55C AFAICT.
We didn't experience hardware failure since all our servers shut down automagically at 45C.
We also had all our systems clustered over a gigabit MAN to another DC, so we suffered only a minor outage.
Shit happens.
You always have to keep that in mind.
But two things could have made the whole situation better.
- They didn't alert us of the failure.
We could have avoided the outage completely had we been called when the cooling system failed, instead of finding out an hour later.
We could have supervised the switch over, and there would have been no loss of service at all.
- Datacenters should not get that hot, ever.
They should be built so that they don't get much hotter than outside temp; it's not that hard, thanks to a high technology device called a fucking WINDOW.
Too hot?
AC down?
You open the god damn windows!
I believe the latest DCs are built that way, with passive cooling doing most of the job.
You don't need high end AC with temperature controlled down to the degree.
It's a complete waste of money.
Instead you should have reliable AC, with graceful failure modes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041524</id>
	<title>Re:an outside air duct</title>
	<author>Anonymous</author>
	<datestamp>1257779820000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've also had to do this in a cooling emergency (which luckily happened in the middle of winter in Wisconsin).  The important thing to remember is that you lose control of humidity, which is almost as important as temperature.</p></htmltext>
<tokentext>I 've also had to do this in a cooling emergency ( which luckily happened in the middle of winter in Wisconsin ) .
The important thing to remember is that you lose control of humidity , which is almost as important as temperature .</tokentext>
<sentencetext>I've also had to do this in a cooling emergency (which luckily happened in the middle of winter in Wisconsin).
The important thing to remember is that you lose control of humidity, which is almost as important as temperature.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041754</id>
	<title>One important criterion that needs to be met</title>
	<author>waferbuster</author>
	<datestamp>1257781980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Based on a recent video on slashdot, the datacenter needs to be located other than in a flood plain.</htmltext>
<tokentext>Based on a recent video on slashdot , the datacenter needs to be located other than in a flood plain .</tokentext>
<sentencetext>Based on a recent video on slashdot, the datacenter needs to be located other than in a flood plain.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040142</id>
	<title>Re:Just off the top of my head</title>
	<author>Trogre</author>
	<datestamp>1257770340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You forgot the pony.</p></htmltext>
<tokentext>You forgot the pony .</tokentext>
<sentencetext>You forgot the pony.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038554</id>
	<title>first</title>
	<author>electrosoccertux</author>
	<datestamp>1257762540000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>you evaluate it by how many cups for frosty piss it can...<br>wait...<br>in soviet russia, data centers evaluate you!!!!<br>Yeah there we go.</p></htmltext>
<tokentext>you evaluate it by how many cups for frosty piss it can ... wait ... in soviet russia , data centers evaluate you ! ! ! !
Yeah there we go .</tokentext>
<sentencetext>you evaluate it by how many cups for frosty piss it can...wait...in soviet russia, data centers evaluate you!!!!
Yeah there we go.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041628</id>
	<title>Reboots</title>
	<author>Anonymous</author>
	<datestamp>1257780780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A good thing to check is the average time from submission until a reboot is completed.  Along with this, do they have an SLA for reboots, and what is the price for a reboot?</p><p>Also, if you might need remote hands (if you are not within driving distance of the DC), how long do they take to attach the equipment, and at what cost for what length?</p></htmltext>
<tokentext>A good thing to check is the average time from submission until a reboot is completed .
Along with this , do they have an SLA for reboots , and what is the price for a reboot ?
Also , if you might need remote hands ( if you are not within driving distance of the DC ) , how long do they take to attach the equipment , and at what cost for what length ?</tokentext>
<sentencetext>A good thing to check is the average time from submission until a reboot is completed.
Along with this, do they have an SLA for reboots, and what is the price for a reboot?
Also, if you might need remote hands (if you are not within driving distance of the DC), how long do they take to attach the equipment, and at what cost for what length?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039120</id>
	<title>Freight Elevator capacity...</title>
	<author>HockeyPuck</author>
	<datestamp>1257765060000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>I used to have a large cage in an Exodus colocation facility.  Turns out that if we wanted to put in an EMC Symm5 (these are three tiles wide), we would have to rent a fork lift and put it through an open rollup door on the second floor.  Their "freight elevator" was barely big enough for two people and a dolly.</p><p>One of my other cages was housed in a Global Crossing facility; when they started to run out of cooling, they would hook up huge external A/C units in the parking lot and run 2ft diameter ducting to a hole in the wall.  If you happened to walk near one of these openings you'd be greeted by freezing 50mph winds.</p><p>Anybody find it odd that Exodus bought Global Crossing, who then went out of business?</p></htmltext>
<tokentext>I used to have a large cage in an Exodus colocation facility .
Turns out that if we wanted to put in an EMC Symm5 ( these are three tiles wide ) , we would have to rent a fork lift and put it through an open rollup door on the second floor .
Their " freight elevator " was barely big enough for two people and a dolly .
One of my other cages was housed in a Global Crossing facility ; when they started to run out of cooling , they would hook up huge external A/C units in the parking lot and run 2ft diameter ducting to a hole in the wall .
If you happened to walk near one of these openings you 'd be greeted by freezing 50mph winds .
Anybody find it odd that Exodus bought Global Crossing , who then went out of business ?</tokentext>
<sentencetext>I used to have a large cage in an Exodus colocation facility.
Turns out that if we wanted to put in an EMC Symm5 (these are three tiles wide), we would have to rent a fork lift and put it through an open rollup door on the second floor.
Their "freight elevator" was barely big enough for two people and a dolly.
One of my other cages was housed in a Global Crossing facility; when they started to run out of cooling, they would hook up huge external A/C units in the parking lot and run 2ft diameter ducting to a hole in the wall.
If you happened to walk near one of these openings you'd be greeted by freezing 50mph winds.
Anybody find it odd that Exodus bought Global Crossing, who then went out of business?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038568</id>
	<title>History</title>
	<author>micksam7</author>
	<datestamp>1257762600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Look at a datacenter's history [recent and past]: outages, maintenance issues, customer support, management, etc., in conjunction with their listed redundancies and capacities.</p><p>Just because they have two electrical feeds going to each server doesn't mean a random maintenance tech won't flip the wrong switch.<nobr> <wbr></nobr>:)</p></htmltext>
<tokentext>Look at a datacenter 's history [ recent and past ] : outages , maintenance issues , customer support , management , etc. , in conjunction with their listed redundancies and capacities .
Just because they have two electrical feeds going to each server does n't mean a random maintenance tech wo n't flip the wrong switch .
: )</tokentext>
<sentencetext>Look at a datacenter's history [recent and past]: outages, maintenance issues, customer support, management, etc., in conjunction with their listed redundancies and capacities.
Just because they have two electrical feeds going to each server doesn't mean a random maintenance tech won't flip the wrong switch.
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039224</id>
	<title>a horror story</title>
	<author>Anonymous</author>
	<datestamp>1257765540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I had a machine colocated in a very nice, secured facility right in the middle of a major city where all the telco wiring runs. It was awful for these reasons:<br>- they advertised 24/7 access to your equipment on the web site, then the smarmy salesperson explained how that's actually not going to happen. That should have been it right there, but I was dumb.<br>- later, they had a brief power outage due to a contractor f-ing up one day, and I was never notified. This in turn disabled my traffic shaping configs, which I intentionally do not have running upon startup. I didn't know anything was amiss until I got a huge bill for bandwidth overages. I had to fight heavily with them to overturn the charges because I had a contract with them that said 100% uptime in regards to power. They disabled my controls by not upholding their deal, and were trying to pin the results on me. Afterward I put in a simple script to email me when the machine mysteriously rebooted. The whole time they were acting like jackasses about it, then acting like they were doing me a huge favor when they gave in.<br>- Then, the worst offense. I took off for a three day weekend right when they cold-cocked my machine during a maintenance operation. I didn't notice until the following Sunday that things had been down all weekend. I had missed my opportunity to visit my machine since it was just after 6:00pm, and I'd have to wait until 9:00 the following morning to see what the hell was wrong. Or they offered to let me PAY THEM to look at it for me sooner. I declined.</p><p>When I got there, they had unplugged my machine, moved it to a new location, failed to power it back up, and had the network cable in the wrong port. All of those things were in total violation of my contract. When giving me excuses, they were saying that because the network lights were on they thought the machine was powered on.</p><p>Then it was like all hell to get them to come through on their contractual provisions when they didn't provide the guaranteed uptime and exhibited severe negligence.</p><p>I eventually got paid back what I had paid for the service that month, but not any of the reimbursement specified in the contract for exceeding downtime. And it took two months for them to return my money.</p><p>Anyone have worse stories?</p></htmltext>
<tokentext>I had a machine colocated in a very nice , secured facility right in the middle of a major city where all the telco wiring runs .
It was awful for these reasons :
- they advertised 24/7 access to your equipment on the web site , then the smarmy salesperson explained how that 's actually not going to happen .
That should have been it right there , but I was dumb .
- later , they had a brief power outage due to a contractor f-ing up one day , and I was never notified .
This in turn disabled my traffic shaping configs , which I intentionally do not have running upon startup .
I did n't know anything was amiss until I got a huge bill for bandwidth overages .
I had to fight heavily with them to overturn the charges because I had a contract with them that said 100 % uptime in regards to power .
They disabled my controls by not upholding their deal , and were trying to pin the results on me .
Afterward I put in a simple script to email me when the machine mysteriously rebooted .
The whole time they were acting like jackasses about it , then acting like they were doing me a huge favor when they gave in .
- Then , the worst offense .
I took off for a three day weekend right when they cold-cocked my machine during a maintenance operation .
I did n't notice until the following Sunday that things had been down all weekend .
I had missed my opportunity to visit my machine since it was just after 6:00pm , and I 'd have to wait until 9:00 the following morning to see what the hell was wrong .
Or they offered to let me PAY THEM to look at it for me sooner .
I declined .
When I got there , they had unplugged my machine , moved it to a new location , failed to power it back up , and had the network cable in the wrong port .
All of those things were in total violation of my contract .
When giving me excuses , they were saying that because the network lights were on they thought the machine was powered on .
Then it was like all hell to get them to come through on their contractual provisions when they did n't provide the guaranteed uptime and exhibited severe negligence .
I eventually got paid back what I had paid for the service that month , but not any of the reimbursement specified in the contract for exceeding downtime .
And it took two months for them to return my money .
Anyone have worse stories ?</tokentext>
<sentencetext>I had a machine colocated in a very nice, secured facility right in the middle of a major city where all the telco wiring runs.
It was awful for these reasons:
- they advertised 24/7 access to your equipment on the web site, then the smarmy salesperson explained how that's actually not going to happen.
That should have been it right there, but I was dumb.
- later, they had a brief power outage due to a contractor f-ing up one day, and I was never notified.
This in turn disabled my traffic shaping configs, which I intentionally do not have running upon startup.
I didn't know anything was amiss until I got a huge bill for bandwidth overages.
I had to fight heavily with them to overturn the charges because I had a contract with them that said 100% uptime in regards to power.
They disabled my controls by not upholding their deal, and were trying to pin the results on me.
Afterward I put in a simple script to email me when the machine mysteriously rebooted.
The whole time they were acting like jackasses about it, then acting like they were doing me a huge favor when they gave in.
- Then, the worst offense.
I took off for a three day weekend right when they cold-cocked my machine during a maintenance operation.
I didn't notice until the following Sunday that things had been down all weekend.
I had missed my opportunity to visit my machine since it was just after 6:00pm, and I'd have to wait until 9:00 the following morning to see what the hell was wrong.
Or they offered to let me PAY THEM to look at it for me sooner.
I declined.
When I got there, they had unplugged my machine, moved it to a new location, failed to power it back up, and had the network cable in the wrong port.
All of those things were in total violation of my contract.
When giving me excuses, they were saying that because the network lights were on they thought the machine was powered on.
Then it was like all hell to get them to come through on their contractual provisions when they didn't provide the guaranteed uptime and exhibited severe negligence.
I eventually got paid back what I had paid for the service that month, but not any of the reimbursement specified in the contract for exceeding downtime.
And it took two months for them to return my money.
Anyone have worse stories?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038994</id>
	<title>Everyone has the questions...</title>
	<author>Anonymous</author>
	<datestamp>1257764400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Everyone posts the "questions" to ask; no one posts the acceptable "answers" to those questions. Way to regurgitate uselessness, but I guess posting (even if it's useless) makes us all feel helpful/smart/educated/witty.</p><p>- Ask them how many Amps their electrical cords are tested for.<br>- Make sure you ask their EMI rating in Ohms / squared Newton times their rack cabling coefficient.</p><p>Yes, I'm mocking you...</p></htmltext>
<tokentext>Everyone posts the " questions " to ask ; no one posts the acceptable " answers " to those questions .
Way to regurgitate uselessness , but I guess posting ( even if it 's useless ) makes us all feel helpful/smart/educated/witty .
- Ask them how many Amps their electrical cords are tested for .
- Make sure you ask their EMI rating in Ohms / squared Newton times their rack cabling coefficient .
Yes , I 'm mocking you ...</tokentext>
<sentencetext>Everyone posts the "questions" to ask; no one posts the acceptable "answers" to those questions.
Way to regurgitate uselessness, but I guess posting (even if it's useless) makes us all feel helpful/smart/educated/witty.
- Ask them how many Amps their electrical cords are tested for.
- Make sure you ask their EMI rating in Ohms / squared Newton times their rack cabling coefficient.
Yes, I'm mocking you...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043212</id>
	<title>It depends</title>
	<author>ctrl-alt-canc</author>
	<datestamp>1257844140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you need a data center for your business, and you are into one of those Ponzi-like schemes, the first thing to look at is the speed of low-level disk formatting, just in case...</htmltext>
<tokentext>If you need a data center for your business , and you are into one of those Ponzi-like schemes , the first thing to look at is the speed of low-level disk formatting , just in case.. .</tokentext>
<sentencetext>If you need a data center for your business, and you are into one of those Ponzi-like schemes, the first thing to look at is the speed of low-level disk formatting, just in case...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042310</id>
	<title>watch out for power sharing agreements w/ utility</title>
	<author>Anonymous</author>
	<datestamp>1257788100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>if utility power goes out, batteries and Gen kick in....  but if the center is on gen power as part of peak load sharing with the utility... and the fuel runs out, can it auto kick over to the grid?  Happened at an internal telco data net ops site... took two weeks to get everything right...<nobr> <wbr></nobr>:-)</p></htmltext>
<tokentext>if utility power goes out , batteries and Gen kick in.... but if the center is on gen power as part of peak load sharing with the utility... and the fuel runs out , can it auto kick over to the grid ?
Happened at an internal telco data net ops site... took two weeks to get everything right... : - )</tokentext>
<sentencetext>if utility power goes out, batteries and Gen kick in....  but if the center is on gen power as part of peak load sharing with the utility... and the fuel runs out, can it auto kick over to the grid?
Happened at an internal telco data net ops site... took two weeks to get everything right... :-)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039700</id>
	<title>my list:</title>
	<author>Anonymous</author>
	<datestamp>1257768060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Some additional things I looked for recently when evaluating a colo were:</p><p>- Ease of access to the building<br>- logistics including:<br>
&nbsp; -- how is the parking situation? 24/7 parking?<br>
&nbsp; -- if i need to drop something off from my car, is it close to a loading dock?<br>
&nbsp; -- how are deliveries handled, where are they stored until i come in to use new said equipment<br>- how many years remaining on the lease (assuming the building is leased)<br>- when was the last time they checked their backup systems (power, cooling)<br>- do they have multiple paths of entry for fiber into the building? (or will 1 backhaul bring down the interwebs?)<br>- do they have crash carts? (monitor / key boards)<br>- do they have spare cables or power tools I can borrow?<br>- how sturdy are the racks bolted into the ground? (push them, see if they are telling the truth<nobr> <wbr></nobr>:) )<br>- power density per rack that is available<br>- cross connects to various providers<br>- do i "like" the place<nobr> <wbr></nobr>... this is subjective but important. talk to the NOC guys, are they nice, or do they hate their jobs, how long have they been working there, etc etc.</p><p>just a few things I evaluated when choosing a colo.</p></htmltext>
<tokentext>Some additional things I looked for recently when evaluating a colo were : - Ease of access to the building- logistics including :   -- how is the parking situation ?
24/7 parking ?
  -- if i need to drop something off from my car , is it close to a loading dock ?
  -- how are deliveries handled , where are they stored until i come in to use new said equipment- how many years remaining on the lease ( assuming the building is leased ) - when was the last time they checked their backup systems ( power , cooling ) - do they have multiple paths of entry for fiber into the building ?
( or will 1 backhaul bring down the interwebs ?
) - do they have crash carts ?
( monitor / key boards ) - do they have spare cables or power tools I can borrow ? - how sturdy are the racks bolted into the ground ?
( push them , see if they are telling the truth : ) ) - power density per rack that is available- cross connects to various providers- do i " like " the place ... this is subjective but important .
talk to the NOC guys , are they nice , or do they hate their jobs , how long have they been working there , etc etc.just a few things I evaluated when choosing a colo .</tokentext>
<sentencetext>Some additional things I looked for recently when evaluating a colo were:
- Ease of access to the building
- logistics including:
  -- how is the parking situation? 24/7 parking?
  -- if i need to drop something off from my car, is it close to a loading dock?
  -- how are deliveries handled, where are they stored until i come in to use new said equipment
- how many years remaining on the lease (assuming the building is leased)
- when was the last time they checked their backup systems (power, cooling)
- do they have multiple paths of entry for fiber into the building? (or will 1 backhaul bring down the interwebs?)
- do they have crash carts? (monitor / key boards)
- do they have spare cables or power tools I can borrow?
- how sturdy are the racks bolted into the ground? (push them, see if they are telling the truth :) )
- power density per rack that is available
- cross connects to various providers
- do i "like" the place ... this is subjective but important. talk to the NOC guys, are they nice, or do they hate their jobs, how long have they been working there, etc etc.
just a few things I evaluated when choosing a colo.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038714</id>
	<title>Datacenter Archaeology</title>
	<author>jpvlsmv</author>
	<datestamp>1257763260000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Pull floor tiles and compare the amount of obsolete technology-- Thicknet cables, VAX cluster interconnects, water chiller hookups, FDDI cables, etc. with the amount of space remaining.</p><p>Anything less than 4 inches of obsolete crud isn't worth excavating.  Leave it a few more years.</p><p>--Joe</p></htmltext>
<tokentext>Pull floor tiles and compare the amount of obsolete technology-- Thicknet cables , VAX cluster interconnects , water chiller hookups , FDDI cables , etc .
with the amount of space remaining.Anything less than 4 inches of obsolete crud is n't worth excavating .
Leave it a few more years.--Joe</tokentext>
<sentencetext>Pull floor tiles and compare the amount of obsolete technology -- Thicknet cables, VAX cluster interconnects, water chiller hookups, FDDI cables, etc. with the amount of space remaining.
Anything less than 4 inches of obsolete crud isn't worth excavating.
Leave it a few more years.
--Joe</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038788</id>
	<title>What are you evaluating?</title>
	<author>chris.knowles</author>
	<datestamp>1257763560000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>There are basically 3 perspectives from which to evaluate the Datacenter.  They're pretty well universal to any IT eval.  People, Process and Technology.  The datacenter facility itself is only one piece of the puzzle (Facility = Technology, which only accounts for a fraction of the total cost of operating a Datacenter).  There are also the people running the datacenter and how they are organized and interact with the technology, one another, and their customers (internal and external).  From a people/process standpoint, if you want to give a general "score" to them, you can assess them against the SLM maturity scale. (Read about the Gartner Maturity Model for Infrastructure and Operations)  Evaluating a datacenter is going to be a balance between the cost of operating the datacenter and the level of service you require from said datacenter.  There really isn't enough information in the question to give you a good answer.  Are you looking at evaluating the acquisition of a datacenter to grow into, are you looking for a managed services DC to host your gear with operational support?  Are you looking for rack space with pipe and power?  If you give more details to your inquiry, I'm sure the community can provide you with some great answers.</htmltext>
<tokentext>There are basically 3 perspectives from which to evaluate the Datacenter .
They 're pretty well universal to any IT eval .
People , Process and Technology .
The datacenter facility itself is only one piece of the puzzle ( Facility = Technology , which only accounts for a fraction of the total cost of operating a Datacenter ) .
There are also the people running the datacenter and how they are organized and interact with the technology , one another , and their customers ( internal and external ) .
From a people/process standpoint , if you want to give a general " score " to them , you can assess them against the SLM maturity scale .
( Read about the Gartner Maturity Model for Infrastructure and Operations ) Evaluating a datacenter is going to be a balance between the cost of operating the datacenter and the level of service you require from said datacenter .
There really is n't enough information in the question to give you a good answer .
Are you looking at evaluating the acquisition of a datacenter to grow into , are you looking for a managed services DC to host your gear with operational support ?
Are you looking for rack space with pipe and power ?
If you give more details to your inquiry , I 'm sure the community can provide you with some great answers .</tokentext>
<sentencetext>There are basically 3 perspectives from which to evaluate the Datacenter.
They're pretty well universal to any IT eval.
People, Process and Technology.
The datacenter facility itself is only one piece of the puzzle (Facility = Technology, which only accounts for a fraction of the total cost of operating a Datacenter).
There are also the people running the datacenter and how they are organized and interact with the technology, one another, and their customers (internal and external).
From a people/process standpoint, if you want to give a general "score" to them, you can assess them against the SLM maturity scale.
(Read about the Gartner Maturity Model for Infrastructure and Operations)  Evaluating a datacenter is going to be a balance between the cost of operating the datacenter and the level of service you require from said datacenter.
There really isn't enough information in the question to give you a good answer.
Are you looking at evaluating the acquisition of a datacenter to grow into, are you looking for a managed services DC to host your gear with operational support?
Are you looking for rack space with pipe and power?
If you give more details to your inquiry, I'm sure the community can provide you with some great answers.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043382</id>
	<title>don't forget</title>
	<author>Anonymous</author>
	<datestamp>1257846900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>(1) <a href="http://www.theregister.co.uk/2004/10/08/fbi\_indymedia\_raids/" title="theregister.co.uk" rel="nofollow">any history of letting police walk in and wreck stuff</a> [theregister.co.uk]</p><p>(2) <a href="http://royal.pingdom.com/2008/11/14/the-worlds-most-super-designed-data-center-fit-for-a-james-bond-villain/" title="pingdom.com" rel="nofollow">the d&eacute;cor</a> [pingdom.com]</p></htmltext>
<tokentext>( 1 ) any history of letting police walk in and wreck stuff [ theregister.co.uk ] ( 2 ) the décor [ pingdom.com ]</tokentext>
<sentencetext>(1) any history of letting police walk in and wreck stuff [theregister.co.uk]
(2) the décor [pingdom.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30069906</id>
	<title>How to find a new data center...</title>
	<author>BSG1</author>
	<datestamp>1257103740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Check out <a href="http://www.usdatacenterlist.com/" title="usdatacenterlist.com" rel="nofollow">http://www.usdatacenterlist.com/</a> [usdatacenterlist.com] - from there you can search and compare data centers on their technical and operational processes to ensure they are a good fit before you engage sales staff!!</htmltext>
<tokentext>Check out http : //www.usdatacenterlist.com/ [ usdatacenterlist.com ] - from there you can search and compare data centers on their technical and operational processes to ensure they are a good fit before you engage sales staff !
!</tokentext>
<sentencetext>Check out http://www.usdatacenterlist.com/ [usdatacenterlist.com] - from there you can search and compare data centers on their technical and operational processes to ensure they are a good fit before you engage sales staff!!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038928</id>
	<title>My Top Three</title>
	<author>Archangel Michael</author>
	<datestamp>1257764100000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>1) Hookers<br>2) Beer<br>3) Illicit drugs</p><p>Seriously, my top three are as follows<nobr> <wbr></nobr>....</p><p>1) Bandwidth Available / Oversubscription rate<br>2) Geographically different alternative location.<br>3) Disaster Planning directives.</p></htmltext>
<tokentext>1 ) Hookers2 ) Beer3 ) Illicit drugsSeriously , my top three are as follows ....1 ) Bandwidth Available / Oversubscription rate2 ) Geographically different alternative location.3 ) Disaster Planning directives .</tokentext>
<sentencetext>1) Hookers
2) Beer
3) Illicit drugs
Seriously, my top three are as follows ....
1) Bandwidth Available / Oversubscription rate
2) Geographically different alternative location.
3) Disaster Planning directives.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30205092</id>
	<title>Data Center Evaluation White Paper and Workbook</title>
	<author>Anonymous</author>
	<datestamp>1259008920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I work for FORTRUST, a data center services provider in Denver, CO. FORTRUST published a comprehensive white paper and workbook as a data center site selection resource. The white paper, Evaluating Data Center High-Availability Service Delivery, examines several key criteria such as operational process controls, service assurance, maintenance and lifecycle strategies, critical infrastructure management, and capacity planning. Download the free white paper and workbook at http://tiny.cc/yqn4C.</p></htmltext>
<tokentext>I work for FORTRUST , a data center services provider in Denver , CO. FORTRUST published a comprehensive white paper and workbook as a data center site selection resource .
The white paper , Evaluating Data Center High-Availability Service Delivery , examines several key criteria such as operational process controls , service assurance , maintenance and lifecycle strategies , critical infrastructure management , and capacity planning .
Download the free white paper and workbook at http : //tiny.cc/yqn4C .</tokentext>
<sentencetext>I work for FORTRUST, a data center services provider in Denver, CO. FORTRUST published a comprehensive white paper and workbook as a data center site selection resource.
The white paper, Evaluating Data Center High-Availability Service Delivery, examines several key criteria such as operational process controls, service assurance, maintenance and lifecycle strategies, critical infrastructure management, and capacity planning.
Download the free white paper and workbook at http://tiny.cc/yqn4C.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038964</id>
	<title>Re:Just off the top of my head</title>
	<author>JWSmythe</author>
	<datestamp>1257764280000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>
&nbsp; &nbsp; I noticed something when touring one datacenter.  They had a neat conference room that overlooked the whole datacenter.  You could see the heat rising off of one area (Google's room).  They went on and on about the wonders of their cooling, and how they had so much capacity.</p><p>
&nbsp; &nbsp; We later took the guided tour.  The person I was with was talking to our guide, and I was paying careful attention to our environment.  There were tremendous hotspots on the floor.  We're not talking about 78 degrees.  It was closer to the 90's.  Other spots were downright cold.  Why?  Because they had all this capacity, and no real planning.  The circulation was insufficient, even though the capacity was available.   A well populated rack will always be hot at the back, but it's expected that they will draw the air off of that area rather quickly.  I've even seen datacenters that enforce their hot/cold aisles, but then there isn't much of a reason for it.  There is no air return on the hot side, and it's just blowing at another aisle's cold side.</p><p>
&nbsp; &nbsp; Sometimes it's good to just walk the floor with a tech (not a salesman), and ask questions about the operation.  What kind of fiber do you have coming in?  How many providers?  How good are your generators really?  Do you test them on a regular basis?  I've found a sales minion will say there are a dozen providers coming in, but it will turn out that only one has substantial fiber, and the others are sharing that.  {sigh}  Sometimes they will have generators, but they've never test fired them.  Sometimes the tech is just frustrated at the nonsense at that datacenter, and that's indicative of how it's going to be to work with them.</p><p>
&nbsp; &nbsp; &nbsp; &nbsp;</p></htmltext>
<tokentext>    I noticed something when touring one datacenter .
They had a neat conference room that overlooked the whole datacenter .
You could see the heat rising off of one area ( Google 's room ) .
They went on and on about the wonders of their cooling , and how they had so much capacity .
    We later took the guided tour .
The person I was with was talking to our guide , and I was paying careful attention to our environment .
There were tremendous hotspots on the floor .
We 're not talking about 78 degrees .
It was closer to the 90 's .
Other spots were downright cold .
Why ? Because they had all this capacity , and no real planning .
The circulation was insufficient , even though the capacity was available .
A well populated rack will always be hot at the back , but it 's expected that they will draw the air off of that area rather quickly .
I 've even seen datacenters that enforce their hot/cold aisles , but then there is n't much of a reason for it .
There is no air return on the hot side , and it 's just blowing at another aisle 's cold side .
    Sometimes it 's good to just walk the floor with a tech ( not a salesman ) , and ask questions about the operation .
What kind of fiber do you have coming in ?
How many providers ?
How good are your generators really ?
Do you test them on a regular basis ?
I 've found a sales minion will say there are a dozen providers coming in , but it will turn out that only one has substantial fiber , and the others are sharing that .
{ sigh } Sometimes they will have generators , but they 've never test fired them .
Sometimes the tech is just frustrated at the nonsense at that datacenter , and that 's indicative of how it 's going to be to work with them .
       </tokentext>
<sentencetext>
    I noticed something when touring one datacenter.
They had a neat conference room that overlooked the whole datacenter.
You could see the heat rising off of one area (Google's room).
They went on and on about the wonders of their cooling, and how they had so much capacity.
    We later took the guided tour.
The person I was with was talking to our guide, and I was paying careful attention to our environment.
There were tremendous hotspots on the floor.
We're not talking about 78 degrees.
It was closer to the 90's.
Other spots were downright cold.
Why?  Because they had all this capacity, and no real planning.
The circulation was insufficient, even though the capacity was available.
A well populated rack will always be hot at the back, but it's expected that they will draw the air off of that area rather quickly.
I've even seen datacenters that enforce their hot/cold aisles, but then there isn't much of a reason for it.
There is no air return on the hot side, and it's just blowing at another aisle's cold side.
    Sometimes it's good to just walk the floor with a tech (not a salesman), and ask questions about the operation.
What kind of fiber do you have coming in?
How many providers?
How good are your generators really?
Do you test them on a regular basis?
I've found a sales minion will say there are a dozen providers coming in, but it will turn out that only one has substantial fiber, and the others are sharing that.
{sigh}  Sometimes they will have generators, but they've never test fired them.
Sometimes the tech is just frustrated at the nonsense at that datacenter, and that's indicative of how it's going to be to work with them.
       </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044334</id>
	<title>Re:Just off the top of my head</title>
	<author>Col. Panic</author>
	<datestamp>1257860340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>also water detection. it's pretty handy to know as soon as possible when your cooling system is leaking under the raised floor.</p></htmltext>
<tokentext>also water detection .
it 's pretty handy to know as soon as possible when your cooling system is leaking under the raised floor .</tokentext>
<sentencetext>also water detection.
it's pretty handy to know as soon as possible when your cooling system is leaking under the raised floor.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039670</id>
	<title>thoughts from long ago</title>
	<author>68882</author>
	<datestamp>1257767880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Some thoughts, having been a shift supervisor in a DC long ago...</p><p>(a) don't have the emergency poweroff &amp; halon dump switch uncovered right by the DC room's light switch. Someday someone will *hit* it...</p><p>(b) where does the water go when the 5 inch chill water main fractures?</p><p>(c) who is the cleaning staff and do they know not to turn the "computers" off when they dust? Also why are they dusting unsupervised<br>in the data center anyway?</p><p>(d) Oh sure you have multiple telco feeds that service the building from that *sole* pole over there, which that dump truck just took out.</p><p>(e) And your procedure for the phone in fire alarm is what, wait for the fire dept to axe the entry door? This was my favourite. What's that chopping noise?</p><p>(f) Lastly, if the city bomb squad runs a trial, let the staff know ahead of time because that ensures they might be able to focus after the squad leaves...</p></htmltext>
<tokentext>Some thoughts , having been a shift supervision in a DC long ago... ( a ) do n't have the emergency poweroff &amp; off halon dump switch uncovered right by the DC room 's light switch .
Someday someone will * hit * it... ( b ) where does the water go when the 5 inch chill water main fracture ?
( c ) who is the cleaning staff and do they know not to turn the " computers " off when they dust ?
Also why are they dusting unsupervisedin the data center anyway ?
( d ) Oh sure you have multiple telco feeds that service the building from that * sole * pole over there , which that dump truck just took out .
( e ) And your procedure for the phone in fire alarm is what , wait for the fire dept to axe the entry door ?
This was my favourite .
What 's that chopping noise ?
( f ) Lastly , if the city bomb squad runs a trial , let the staff know ahead of time because that ensures they might be able to focus after the squad leaves.. .</tokentext>
<sentencetext>Some thoughts, having been a shift supervisor in a DC long ago...
(a) don't have the emergency poweroff &amp; halon dump switch uncovered right by the DC room's light switch.
Someday someone will *hit* it...
(b) where does the water go when the 5 inch chill water main fractures?
(c) who is the cleaning staff and do they know not to turn the "computers" off when they dust?
Also why are they dusting unsupervised in the data center anyway?
(d) Oh sure you have multiple telco feeds that service the building from that *sole* pole over there, which that dump truck just took out.
(e) And your procedure for the phone in fire alarm is what, wait for the fire dept to axe the entry door?
This was my favourite.
What's that chopping noise?
(f) Lastly, if the city bomb squad runs a trial, let the staff know ahead of time because that ensures they might be able to focus after the squad leaves...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041752</id>
	<title>After u are done with all these rigorous questions</title>
	<author>Anonymous</author>
	<datestamp>1257781980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>After u are done with all these rigorous questions</p><p>print them out and ball them up and throw them away.</p><p>Because either you (and/or the money people) will decide based on completely unrelated criteria or the data centers won't give you half of the info you requested (maintenance records? 9/11, total power in vs current used? 9/11 and trade secret, etc) or if by some miracle they do give you the info then it means basically jack.</p><p>Take the pompous asses at 365 Main for example, if you go by all this crap that the other people here are telling you then they would be near the top of the list - N+fucking 1, etc but guess what? They had a huge outage due to, as they tell the story "commands hiding like scared bunnies in the command queue" of their uber N+1 flywheel generator systems who came scurrying out when they went to fire up the generators. (Yeah, right - we bought a $125 million colo for $2.6 million, waaaahoooo!! and the vendors and contractors were left holding the bag for the difference - but we charge you like we spend $125 million - suckers) </p><p>Meanwhile, you have a colo nearby (now web.com, previously Verio) that had not had an outage in 4 or 5 years but didn't have all the buzzwords.</p></htmltext>
<tokentext>After u are done with all these rigorous questionsprint them out and ball them up and throw them away.Because either you ( and/or the money people ) will decide based on completely unrelated criteria or the data centers wo n't give you half of the info you requested ( maintainance records ?
9/11 , total power in vs current used ?
9/11 and trade secret , etc ) or if by some miracle they do give you the info then it means basically jack.Take the pompous asses at 365 Main for example , if you go by all this crap that the other people here are telling you then they would be near the top of the list - N + fucking 1 , etc but guess what ?
They had a huge outage due to , as they tell the story " commands hiding like scared bunnies in the command queue " of their uber N + 1 flywheel generator systems who came scurrying out when they went to fire up the generators .
( Yeah , right - we bought a $ 125 million colo for $ 2.6 million , waaaahoooo ! !
and the vendors and contractors were left holding the bag for the difference - but we charge you like we spend $ 125 million - suckers ) Meanwhile , you have a colo nearby ( now web.com , previously Verio ) that had not had an outage in 4 or 5 years but did n't have all the buzzwords .</tokentext>
<sentencetext>After u are done with all these rigorous questions, print them out and ball them up and throw them away.
Because either you (and/or the money people) will decide based on completely unrelated criteria or the data centers won't give you half of the info you requested (maintenance records? 9/11, total power in vs current used? 9/11 and trade secret, etc) or if by some miracle they do give you the info then it means basically jack.
Take the pompous asses at 365 Main for example, if you go by all this crap that the other people here are telling you then they would be near the top of the list - N+fucking 1, etc but guess what?
They had a huge outage due to, as they tell the story "commands hiding like scared bunnies in the command queue" of their uber N+1 flywheel generator systems who came scurrying out when they went to fire up the generators.
(Yeah, right - we bought a $125 million colo for $2.6 million, waaaahoooo!! and the vendors and contractors were left holding the bag for the difference - but we charge you like we spend $125 million - suckers)
Meanwhile, you have a colo nearby (now web.com, previously Verio) that had not had an outage in 4 or 5 years but didn't have all the buzzwords.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039278</id>
	<title>I'd be interested in...</title>
	<author>v1</author>
	<datestamp>1257765660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>- data redundancy, offsite specifically<br>- ability to cut over?  i.e. what happens if there's an earthquake, are your services to the world down until everything is replaced and backups are restored?<br>- what do you have on hand for hot spares in the event of equipment failure?<br>- when you are in failover mode for whatever reason, how does it impact your performance?  i.e. does webmail <i>just crawl</i> until the mirror finishes rebuilding?<br>- how are your external resources?  got a plan to truck in gas for the genny if a tornado levels the local substation?  got a hotline or multiple points of expert contact available 24/7 for <i>every</i> critical piece of hardware that you can't fix yourself, regardless of how it breaks?  same for software.<br>- do you have a forensics plan in place?  i.e. if you get hacked (and don't answer that with "that can't happen"), do you know what you will do, and in what order, to preserve forensic information, stop additional damage, and clean up in an orderly fashion?  What are your legal obligations for notification, and who is your contact with the press? (and there had better only be ONE)  Do you have a specific partner waiting in the wings, all picked out if needed?  after the fact is not the time to be choosing one.<br>- if you have a failure that affects multiple services or clients, what is your priority order?  who gets their service back first?<br>- do you have a set "fire schedule" so people know the specific additional hours they will be required to work in an emergency?  Are you going to run short on manpower in a specific area because you're already overextended by day-to-day operations?<br>- are there any people who are single points of failure?  What if Bob gets hit by a bus?  what if Dave is the only one who knows the firewall and gets hospitalized when it explodes while he's working on it?  
cross-train, cross-train, cross-train.<br>- not sure if it was covered above, but documentation, documentation, and more documentation.  How consistent is it?  Does every network map look like it was drawn with a different drafting app by a different person?  Is all of your documentation collected together and well organized?  multiple copies in various places?  are some things much better documented than others?<br>- server rebuild lists.  do you have a step-by-step set of instructions for EVERY critical box that will take it from a freshly formatted HD back to production, that any of a dozen of your monkeys can follow, with no "well, wasn't that obvious?" missing steps?  And how often do you test these?  Walk in one morning, drop a new box on a desk, say "WEB15 just got STOLEN.  Rebuild it.  Fast.  Starting NOW.", hit your stopwatch, and see what you get.  You do this from time to time, right?<br>- do you have a command structure that keeps differing opinions from slowing things down in a crisis?  when it comes right down to it, there needs to be one clear person or chain of command with final say in a crisis.</p><p>I'm sure I'm missing some things but that's a good start for ya.</p></htmltext>
<tokenext>- data redundancy , offsite specifically- ability to cut over ?
ie what happens if there 's an earthquake , are your services to the world down until everything is replaced and backups are restored ? - what do you have on hand for hot spares in the event of equipment failure ? - when you are in failover mode for whatever reason , how does it impact your performance ?
ie does webmail just crawl until the mirror finishes rebuilding ? - how are your external resources ?
got a plan to truck in gas for the genny if a tornado levels the local substation ?
got a hotline or multiple points of expert contact available 24/7 for every critical piece of hardware that you ca n't fix yourself regardless of how it breaks ?
same for software.- do you have a forensics plan in place ?
ie if you get hacked , ( and do n't answer that with " that ca n't happen " ) do you have any idea what you will do and in what order , to preserve forensic information , stop additional damage , and orderly cleanup ?
What are your legal obligations for notification , who is your contact with the press ?
( and there better only be ONE ) Do you have a specific partner waiting in the wing all picked out if needed ?
after the fact is not the time to be choosing one.- if you have a failure that affects multiple services or clients , what is your priority order ?
who gets their service back first ? - do you have a set " fire schedule " that people know specific additional hours they will be required to work in the event of an emergency situation ?
Are you going to run short on manpower in a specific area because you 're already overextended by day to day operations in some aspect ? - are there any people that are single points of failure ?
What if Bob gets hit by a bus ?
what if Dave is the only one that knows the firewall and gets hospitalized when it explodes while he 's working on it ?
crosstrain crosstrain crosstrain.- not sure if it was covered above but documentation , documentation , and more documentation .
How consistent is it ?
Does every network map look like it was written with a different drafting app by a different person ?
Is all of your documentation collected together and well organized ?
multiple copies in various places ?
are some things much better documented than others ? - server rebuild lists .
do you have a step by step set of instructions for EVERY critical box that will take it from a freshly formatted HD to back in production , that any of a dozen of your monkeys can follow , with no " well was n't that obvious ?
" missing steps ?
And how often do you test these ?
Walk in one morning and drop a new box on a desk and say " WEB15 just got STOLEN .
Rebuild it .
Fast. Starting NOW .
" and hit your stopwatch and see what you get .
You do this from time to time , right ? - do you have a structured command that avoids differing opinions in a crisis slowing things down ?
when it comes right down to it there needs to be one clear person or command structure that has final say in a crisis.I 'm sure I 'm missing some things but that 's a good start for ya .</tokentext>
<sentencetext>- data redundancy, offsite specifically- ability to cut over?
ie what happens if there's an earthquake, are your services to the world down until everything is replaced and backups are restored?- what do you have on hand for hot spares in the event of equipment failure?- when you are in failover mode for whatever reason, how does it impact your performance?
ie does webmail just crawl until the mirror finishes rebuilding?- how are your external resources?
got a plan to truck in gas for the genny if a tornado levels the local substation?
got a hotline or multiple points of expert contact available 24/7 for every critical piece of hardware that you can't fix yourself regardless of how it breaks?
same for software.- do you have a forensics plan in place?
ie if you get hacked, (and don't answer that with "that can't happen") do you have any idea what you will do and in what order, to preserve forensic information, stop additional damage, and orderly cleanup?
What are your legal obligations for notification, who is your contact with the press?
(and there better only be ONE)  Do you have a specific partner waiting in the wing all picked out if needed?
after the fact is not the time to be choosing one.- if you have a failure that affects multiple services or clients, what is your priority order?
who gets their service back first?- do you have a set "fire schedule" that people know specific additional hours they will be required to work in the event of an emergency situation?
Are you going to run short on manpower in a specific area because you're already overextended by day to day operations in some aspect?- are there any people that are single points of failure?
What if Bob gets hit by a bus?
what if Dave is the only one that knows the firewall and gets hospitalized when it explodes while he's working on it?
crosstrain crosstrain crosstrain.- not sure if it was covered above but documentation, documentation, and more documentation.
How consistent is it?
Does every network map look like it was written with a different drafting app by a different person?
Is all of your documentation collected together and well organized?
multiple copies in various places?
are some things much better documented than others?- server rebuild lists.
do you have a step by step set of instructions for EVERY critical box that will take it from a freshly formatted HD to back in production, that any of a dozen of your monkeys can follow, with no "well wasn't that obvious?
" missing steps?
And how often do you test these?
Walk in one morning and drop a new box on a desk and say "WEB15 just got STOLEN.
Rebuild it.
Fast.  Starting NOW.
" and hit your stopwatch and see what you get.
You do this from time to time, right?- do you have a structured command that avoids differing opinions in a crisis slowing things down?
when it comes right down to it there needs to be one clear person or command structure that has final say in a crisis.I'm sure I'm missing some things but that's a good start for ya.</sentencetext>
</comment>
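The "what is your priority order? who gets their service back first?" question in the comment above can be made concrete with an ordinary priority queue. A minimal sketch — the service names and priority numbers below are invented for illustration, where a lower number means restored first:

```python
import heapq

# Hypothetical outage list: (priority, service).  In a real plan this
# ordering would come from the SLAs agreed with each client up front,
# not be decided mid-crisis.
outages = [
    (3, "internal wiki"),
    (1, "customer web frontend"),
    (2, "email"),
]
heapq.heapify(outages)  # smallest priority number bubbles to the front

restore_order = []
while outages:
    priority, service = heapq.heappop(outages)
    restore_order.append(service)

print(restore_order)  # customer-facing services come back first
```

The point is not the data structure but that the ordering is written down before the outage, so nobody argues about it at 3 a.m.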
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992</id>
	<title>Re:Just off the top of my head</title>
	<author>Sandbags</author>
	<datestamp>1257764400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>- Raised floor is certainly important, and a given.  Check<br>- Cable management above AND below the floor.  This is not an either-or...  Check<br>- Cooling capacity is hard to judge, and should be scalable.  Redundancy is often overlooked but is often even more important than capacity...  Check<br>- Power quality:  never seen a big datacenter without a Liebert, or at least a UPS in every rack.  Power does not have to be conditioned except between the UPS and the machines/devices.  A whole-data-center power conditioner is often more efficient, but unnecessary for the little guys.  either way - check.<br>- Age is irrelevant as long as it's under support.  If it's not, replace it.  Generators need to be run several times a year to validate their condition, and also to grease the innards...  Seen too many good generators get kicked on and fail an hour later because the oil hadn't been changed in 3 years....<br>- Outages should be tracked by system, rack row, and power distro.  When systems seem to be going down more frequently in one area, there's usually an underlying reason...  As Google recently proved for us all, do not ASSUME all is well; routine diagnostics including memory scans should be performed on ALL hardware.  Even ECC RAM deteriorates with age (rapidly) and needs to be part of a maintenance testing and replacement policy - Check.<br>- Fire suppression is usually part of your building codes, and a given, as are the routine checks (at least annually) required by law.</p><p>In addition, we deploy:<br>- Man traps on all entrances to data centers.  You go in one door, it closes, then you authenticate to a second door.  A pressure plate ensures only one person goes in/out at a time (and if it's tripped, a security guy looking at a screen has to override).<br>- Full 24x7 video surveillance of the data centers.<br>- in/out logs for all equipment.  Taking a device in/out of a datacenter requires it being logged in a book (by a designated person).  
This is for anything the size of a disk/tape and larger.  All drive bays are audited nightly by security, and if drives go missing, security reviews the access logs and server room security footage to see who might have taken them.<br>- clear and consistent labeling systems for racks, shelves, cables and systems.<br>- pre-cable just about everything to row-level redundant switches, and have no cabling from server to server that doesn't pass through a rack/row switch first.  Row switches connect to distro switches.  This keeps cabling simple and predictable.<br>- Color-coded cabling: we use 1 color for redundant cabling (indicating there should be 2 of these connected to the server at all times, to separate cards in the backplane and separate switches to boot), a separate color for generic gigabit connections, another color for DSView, another color for the out-of-band management network(s), another color for heartbeat cables, and yet another for non-ethernet (T1/PRI/etc).  Other colors are used in some areas to designate 100m connections, special connectivity, security enclave barriers, and non-fiber switch-to-switch connections.  Every cable is labeled at both ends and every 6-8 feet in between.<br>- FULLY REDUNDANT POWER.  It's not enough to have clean power, a good UPS, and a generator.  In a large datacenter (more than a few rows, or anything truly mission critical), you should have 2 separate power companies, 2 separate generators, and 2 fully segregated power systems at the datacenter, room, row, and rack levels.  in each datacenter we use 2 Liebert mains, each row has a separate distribution unit connected to a different main, and each rack has 4 PDUs (2 to each distro).  Every server is connected to 2 separate PDUs, run all the way back to 2 completely independent power grids.  For a deployment of 50 servers or so this is big-time overkill.  We have over 3500 servers; we need this...  
We cannot risk a PSU failure taking out whole racks, each of which may serve dozens of other systems.</p></htmltext>
<tokenext>- Raised floor is certainly important , and a given .
Check- Cable management above AND below the floor .
This is not an either-or... Check- Cooling capacity is hard to judge , should be scalable .
Redundancy is often overlooked but is often even more important that capacity... Check- Power quality : never seen a big datacenter without a Liebert , or at least UPS in every rack .
Power does not have the be contitioned except between the UPS and the machines/devices .
A whole data center power conditioner is often more efficient , but unnecessary for the little guys .
either way - check.- Age is irrelevent as long as it 's under support .
If it 's not , replace it .
Generators need to be run several times a year to validate their condition , and also to grease the innards... See too many good generators get kicked on and fail an hour later because the oil hand't been changed in 3 years....- Outages should be tracked , by system , rack row , and power distro .
When system seem to be going down more frequently in one area , there 's usually an underlying reason... As Google recently proved as well for us all , do not ASSUME all is well , routine disgnostics including memory scans should be performed on ALL hardware .
Even ECC RAM deteriorates with age ( rapidly ) and needs to be part of a maintenance testing and replacement policy - Check.- Fire suppression is usually part of your building codes , and a given , as is the routine checks ( at least anually ) by law.In addition , we deploy : - Man traps on all enterences to data centers .
You go in one door , it closes , then you authenticate to a second door .
A pressure plate ensures only one person goes in/out at a time ( and it it 's tripped , a scurity guy looking at a screen has to override ) .- Full 24x7 video surveilance of the data centers.- in/out logs for all equipment .
To take a device in/out of a datacenter requires it being logged in a book ( by a designated person ) .
This is for anything the size of a disk/tape and larger .
All drive bays are audited nightly by security and if drives go missing , security reviews the access logs and server room security footage to see who might have taken them.- clear and consistent labeling systems for rack , shelves , cables and systems.- pre-cable just about everything to row level redundant switches , and have no cabling from server to other servers not passed through a rack/row switch first .
Row switches connect to distro switches .
This ensures cabling is simple , and predictable.- Colorcoded cabling : we use 1 color for redundant cabling ( indicating their should be 2 of these connected to the server at all times , and to seperate cards in the backplane and seperate switches to boot ) , a seperate color for generic gigabit connections , another color for DS View , another color the out management network ( s ) , another color for heartbeat cables , and yet another for non-ethernet ( T1/PRI/etc ) .
Other colors are used in some areas to designate 100m connections , special connectivity , or security enclave barriers , and non-fiber switch-to-switch connections .
Every cable is labled at both ends and every 6-8 feet inbetween.- FULLY REDUNDANT POWER .
It 's not enough to have clean poewr , and good UPS and a generator .
In a large datacenter ( more than a few rows , or anything truly mission critical ) , you should have 2 seperate power companies , 2 seperate generators , and 2 fully segregated power systems at the datcenter , room , row , and rack levels .
in each datacenter we use 2 Liebert mains , each row has a seperate distribution unit connected to a differnt main , and each rack has 4 PDUs ( 2 to each distro ) .
Every server is connected to 2 seperat PDUs , run all the way back to 2 completely independent power grids .
For a deployment of 50 servers or so this is big time overkill .
We have over 3500 servers , we need this... We can not rely on a PSU failure taking out racks at a time which may server dozens of other systems each .</tokentext>
<sentencetext>- Raised floor is certainly important, and a given.
Check- Cable management above AND below the floor.
This is not an either-or...  Check- Cooling capacity is hard to judge, should be scalable.
Redundancy is often overlooked but is often even more important that capacity...  Check- Power quality:  never seen a big datacenter without a Liebert, or at least UPS in every rack.
Power does not have the be contitioned except between the UPS and the machines/devices.
A whole data center power conditioner is often more efficient, but unnecessary for the little guys.
either way - check.- Age is irrelevent as long as it's under support.
If it's not, replace it.
Generators need to be run several times a year to validate their condition, and also to grease the innards...  See too many good generators get kicked on and fail an hour later because the oil hand't been changed in 3 years....- Outages should be tracked, by system, rack row, and power distro.
When system seem to be going down more frequently in one area, there's usually an underlying reason...  As Google recently proved as well for us all, do not ASSUME all is well, routine disgnostics including memory scans should be performed on ALL hardware.
Even ECC RAM deteriorates with age (rapidly) and needs to be part of a maintenance testing and replacement policy - Check.- Fire suppression is usually part of your building codes, and a given, as is the routine checks (at least anually) by law.In addition, we deploy:- Man traps on all enterences to data centers.
You go in one door, it closes, then you authenticate to a second door.
A pressure plate ensures only one person goes in/out at a time (and it it's tripped, a scurity guy looking at a screen has to override).- Full 24x7 video surveilance of the data centers.- in/out logs for all equipment.
To take a device in/out of a datacenter requires it being logged in a book (by a designated person).
This is for anything the size of a disk/tape and larger.
All drive bays are audited nightly by security and if drives go missing, security reviews the access logs and server room security footage to see who might have taken them.- clear and consistent labeling systems for rack, shelves, cables and systems.- pre-cable just about everything to row level redundant switches, and have no cabling from server to other servers not passed through a rack/row switch first.
Row switches connect to distro switches.
This ensures cabling is simple, and predictable.- Colorcoded cabling: we use 1 color for redundant cabling (indicating their should be 2 of these connected to the server at all times, and to seperate cards in the backplane and seperate switches to boot), a seperate color for generic gigabit connections, another color for DS View, another color the out management network(s), another color for heartbeat cables, and yet another for non-ethernet (T1/PRI/etc).
Other colors are used in some areas to designate 100m connections, special connectivity, or security enclave barriers, and non-fiber switch-to-switch connections.
Every cable is labled at both ends and every 6-8 feet inbetween.- FULLY REDUNDANT POWER.
It's not enough to have clean poewr, and good UPS and a generator.
In a large datacenter (more than a few rows, or anything truly mission critical), you should have 2 seperate power companies, 2 seperate generators, and 2 fully segregated power systems at the datcenter, room, row, and rack levels.
in each datacenter we use 2 Liebert mains, each row has a seperate distribution unit connected to a differnt main, and each rack has 4 PDUs (2 to each distro).
Every server is connected to 2 seperat PDUs, run all the way back to 2 completely independent power grids.
For a deployment of 50 servers or so this is big time overkill.
We have over 3500 servers, we need this...  We can not rely on a PSU failure taking out racks at a time which may server dozens of other systems each.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044060</id>
	<title>What you want is two (or more) data centers.</title>
	<author>cenc</author>
	<datestamp>1257856980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I am sorry, but all these people going on and on about what you want in a data center are missing the unforeseeable. And the only way to cover that is redundancy. What you want is two or more different data centers, in two or more distinct regions (ideally of the world).</p><p>Ask yourself: what would you want running if the entire city or region was nuked, had an earthquake, was hit by a tsunami, or had an asteroid dropped on it? How about if an airplane flies into it?<br>What happens if the regional power grid or network is out for days, weeks, or months?  Ask all the guys recently in Asia or the Middle East about underwater data cables being cut, and what they wish they had. All of the above have happened, and will likely happen again. Do you really want all your eggs in one basket?</p><p>The only way to be sure is to diversify your data over as large a geographical area as possible.</p><p>I will still take a larger number of lower-quality but diversified data centers over a single high-quality data center any day. Of course, having both helps.</p><p>By the way, I keep my data on four different machines, in four different physical locations, in two countries, on two continents. I figure any disaster that takes all four down will be big enough that I just don't give a damn anymore.</p></htmltext>
<tokenext>I am sorry , but all these people going on and on and on about what you want in a data center are missing the unforeseeable .
And the only way to do that is redundancy .
What you want is two or more different data centers , in two or more distinct regions ( ideally of the World ) .Ask yourself , what would you want running if the entire city or region was nuked , had an earthquake , was hit by a Tsunami , or an asteroid dropped on it ?
How about if an airplane flies in to it ? What happens if the regional power grid or network is out for days , weeks , or months ?
Ask all the guys recently in Asia or the middle east about underwater data cables being cut , and what they wish they had .
All the above have happened , and will likely happen again .
Do you really want all your eggs in one basket ? The only way to be sure is to diversify your data over as large a geographical area as possible.I will still take larger numbers of lower quality but diversified data centers in numbers , over a single high quality data center any day .
Of course , having both helps.By the way I keep my data on four different machines , located in four different physical locations in two countries , on two continents .
I figure any disaster that will take all four down , will be one sufficiently big that I just do n't give a dam anymore .</tokentext>
<sentencetext>I am sorry, but all these people going on and on and on about what you want in a data center are missing the unforeseeable.
And the only way to do that is redundancy.
What you want is two or more different data centers, in two or more distinct regions (ideally of the World).Ask yourself, what would you want running if the entire city or region was nuked, had an earthquake, was hit by a Tsunami, or an asteroid dropped on it?
How about if an airplane flies in to it?What happens if the regional power grid or network is out for days, weeks, or months?
Ask all the guys recently in Asia or the middle east about underwater data cables being cut, and what they wish they had.
All the above have happened, and will likely happen again.
Do you really want all your eggs in one basket?The only way to be sure is to diversify your data over as large a geographical area as possible.I will still take larger numbers of lower quality but diversified data centers in numbers, over a single high  quality data center any day.
Of course, having both helps.By the way I keep my data on four different machines, located in four different physical locations in two countries, on two continents.
I figure any disaster that will take all four down, will be one sufficiently big that I just don't give a dam anymore.</sentencetext>
</comment>
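The geographic-diversity argument in the comment above has simple arithmetic behind it. A back-of-the-envelope sketch, assuming site failures are independent — which real correlated disasters (a regional grid outage, the cut sea cables the commenter mentions) can violate, so treat this as an optimistic bound, not a guarantee:

```python
# If one site is up with probability `availability`, the chance that
# every one of `sites` such sites is down at the same moment is the
# product of their individual downtime probabilities.
def all_down_probability(availability: float, sites: int) -> float:
    """Chance that all `sites` independent, identical sites are down at once."""
    return (1.0 - availability) ** sites

# A single 99%-available site is dark 1% of the time; four such
# independent sites are all dark only (0.01)**4 = 1e-8 of the time.
single = all_down_probability(0.99, 1)
four = all_down_probability(0.99, 4)
```

This is why several mediocre-but-separated sites can beat one excellent site: the exponent does the work, as long as the failures really are independent.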
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039058</id>
	<title>Personnel</title>
	<author>Anonymous</author>
	<datestamp>1257764760000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>More important than the technology are the policies and training of the personnel running the operation. It will fail, eventually: it always does, no matter how well it's designed or what promises of infinite uptime were made. So walk into the data center and count the number of people wearing hiking boots, divide by the number of racks, and there you go. The most grizzled-looking guy wearing hiking boots usually knows everything. He also usually has a lighter and a screwdriver if you ask.</p><p>I don't know why this is...</p></htmltext>
<tokenext>More important than the technology is the policies and training of the personnel running the operation .
It will fail , eventually : It always does , no matter how well its designed or what with promises of infinite uptime .
So walk into the data center and count the number of people wearing hiking boots , divide by the number of racks , and there you go .
The most grizzly looking guy wearing hiking boots usually knows everything .
He also usually has a lighter and a screwdriver if you ask.I do n't know why this is.. .</tokentext>
<sentencetext>More important than the technology is the policies and training of the personnel running the operation.
It will fail, eventually: It always does, no matter how well its designed or what with promises of infinite uptime.
So walk into the data center and count the number of people wearing hiking boots, divide by the number of racks, and there you go.
The most grizzly looking guy wearing hiking boots usually knows everything.
He also usually has a lighter and a screwdriver if you ask.I don't know why this is...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041550</id>
	<title>Re:Just off the top of my head</title>
	<author>kilodelta</author>
	<datestamp>1257780000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Also - data centers do move. I know, I've been through a couple myself. Make sure the movers are bonded, check their references, and insure both the equipment and the cost of recovering lost data. <br> <br>
One move was government systems. One of the horror stories I heard when checking with other state agencies was that in one case the movers dropped an entire rack of servers, destroying all of them.
<br> <br>
Our moving company un-racked every server, wrapped each in padding and a blanket, and placed it in a stable rolling cart. Only 3 servers per cart.
<br> <br>
The move went off without a hitch. However, the facility we moved into was still under construction, and we had a brand new, fully populated HP4108 switch get fried because the electricians shorted the power supply while running CATV lines. Apparently a piece of copper fell in; the electrician heard a snap, and we noticed our network going dark.
<br> <br>
Luckily I could get some spare switches from another state agency. And the building owner was good: he paid the $1,500, plus $150 in shipping charges, to get us a new power supply for the HP switch the next day. We also chipped in $1,500 to get the redundant power supply.
<br> <br>
Disaster recovery is clutch. We had a web server crash hard that didn't have solid backups or documentation. So we put policies in place to document and back up everything. We ended up using an rsnapshot server for the purpose, and that made life so easy.</htmltext>
<tokenext>Also - data centers do move .
I know , been through a couple myself .
Make sure the movers are bonded , check references for the movers , and insure both the equipment and costs to recover lost data .
One move was government systems .
One of the horror stories I heard back when checking with other state agencies was that in one case movers dropped an entire rack of servers , destroying all .
Our moving company un-racked every server , then wrapped in padding and blanket and placed in a stable rolling cart .
Only 3 servers per cart .
Move went off without a hitch .
However the facility we moved into was still under construction and we had a brand new fully populated HP4108 switch that got fried because the electricians shorted the power supply while running CATV lines .
Apparently a piece of copper fell in and the electrician heard a snap and we noticed our network going dark .
Luckily I could get some spare switches from another state agency .
And the building owner was good , he paid the $ 1,500 plus $ 150 in shipping charges to get us a new power supply for the HP switch the next day .
We also chipped in $ 1,500 to get the redundant power supply .
Disaster recovery is clutch .
We had a web server crash hard that did n't have solid backups or documentation .
So we put policies in place to document and backup everything .
Ended up using an rsnapshot server for the purpose and that made life so easy .</tokentext>
<sentencetext>Also - data centers do move.
I know, been through a couple myself.
Make sure the movers are bonded, check references for the movers, and insure both the equipment and costs to recover lost data.
One move was government systems.
One of the horror stories I heard back when checking with other state agencies was that in one case movers dropped an entire rack of servers, destroying all.
Our moving company un-racked every server, then wrapped in padding and blanket and placed in a stable rolling cart.
Only 3 servers per cart.
Move went off without a hitch.
However the facility we moved into was still under construction and we had a brand new fully populated HP4108 switch that got fried because the electricians shorted the power supply while running CATV lines.
Apparently a piece of copper fell in and the electrician heard a snap and we noticed our network going dark.
Luckily I could get some spare switches from another state agency.
And the building owner was good, he paid the $1,500 plus $150 in shipping charges to get us a new power supply for the HP switch the next day.
We also chipped in $1,500 to get the redundant power supply.
Disaster recovery is clutch.
We had a web server crash hard that didn't have solid backups or documentation.
So we put policies in place to document and backup everything.
Ended up using an rsnapshot server for the purpose and that made life so easy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
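For readers who haven't used the rsnapshot setup this comment ends on, a minimal config sketch. The paths, hostnames, and retention counts below are invented for illustration, and rsnapshot's config file requires literal tabs (not spaces) between fields:

```
# /etc/rsnapshot.conf (fragment) -- fields are TAB-separated
config_version	1.2
snapshot_root	/backup/snapshots/

# Rotating retention: keep 7 daily and 4 weekly snapshots
retain	daily	7
retain	weekly	4

# Hypothetical hosts to pull over rsync/ssh
backup	root@web01:/etc/	web01/
backup	root@web01:/var/www/	web01/
```

Cron then invokes `rsnapshot daily` and `rsnapshot weekly`; because unchanged files are hard-linked between snapshots, every snapshot browses like a full copy while only the deltas cost disk, which is what makes it so painless for the "back up and document everything" policy described above.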
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30055308</id>
	<title>Re:Just off the top of my head</title>
	<author>ZerdZerd</author>
	<datestamp>1257867660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How do you get a guided tour at a data center?</p></htmltext>
<tokenext>How do you get a guided tour at a data center ?</tokentext>
<sentencetext>How do you get a guided tour at a data center?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038964</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041896</id>
	<title>Re:I'm going to turn this around.</title>
	<author>SHaFT7</author>
	<datestamp>1257783600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It is nice to read a comment by someone who definitely seems to know what they are talking about. Well written!</htmltext>
<tokenext>It is nice to read a comment by someone who definitely seems to know what they are talking about .
Well written !</tokentext>
<sentencetext>It is nice to read a comment by someone who definitely seems to know what they are talking about.
Well written!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039136</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040284</id>
	<title>Carrier Neutral or Inclusive</title>
	<author>dracocat</author>
	<datestamp>1257771000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Depending on your uptime needs and size, also consider whether you want your Internet Access included as part of your Data Center or whether you want a carrier neutral facility.  Many places just lump data access in with the colocation space and you get an IP.</p><p>Other places, sometimes called a "hotel operator", simply rent you space and power, after which you can connect to one of usually a couple hundred ISPs that are cross-connecting in their meet-me room.</p><p>Also, don't know if you are starting or moving.  If you are just starting, be sure you look very closely into simply renting some cloud space.  I admit I have been skeptical of it for a long time, but I am now a convert.  Sure it is more expensive than buying your own servers and hosting them, but redundancy and capacity planning are almost eliminated.</p></htmltext>
<tokenext>Depending on your uptime needs and size , also consider weather you want your Internet Access included as part of your Data Center or whether you want a carrier neutral facility .
Many places just lump data access in with the colocation space and you get an ip.Other places , sometimes called a " hotel operator " simply rent you space and power , after which you can connect to one of usually a couple hundred ISPs that are cross-connecting in their meet-me room.Also , do n't know if you are starting or moving .
If you are just starting , be sure you look very closely into simply renting some cloud space .
I admit I have been skeptical of it for a long time , but I am now a convert .
Sure it is more expensive than buying your own servers and hosting them , but redundancy and capacity planning are almost eliminated .</tokentext>
<sentencetext>Depending on your uptime needs and size, also consider whether you want your Internet Access included as part of your Data Center or whether you want a carrier neutral facility.
Many places just lump data access in with the colocation space and you get an IP.
Other places, sometimes called a "hotel operator", simply rent you space and power, after which you can connect to one of usually a couple hundred ISPs that are cross-connecting in their meet-me room.
Also, don't know if you are starting or moving.
If you are just starting, be sure you look very closely into simply renting some cloud space.
I admit I have been skeptical of it for a long time, but I am now a convert.
Sure it is more expensive than buying your own servers and hosting them, but redundancy and capacity planning are almost eliminated.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039848</id>
	<title>Lol</title>
	<author>Anonymous</author>
	<datestamp>1257769020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>How <i>dont</i> you evaluate a data center</htmltext>
<tokenext>How dont you evaluate a data center</tokentext>
<sentencetext>How dont you evaluate a data center</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042398</id>
	<title>Data security paranoia</title>
	<author>turing_m</author>
	<datestamp>1257789240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you have commercial information that you absolutely cannot allow to fall into the wrong hands (or be accidentally deleted, corrupted, not <a href="http://chucksblog.emc.com/chucks_blog/2009/10/can-you-trust-your-cloud.html" title="emc.com">backed up</a> [emc.com], whatever), is storing that data in a data center ever really acceptable? I would think not, but I'd like to hear someone else's opinion. Has anyone here done things DIY for this very reason?</htmltext>
<tokenext>If you have commercial information that you absolutely can not allow to fall into the wrong hands ( or accidentally deleted , corrupted , not backed up [ emc.com ] , whatever ) , is storing that data in a data center ever really acceptable ?
I would think not , but I 'd like to hear someone else 's opinion .
Has anyone here done things DIY for this very reason ?</tokentext>
<sentencetext>If you have commercial information that you absolutely cannot allow to fall into the wrong hands (or accidentally deleted, corrupted, not backed up [emc.com], whatever), is storing that data in a data center ever really acceptable?
I would think not, but I'd like to hear someone else's opinion.
Has anyone here done things DIY for this very reason?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039060</id>
	<title>a few things to think about</title>
	<author>Anonymous</author>
	<datestamp>1257764760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Security:<br>manned or unmanned 24x7<br>Closed Circuit cameras<br>How people are granted access to the facility<br>Do you get a cage, a rack, or space in a rack?<br>If a rack or space in a rack, how is your equipment secured?<br>How do you grant access to vendors if they come to swap a hard drive or motherboard for you?</p><p>Facility:<br>How close are they to their maximum power consumption?<br>How do they propose to scale once they reach maximum?<br>KW per Rack limits<br>How many power grids is the facility tied to?<br>Localized fire suppressant or general "drench them all with water"?<br>What kind of backup power generators and do they have access to fuel?<br>Do they have a priority relationship with fuel suppliers in the event of an emergency?<br>Do they have contiguous space available?  If so, how much?<br>What is their plan for growth if they fill all available floor space?</p><p>Network:<br>How many carriers are present in the facility?<br>Can you bring your own in, or do you have to share?<br>What is the cost for cross-connects?</p><p>Acct Mgmt &amp; Processes:<br>Billing process<br>Do you have a dedicated account team?<br>What is the process for dealing with SLA violations?</p><p>Hands-on:<br>How many hours and what skillsets come with the contract?<br>What is the hourly rate to do simple tasks like swap tapes?</p></htmltext>
<tokenext>Security : manned or unmanned 24x7Closed Circuit camerasHow people are granted access to the facilityDo you get a cage , a rack , or space in a rack ? If a rack or space in a rack , how is your equipment secured ? How do you grant access to vendors if they come to swap a hard drive or motherboard for you ? Facility : How close are they to their maximum power consumption ? How do they propose to scale once they reach maximum ? KW per Rack limitsHow many power grids is the facility tied to ? Localized fire suppressant or general " drench them all with water " ? What kind of backup power generators and do they have access to fuel ? Do they have a priority relationship with fuel suppliers in the event of an emergency ? Do they have contiguous space available ?
If so , how much ? What is their plan for growth if they fill all available floor spaceNetwork : How many carriers are present in the facility ? Can you bring your own in , or do you have to share ? what is the cost for cross-connects ? Acct Mgmt &amp; Processes : Billing processDo you have a dedicated account team ? What is the process for dealing with SLA violations ? Hands-on : How many hours and what skillsets come with the contract ? What is the hourly rate to do simple tasks like swap tapes ?</tokentext>
<sentencetext>Security:
manned or unmanned 24x7
Closed Circuit cameras
How people are granted access to the facility
Do you get a cage, a rack, or space in a rack?
If a rack or space in a rack, how is your equipment secured?
How do you grant access to vendors if they come to swap a hard drive or motherboard for you?
Facility:
How close are they to their maximum power consumption?
How do they propose to scale once they reach maximum?
KW per Rack limits
How many power grids is the facility tied to?
Localized fire suppressant or general "drench them all with water"?
What kind of backup power generators and do they have access to fuel?
Do they have a priority relationship with fuel suppliers in the event of an emergency?
Do they have contiguous space available? If so, how much?
What is their plan for growth if they fill all available floor space?
Network:
How many carriers are present in the facility?
Can you bring your own in, or do you have to share?
What is the cost for cross-connects?
Acct Mgmt &amp; Processes:
Billing process
Do you have a dedicated account team?
What is the process for dealing with SLA violations?
Hands-on:
How many hours and what skillsets come with the contract?
What is the hourly rate to do simple tasks like swap tapes?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038584</id>
	<title>attack it</title>
	<author>Anonymous</author>
	<datestamp>1257762660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>set it on fire, throw floods at it, generate tornados, then top it off with a nice earthquake.</p></htmltext>
<tokenext>set it on fire , throw floods at it , generate tornados , then top it off with a nice earthquake .</tokentext>
<sentencetext>set it on fire, throw floods at it, generate tornados, then top it off with a nice earthquake.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042770</id>
	<title>Re:Just off the top of my head</title>
	<author>zonker</author>
	<datestamp>1257794280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>One thing I learned from the EDS failure back in 1999 or 2000 (can't remember exactly): a series of bad storms in Georgia flooded their main data center.  It led to a nationwide failure of ATM machines that lasted several days.  Their data center was modern and had everything it should have had.  Except it wasn't above the flood zone.  BIG mistake, and it cost the bank I worked IT for a <i>lot</i> of grief.</p><p>So having an idea of the physical location can be just as important as things like how many 9s their service record shows.</p></htmltext>
<tokenext>One thing I learned about from the EDS failure back in 1999 or 2000 ( ca n't remember exactly ) but there was a series of bad storms in Georgia that flooded their main data center .
It led to a nationwide failure of ATM machines that lasted several days .
Their data center was modern and had everything it should have had .
Except it was n't above the flood zone .
BIG mistake , and it cost the bank I worked IT for a lot of grief .
So having an idea of the physical location can be just as important as things like how many 9s their service record shows .</tokentext>
<sentencetext>One thing I learned from the EDS failure back in 1999 or 2000 (can't remember exactly): a series of bad storms in Georgia flooded their main data center.
It led to a nationwide failure of ATM machines that lasted several days.
Their data center was modern and had everything it should have had.
Except it wasn't above the flood zone.
BIG mistake, and it cost the bank I worked IT for a lot of grief.
So having an idea of the physical location can be just as important as things like how many 9s their service record shows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038726</id>
	<title>I evaluate it dusly...</title>
	<author>Anonymous</author>
	<datestamp>1257763260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>... If my direct manager evaluates it as good, then im good. If my direct supervisor evaluates it as fucked, then im fucked.</htmltext>
<tokenext>... If my direct manager evaluates it as good , then im good .
If my direct supervisor evaluates it as fucked , then im fucked .</tokentext>
<sentencetext>... If my direct manager evaluates it as good, then im good.
If my direct supervisor evaluates it as fucked, then im fucked.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040748</id>
	<title>Vending Machines!!!</title>
	<author>chrisj_0</author>
	<datestamp>1257773640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>See if you can eat for 2 days out of the vending machines!</htmltext>
<tokenext>See if you can eat for 2 days out of the vending machines !</tokentext>
<sentencetext>See if you can eat for 2 days out of the vending machines!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257763200000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext>That's interesting, but the OP really needs to know what is good or not. For example, you state "Raised Floor Height". What is good? Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors. "Cooling Capacity" -- what's good and what is bad? How is this measured? Some datacenters may talk about how cool they keep the ambient air, but there isn't much evidence that this actually provides a noticeable difference to the lifetime or any other factor related to the equipment.</htmltext>
<tokenext>That 's interesting , but the OP really needs to know what is good or not .
For example , you state " Raised Floor Height " .
What is good ?
Newer datacenters do n't have raised floors because it is more energy efficient to have concrete floors .
" Cooling Capacity " -- what 's good and what is bad ?
How is this measured ?
Some datacenters may talk about how cool they keep the ambient air , but there is n't much evidence that this actually provides a noticeable difference to the lifetime or any other factor related to the equipment .</tokentext>
<sentencetext>That's interesting, but the OP really needs to know what is good or not.
For example, you state "Raised Floor Height".
What is good?
Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors.
"Cooling Capacity" -- what's good and what is bad?
How is this measured?
Some datacenters may talk about how cool they keep the ambient air, but there isn't much evidence that this actually provides a noticeable difference to the lifetime or any other factor related to the equipment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043100</id>
	<title>Are you buying the DataCenter or Renting Rackspace</title>
	<author>jgalietto</author>
	<datestamp>1257885660000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>Your question is a little ambiguous.  Are you looking to buy a data center of your own or are you renting rackspace?</p><p>If you are buying the Data Center<br>1.)  Normal title, lien, Structural due diligence as for any RE purchase<br>2.)  Is it on a flood plain<br>3.)  Seismically active site?<br>4.)  Serviced by multiple communication providers from multiple CO's<br>5.)  Power available from two different substations.<br>6.)  Physical security / susceptibility to civil unrest<br>7.)  Physical access driveways, parking, loading docks, hallway widths, elevators, ramps<br>8.)  Floor / raised floor design loads.   I have seen more than one raised floor rippled by rolling overweight gear on it.<br>9.)  On site power generation / fuel storage.  Mech. condition, age, availability, reliability, repair-ability<br>10.) Sufficient Chiller Capacity<br>11.) Sufficient UPS / Power Conditioning<br>12.) Sufficient space both for current needs and growth for planned lifetime<br>13.) Sufficient office / command center space</p><p>Those should be adequate to get you started.</p><p>For rented rackspace<br>I would say you at least need to glance at items 2 through 11 above.  Beyond that<br>1.)  Per rack power limits<br>2.)  Physical security<br>3.)  If you are using "hands on" services, its skill set and response time.<br>4.)  Whatever value-add services you will be using.</p><p>
&nbsp; Sorry, it is late after a long day and this is all I can think of.</p></htmltext>
<tokenext>Your question is a little ambiguous .
Are you looking to buy a data center of your own or are you renting rackspace ? If you are buying the Data Center1 .
) Normal title , lien , Structural due diligence as for any RE purchase2 .
) Is it on a flood plain3 .
) Seismically active site ? 4 .
) Serviced by multiple communication providers from multiple CO's5 .
) Power available from two different substations.6 .
) Physical security / susceptibility to civil unrest7 .
) Physical access driveways , parking , loading docks , hallway widths elevators ramps8 .
) Floor / raised floor design loads .
I have seen more than one raised floor rippled by rolling overweight gear on it.9 .
) On site power generation / fuel storage .
Mech. condition , age , availability , reliability , repair-ability10 .
) Sufficient Chiller Capacity11 .
) Sufficient UPS / Power Conditioning12 .
) Sufficient space both for current needs and growth for planned lifetime13 .
) Sufficient office / command center spaceThose should be adequate to get you started.For rented rackspaceI would say you at least need to glance at items 2 through 11 above .
Beyond that1 .
) Per rack power limits2 .
) Physical security3 .
) If you are using " hands on " services it 's skill set and response time.4 .
) Whatever value add services you will be using .
  Sorry it is late and a long day and this is all I can think of .</tokentext>
<sentencetext>Your question is a little ambiguous.
Are you looking to buy a data center of your own or are you renting rackspace?
If you are buying the Data Center:
1.) Normal title, lien, Structural due diligence as for any RE purchase
2.) Is it on a flood plain
3.) Seismically active site?
4.) Serviced by multiple communication providers from multiple CO's
5.) Power available from two different substations.
6.) Physical security / susceptibility to civil unrest
7.) Physical access driveways, parking, loading docks, hallway widths, elevators, ramps
8.) Floor / raised floor design loads. I have seen more than one raised floor rippled by rolling overweight gear on it.
9.) On site power generation / fuel storage. Mech. condition, age, availability, reliability, repair-ability
10.) Sufficient Chiller Capacity
11.) Sufficient UPS / Power Conditioning
12.) Sufficient space both for current needs and growth for planned lifetime
13.) Sufficient office / command center space
Those should be adequate to get you started.
For rented rackspace, I would say you at least need to glance at items 2 through 11 above.
Beyond that:
1.) Per rack power limits
2.) Physical security
3.) If you are using "hands on" services, its skill set and response time.
4.) Whatever value-add services you will be using.
Sorry, it is late after a long day and this is all I can think of.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044710</id>
	<title>Green &amp; be pragmatic</title>
	<author>kelf</author>
	<datestamp>1257863760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A data center draws a lot of power.  One should consider whether green technologies are implemented to reduce operating expense.  Also, spelling out stringent resiliency requirements is good, but one must be pragmatic when doing the specification.  What are we trying to protect, what is the business impact, how much downtime can the business tolerate, and what level of risk can the business take?  When carefully thought through, the added resiliency might not be necessary.</p></htmltext>
<tokenext>Data center draws a lot of power .
One should consider whether there are Green technology implemented to reduce operating expense .
Also spelling out stringent resiliency requirement is good , but one must be pragmatic when doing the specification .
What are we trying to protect , what is the business impact , how much downtime can the business tolerate , and what level of risk can the business take .
When carefully thought through , the added resiliency might not be necessary .</tokentext>
<sentencetext>Data center draws a lot of power.
One should consider whether there are Green technology implemented to reduce operating expense.
Also spelling out stringent resiliency requirement is good, but one must be pragmatic when doing the specification.
What are we trying to protect, what is the business impact, how much downtime can the business tolerate, and what level of risk can the business take.
When carefully thought through, the added resiliency might not be necessary.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30058420</id>
	<title>Easily</title>
	<author>Anonymous</author>
	<datestamp>1257078660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In down-time per megaton-kilometer.</p></htmltext>
<tokenext>In down-time per megaton-kilometer .</tokentext>
<sentencetext>In down-time per megaton-kilometer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30169886</id>
	<title>Useful link from an industry veteran</title>
	<author>1sockchuck</author>
	<datestamp>1258723020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>John Savageau, who has been an executive with a number of hosting and colo companies, recently blogged about <a href="http://john-savageau.com/2009/11/13/questions-data-center-operators-don%E2%80%99t-want-you-to-ask/" title="john-savageau.com">Questions Data Center Operators Don't Want You to Ask</a> [john-savageau.com], which some colo shoppers may find useful. It looks at issues for colo centers in mixed-use buildings and the merits of SAS70 certifications, which are often a key marketing point for facility operators.</htmltext>
<tokenext>John Savageau , who has been an executive with a number of hosting and colo companies , recently blogged about Questions Data Center Operators Do n't Want You to Ask [ john-savageau.com ] , which some colo shoppers may find useful .
It looks at issues for colo centers in mixed-use buildings and the merits of SAS70 certifications , which are often a key marketing point for facility operators .</tokentext>
<sentencetext>John Savageau, who has been an executive with a number of hosting and colo companies, recently blogged about Questions Data Center Operators Don't Want You to Ask [john-savageau.com], which some colo shoppers may find useful.
It looks at issues for colo centers in mixed-use buildings and the merits of SAS70 certifications, which are often a key marketing point for facility operators.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043228</id>
	<title>Don't hire Wesley Carver</title>
	<author>G3ckoG33k</author>
	<datestamp>1257844380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Don't hire Wesley Carver!

<a href="http://www.amazon.com/review/R3VO962Z99T6KZ" title="amazon.com">http://www.amazon.com/review/R3VO962Z99T6KZ</a> [amazon.com]</htmltext>
<tokenext>Do n't hire , Wesley Carver !
http : //www.amazon.com/review/R3VO962Z99T6KZ [ amazon.com ]</tokentext>
<sentencetext>Don't hire, Wesley Carver!
http://www.amazon.com/review/R3VO962Z99T6KZ [amazon.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30049378</id>
	<title>what about the human aspects?</title>
	<author>sku158</author>
	<datestamp>1257882540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I did not read all the comments, but I have done cage wrestling with servers as a week-long event, as well as had those midnight cravings to touch the servers just because they are not purring at the right frequency.  In addition to all the power, connections, server survival conditions, server security, assisted monitoring (don't rely solely on anyone), etc., I also find good nearby parking, large clean bathrooms, available stools or chairs, a tool set (sockets, big screwdrivers, pliers), and a place to take a coffee break, scream at the VAR, and make excuses to the spouse ULTRA HELPFUL.  For small setups and fuzzy architects, these are the things a tech looks for, in my experience.</p></htmltext>
<tokenext>i did not read all the comments but i had done cage wrestling with servers as a week long event as well as those mid-night cravings that i need to touch the servers just because they are not purring at the right frequency .
in addition to all the power , connections , server survival conditions , server security , assisted monitoring ( do n't rely solely on anyone ) , etc etc .
i also find the need for good nearby parking , large clean bathrooms , available stools or chair , tool set ( sockets , big screw drivers , pilers ) , a place for a coffee break , scream at the VAR and making excuses to the spouse , ULTRA HELPFUL .
for small setups and fuzzy architects , these are something as a tech look for in my experience .</tokentext>
<sentencetext>i did not read all the comments but i had done cage wrestling with servers as a week long event as well as those mid-night cravings that i need to touch the servers just because they are not purring at the right frequency.
in addition to all the power, connections, server survival conditions, server security, assisted monitoring (don't rely solely on anyone), etc etc.
i also find the need for good nearby parking, large clean bathrooms, available stools or chair, tool set(sockets, big screw drivers, pilers), a place for a coffee break, scream at the VAR and making excuses to the spouse, ULTRA HELPFUL.
for small setups and fuzzy architects, these are something as a tech look for in my experience.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30178540</id>
	<title>External hinge pins? Outsourcing oversight?</title>
	<author>Anonymous</author>
	<datestamp>1258716000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My auditor is really nice, but some things in a data center really bug them. And, that makes my job difficult.</p><p>Can you tell me if my auditor will like visiting you? My auditor doesn't like:</p><p>- external door hinges they can pop the pins out and get directly into the "secure" server rooms from the parking lot. Especially the ones on the roof and fire exits.</p><p>- outsourcing without asking if I'm OK with that, and ensuring we all know how we will manage any added risks.</p><p>- security personnel vendors that can't provide the last date they verified all their temporary contractors and terminated employees had their access shut off.</p><p>- same for data center personnel; and saying it is not the data center's responsibility to oversee the data center's vendors.</p><p>- propped doors for smoke breaks with no alarms.</p><p>- battery-powered wireless security features that don't tell you when the battery is dead.</p><p>- preventive maintenance personnel with 24x7 access cards.</p><p>- if I get all I need via email to verify, but that our auditor will have to provide 2 weeks notice of all questions, be accompanied by me, make no audit records (i.e., photocopy, pen<br>and paper, laptop typing, etc.), bring no equipment, not write anything down, not ask any questions involving HR or outsourced personnel, etc.</p><p>- the ability to plug in a USB drive or USB modem in a server without detection</p><p>- not allowing me, my auditors, or an independent auditor data center information for "security reasons" to validate our GLBA, HIPAA, FISMA, etc. compliance</p><p>- not having an online data center ticket request and clearing system that I can watch for progress and compliance status.</p><p>I assume you know the rest like uptime, capacity, insurance, etc.</p></htmltext>
<tokenext>My auditor is really nice , but some things in a data center really bug them .
And , that makes my job difficult.Can you tell me if my auditor will like visiting you ?
My auditor does n't like : - external door hinges they can pop the pins out and get directly into the " secure " server rooms from the parking lot .
Especially the ones on the roof and fire exits.- outsourcing without asking if I 'm OK with that , and ensuring we all know how we will manage any added risks.- security personnel vendors that ca n't provide the last date they verified all their temporary contractors and terminated employees had their access shut off.- same for data center personnel ; and saying it is not the data center 's responsibility to oversee the data center 's vendors.- propped doors for smoke breaks with no alarms.- battery-powered wireless security features that do n't tell you when the battery is dead.- preventive maintenance personnel with 24x7 access cards.- if I get all I need via email to verify , but that our auditor will have to provide 2 weeks notice of all questions , be accompanied by me , make no audit records ( i.e. , photocopy , penand paper , laptop typing , etc .
) , bring no equipment , not write anything down , not ask any questions involving HR or outsourced personnel , etc.- the ability to plug in a USB drive or USB modem in a server without detection- not allowing me , my auditors , or an independent auditor data center information for " security reasons " to validate our GLBA , HIPAA , FISMA , etc .
compliance- not having an online data center ticket request and clearing system that I can watch for progress and compliance status.I assume you know the rest like uptime , capacity , insurance , etc .</tokentext>
<sentencetext>My auditor is really nice, but some things in a data center really bug them.
And, that makes my job difficult.
Can you tell me if my auditor will like visiting you?
My auditor doesn't like:
- external door hinges; they can pop the pins out and get directly into the "secure" server rooms from the parking lot. Especially the ones on the roof and fire exits.
- outsourcing without asking if I'm OK with that, and ensuring we all know how we will manage any added risks.
- security personnel vendors that can't provide the last date they verified all their temporary contractors and terminated employees had their access shut off.
- same for data center personnel; and saying it is not the data center's responsibility to oversee the data center's vendors.
- propped doors for smoke breaks with no alarms.
- battery-powered wireless security features that don't tell you when the battery is dead.
- preventive maintenance personnel with 24x7 access cards.
- if I get all I need via email to verify, but that our auditor will have to provide 2 weeks notice of all questions, be accompanied by me, make no audit records (i.e., photocopy, pen and paper, laptop typing, etc.), bring no equipment, not write anything down, not ask any questions involving HR or outsourced personnel, etc.
- the ability to plug in a USB drive or USB modem in a server without detection
- not allowing me, my auditors, or an independent auditor data center information for "security reasons" to validate our GLBA, HIPAA, FISMA, etc. compliance
- not having an online data center ticket request and clearing system that I can watch for progress and compliance status.
I assume you know the rest like uptime, capacity, insurance, etc.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041496</id>
	<title>Less technical, than a feel for the place</title>
	<author>funkboy</author>
	<datestamp>1257779520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>All the technical points folks are making here are very important.</p><p>But the most important thing is the people managing the datacenter.</p><p>At least in Paris, about 4/5 of the <a href="http://www.journaldunet.com/solutions/0603/060328-redbus-panne-majeure.shtml" title="journaldunet.com" rel="nofollow">catastrophic failures</a> [journaldunet.com] experienced in the last several years have been due to:</p><p>
 - the management being a bunch of slimy cheapos and not doing maintenance on time or cutting corners when they do get around to doing maintenance</p><p>
 - some cro-magnon "technician" from a maintenance contracting company doing something stupid because he was completely unsupervised</p><p>In all cases, these datacenters had everything that folks above have described:  dual theoretically diverse utility power feeds, dual generators with big fuel tanks, big battery systems, dual theoretically diverse chiller circuits, etc etc etc.</p><p>The only thing you can do to protect yourself against this sort of thing is to treat the datacenter selection process, <b>especially the salesbeasts</b>, as a job interview.  If you say something like "I don't care how big your generators are; show me proof that they've had an oil change sometime in the last two years, that you test them regularly, and that your emergency fuel delivery contract is paid up" and they bullshit you, it's time to look elsewhere.</p><p>The single most important thing for me is to find out what procedures they follow when a 3rd party contractor is on-site doing maintenance on their critical equipment, especially the power transfer systems.  Power control master switches seem to have some sort of special attraction for morons.  Outages experienced recently (all the fault of the unsupervised 3rd party maintenance technician):</p><p>
 - removing both utility feeds from the master control switch along with both of its own internal battery backups at the same time, so that it defaults into the fail-safe mode of "off"</p><p>
 - needing to transfer a PDU feed from one source to another and being alone, so he shuts off the primary feed before he walks over to the backup to enable it</p><p>
 - doing some kind of "test" of the redundancy settings and screwing things up enough that the datacenter power is running off house batteries, the generators do not kick on as they detect that the utility power is working, and the batteries are disconnected from utility power.  15 minutes later, all the batteries are flat and the datafloor power is dead.  The house lights still work as they run off "dirty" power direct from the utility, and the cleaning people are running the vacuum.</p><p>
 - during a cooling system purge, leaving the drain valve for the cooling system open, with the fill valve for the reserve water tank shut, and the reservoir level alarm disabled, broken, or ignored.  It took almost 24 hours to get the datafloor temp back to normal as the entire cooling system circuit was dry.</p><p>
 - some construction jackass lucky to still be alive drilling directly into the master B-feed power riser cables and getting God knows how many amps directly into his concrete drill.  Every single individual breaker in all the B-feed PDUs on every floor popped.  The worst bit was that the jackasses that run the place didn't have a master record of the breakers in each PDU (the data was just kept in the individual client records), so they started digging through all the client records one by one (also note the lack of someone that knows about SELECT FROM) to figure out who to turn back on until about 10 people ran into the DC manager's office screaming at them to turn everything on and sort out which ones they should turn back off later.</p><p>
 - a maintenance by a utility power technician causing the datacenter power system and the utility power system to have a somewhat different idea of what constitutes neutral voltage on a ground, again leading to the generator system thinking that the utility power was just fine but the battery system detecting a ground fault and refusing to us</p></htmltext>
<tokenext>All the technical points folks are making here are very important.But the most important thing is the people managing the datacenter.At least in Paris , about 4/5 of the catastrophic failures [ journaldunet.com ] experienced in the last several years have been due to : - the management being a bunch of slimy cheapos and not doing maintenance on time or cutting corners when they do get around to doing maintenance - some cro-magnon " technician " from a maintenance contracting company doing something stupid because he was completely unsupervisedIn all cases , these datacenters had everything that folks above have described : dual theoretically diverse utility power feeds , dual generators with big fuel tanks , big battery systems , dual theoretically diverse chiller circuits , etc etc etc.The only thing you can do to protect yourself against this sort of thing is to treat the datacenter selection process , especially the salesbeasts , as a job interview .
If you say something like " I do n't care how big your generators are ; show me proof that they 've had an oil change sometime in the last two years , that you test them regularly , and that your emergency fuel delivery contract is paid up " and they bullshit you , it 's time to look elsewhere.The single most important thing for me is to find out what procedures they follow when a 3rd party contractor is on-site doing maintenance on their critical equipment , especially the power transfer systems .
Power control master switches seem to have some sort of special attraction for morons .
Outages experienced recently ( all the fault of the unsupervised 3rd party maintenance technician ) : - removing both utility feeds from the master control switch along with both of its own internal battery backups at the same time , so that it defaults into the fail-safe mode of " off " - needing to transfer a PDU feed from one source to another and being alone , so he shuts off the primary feed before he walks over to the backup to enable it - doing some kind of " test " of the redundancy settings and screwing things up enough that the datacenter power is running off house batteries , the generators do not kick on as they detect that the utility power is working , and the batteries are disconnected from utility power .
15 minutes later , all the batteries are flat and the datafloor power is dead .
The house lights still work as they run off " dirty " power direct from the utility , and the cleaning people are running the vacuum .
- during a cooling system purge , leaving the drain valve for the cooling system open , with the fill valve for the reserve water tank shut , and the reservoir level alarm disabled , broken , or ignored .
It took almost 24 hours to get the datafloor temp back to normal as the entire cooling system circuit was dry .
- some construction jackass lucky to still be alive drilling directly into the master B-feed power riser cables and getting God knows how many amps directly into his concrete drill .
Every single individual breaker in all the B-feed PDUs on every floor popped .
The worst bit was that the jackasses that run the place did n't have a master record of the breakers in each PDU ( the data was just kept in the individual client records ) , so they started digging through all the client records one by one ( also note the lack of someone that knows about SELECT FROM ) to figure out who to turn back on until about 10 people ran into the DC manager 's office screaming at them to turn everything on and sort out which ones they should turn back off later .
- a maintenance by a utility power technician causing the datacenter power system and the utility power system to have a somewhat different idea of what constitutes neutral voltage on a ground , again leading to the generator system thinking that the utility power was just fine but the battery system detecting a ground fault and refusing to us</tokentext>
<sentencetext>All the technical points folks are making here are very important.
But the most important thing is the people managing the datacenter.
At least in Paris, about 4/5 of the catastrophic failures [journaldunet.com] experienced in the last several years have been due to:
 - the management being a bunch of slimy cheapos and not doing maintenance on time or cutting corners when they do get around to doing maintenance
 - some cro-magnon "technician" from a maintenance contracting company doing something stupid because he was completely unsupervised.
In all cases, these datacenters had everything that folks above have described: dual theoretically diverse utility power feeds, dual generators with big fuel tanks, big battery systems, dual theoretically diverse chiller circuits, etc etc etc.
The only thing you can do to protect yourself against this sort of thing is to treat the datacenter selection process, especially the salesbeasts, as a job interview.
If you say something like "I don't care how big your generators are; show me proof that they've had an oil change sometime in the last two years, that you test them regularly, and that your emergency fuel delivery contract is paid up" and they bullshit you, it's time to look elsewhere.
The single most important thing for me is to find out what procedures they follow when a 3rd party contractor is on-site doing maintenance on their critical equipment, especially the power transfer systems.
Power control master switches seem to have some sort of special attraction for morons.
Outages experienced recently (all the fault of the unsupervised 3rd party maintenance technician):
 - removing both utility feeds from the master control switch along with both of its own internal battery backups at the same time, so that it defaults into the fail-safe mode of "off"
 - needing to transfer a PDU feed from one source to another and being alone, so he shuts off the primary feed before he walks over to the backup to enable it
 - doing some kind of "test" of the redundancy settings and screwing things up enough that the datacenter power is running off house batteries, the generators do not kick on as they detect that the utility power is working, and the batteries are disconnected from utility power.
15 minutes later, all the batteries are flat and the datafloor power is dead.
The house lights still work as they run off "dirty" power direct from the utility, and the cleaning people are running the vacuum.
- during a cooling system purge, leaving the drain valve for the cooling system open, with the fill valve for the reserve water tank shut, and the reservoir level alarm disabled, broken, or ignored.
It took almost 24 hours to get the datafloor temp back to normal as the entire cooling system circuit was dry.
- some construction jackass lucky to still be alive drilling directly into the master B-feed power riser cables and getting God knows how many amps directly into his concrete drill.
Every single individual breaker in all the B-feed PDUs on every floor popped.
The worst bit was that the jackasses that run the place didn't have a master record of the breakers in each PDU (the data was just kept in the individual client records), so they started digging through all the client records one by one (also note the lack of someone that knows about SELECT FROM) to figure out who to turn back on until about 10 people ran into the DC manager's office screaming at them to turn everything on and sort out which ones they should turn back off later.
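The parenthetical jab about "SELECT FROM" points at the real failure: the breaker-to-client mapping lived only in individual client files, so there was no way to enumerate the tripped B-feed breakers in one pass. A minimal sketch of the missing master record, with a hypothetical schema and made-up client names, using Python's sqlite3:

```python
import sqlite3

# Hypothetical schema: each client record notes which PDU breaker feeds it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE client_records (client TEXT, pdu TEXT, breaker INTEGER)")
con.executemany(
    "INSERT INTO client_records VALUES (?, ?, ?)",
    [("acme", "B-2F-01", 4), ("globex", "B-2F-01", 7), ("initech", "B-3F-02", 1)],
)

# The master list the operators lacked: every B-feed breaker and the client
# it powers, produced in one query instead of a file-by-file search.
rows = con.execute(
    "SELECT pdu, breaker, client FROM client_records ORDER BY pdu, breaker"
).fetchall()
for pdu, breaker, client in rows:
    print(pdu, breaker, client)
```

The point is not the query itself but that the inventory must live in one queryable place before the outage, not scattered across paper client records.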
- a maintenance by a utility power technician causing the datacenter power system and the utility power system to have a somewhat different idea of what constitutes neutral voltage on a ground, again leading to the generator system thinking that the utility power was just fine but the battery system detecting a ground fault and refusing to us</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040944</id>
	<title>Re:i ran a junky data center</title>
	<author>upuv</author>
	<datestamp>1257774960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>I have a phrase I use.</p><p>"You can never outsmart an idiot!"</p><p>You think you have every angle covered.  You are proud of how tight the whole design is.  Then along comes some moron and BOOOMMMMM.  They defeat your planning in 1 second.</p><p>Happens all the time.</p><p>I've had a lot of managers in companies look at me when I say that.  First off the idiots get very offended.  The smart ones get it.  The phrase acts like a pre-filter before any work is actually done.</p><p>As a result, over the years I have learned that design and implementation has to have certain attributes.<br>- flexibility.  We will have to come back and alter/fix something.<br>- redundancy.  There has to be more than 1 method of doing something.  In your case a window and a fan as my backup second option.<br>- slack.  Things never fit exactly into the specifications.  Whether it be a measurement of distance, a power consumption claim, or a throughput estimate.  Always add more during the build.  It's cheaper.</p><p>It's the arrogant that think it works out of the box first time.</p></htmltext>
<tokenext>I have a phrase I use .
" You can never out smart an idiot !
" You think you have ever angle covered .
You are proud of how tight the whole design is .
Then along comes some moron and BOOOMMMMM .
They defeat your planning in 1 second.Happens all the time.I 've had a lot of managers in companies look at me when I say that .
First off the idiots get very offended .
The smart ones get it .
The phrase acts like a pre-filter before any work is actually done.As a result over the years I have learned that design and implementation has to have certain attributes.- flexibility .
We will have to come back and alter/fix something.- redundancy .
There has to be more than 1 method of doing something .
In your case a window and a fan as my backup second option.- slack .
Things never fit exactly into the specifications .
Whether it be a measurement of distance .
A power consumption claim .
or a throughput estimate .
Always add more during the build .
It 's cheaper.It 's the arrogant that think it works out of the box first time .</tokentext>
<sentencetext>I have a phrase I use.
"You can never out smart an idiot!
"You think you have ever angle covered.
You are proud of how tight the whole design is.
Then along comes some moron and BOOOMMMMM.
They defeat your planning in 1 second.
Happens all the time.
I've had a lot of managers in companies look at me when I say that.
First off the idiots get very offended.
The smart ones get it.
The phrase acts like a pre-filter before any work is actually done.
As a result, over the years I have learned that design and implementation has to have certain attributes:
- flexibility. We will have to come back and alter/fix something.
- redundancy. There has to be more than 1 method of doing something. In your case, a window and a fan as the backup second option.
- slack.
Things never fit exactly into the specifications.
Whether it be a measurement of distance, a power consumption claim, or a throughput estimate.
Always add more during the build.
It's cheaper.
It's the arrogant that think it works out of the box first time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039148</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041174</id>
	<title>ignore the magic of the City of Lights..</title>
	<author>Anonymous</author>
	<datestamp>1257776700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Forget the presentations and the tours with all the impressive blinking lights tapering off to the horizon.


It's all about what happens when things go wrong! We've had Alex answer the phone on a weekend and refuse to do anything with a Windows server because he only likes Linux: wait for the Windows guys on Monday. No help, no escalation. End of contract there. Failed mirrored disk? - don't be surprised if they replace the good one. Be aware that the fancy network status websites only include a sample of the actual errors, power outages and faults. Rebuild a simple server? Three shifts to do it, and the last guy left without leaving the root password.


Ignore the data center review sites. They insist you provide a URL that they can track back to the hosting company's network. A brilliant way to ensure happy reviews only because no current customer is going to let the world know their servers are at a horrible hosting site.

Make sure the escalation process works. Use fire drills on them to figure out if it really works.


Another great question to ask - is their phone system VOIP? If they get a DDOS attack then you can't call them because it takes out their phone system too.

The above mentioned analysis of power and cooling is wonderful but if the NOC manager decides not to pay overtime then maintenance gets done in business hours.

These are not bottom-feeder companies, these are mid-range or higher price ranges. Either you have to deal with these problems and set the right expectation in their minds, or be prepared to have plenty of backups and be able to move to another company when required.</htmltext>
<tokenext>Forget the presentations and the tours with all the impressive blinking lights tapering off to the horizon .
It 's all about what happens when things go wrong !
We 've had Alex answer the phone on a weekend and refuse to do anything with a Windows server because he only likes Linux , wait for the Windows guys on Monday .
No help , no escalation .
End of contract there .
Failed mirrored disk ?
- do n't be surprised if they replace the good one .
Be aware that the fancy network status websites only include a sample of the actual errors , power outages and faults .
Rebuild a simple server ?
three shifts to do it and the last guy left without leaving the root password .
Ignore the data center review sites .
They insist you provide a URL that they can track back to the hosting company 's network .
A brilliant way to ensure happy reviews only because no current customer is going to let the world know their servers are at a horrible hosting site .
Make sure the escalation process works .
Use fire drills on them to figure out if it really works .
Another great question to ask - is their phone system VOIP ?
If they get a DDOS attack then you ca n't call them because it takes out their phone system too .
The above mentioned analysis of power and cooling is wonderful but if the NOC manager decides not to pay overtime then maintenance gets done in business hours .
These are not bottom-feeder companies , these are mid-range or higher price ranges .
Either you have to deal with these problems and set the right expectation in their minds , or be prepared to have plenty of backups and be able to move to another company when required .</tokentext>
<sentencetext>Forget the presentations and the tours with all the impressive blinking lights tapering off to the horizon.
It's all about what happens when things go wrong!
We've had Alex answer the phone on a weekend and refuse to do anything with a Windows server because he only likes Linux: wait for the Windows guys on Monday.
No help, no escalation.
End of contract there.
Failed mirrored disk?
- don't be surprised if they replace the good one.
Be aware that the fancy network status websites only include a sample of the actual errors, power outages and faults.
Rebuild a simple server?
Three shifts to do it, and the last guy left without leaving the root password.
Ignore the data center review sites.
They insist you provide a URL that they can track back to the hosting company's network.
A brilliant way to ensure happy reviews only because no current customer is going to let the world know their servers are at a horrible hosting site.
Make sure the escalation process works.
Use fire drills on them to figure out if it really works.
Another great question to ask - is their phone system VOIP?
If they get a DDOS attack then you can't call them because it takes out their phone system too.
The above mentioned analysis of power and cooling is wonderful but if the NOC manager decides not to pay overtime then maintenance gets done in business hours.
These are not bottom-feeder companies, these are mid-range or higher price ranges.
Either you have to deal with these problems and set the right expectation in their minds, or be prepared to have plenty of backups and be able to move to another company when required.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038780</id>
	<title>Some important questions:</title>
	<author>swordgeek</author>
	<datestamp>1257763560000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
<htmltext><p>I'm assuming this is evaluating for co-location purposes. Here are some things I'd ask.</p><p>1) How quickly can I get a new server deployed into it? How do I do it?<br>2) Can I get a tour? Now? (Note that this not only lets you see the data centre, but also will give you an idea of security. Look for procedures on getting in, notice if they ask you to sign a release form, etc.)<br>3) How close to capacity are you? (The answer should include space, floor weight, power, cooling, and network. If it doesn't, why not?)<br>4) What are your racking/networking/cabling standards? (They should have some, at least where you connect to them, but they shouldn't be onerous).<br>5) How many people manage the data centre? You don't want to be one car accident away from loss of access or service.<br>6) How about power management? Is the centre on a UPS, redundant UPSes, or nothing? Can you get charts of the power going to the servers? Can you get DC for telecom servers, or only AC? Is it on a generator for long-term outages? (Note that you may not need this--in which case you shouldn't pay for it. Alternatively, if you need it, make sure it's there!)<br>7) Is it manned 24/7? (Ditto!)</p><p>If you can, ask them to pull a tile so you can see under the raised floor. Underfloor cabling (and suspended ceiling cabling for that matter) should be neat, tied, and labelled. Dead cables should be pulled, not left to rot. There has to be sufficient clearance for unrestricted airflow. Cages are better than lying on the floor.</p><p>Most of what makes a good data centre comes down to organization. If it's a rat's nest, then even if there's one guy who knows "everything," it will be less reliable, less consistent, and less predictable. Procedures should be written down, printed, filed in labeled binders, and regularly updated. (Note: Online copies should be canonical, but they also need to be accessible offline when shit --&gt; fan.)</p><p>Fire suppressant mechanisms (wet vs. 
dry, live pipes, etc.) need to be considered, as does emergency lighting. If the operators need to start digging around for a flashlight to read what they should be doing, then things aren't happening the way they should.</p><p>Be picky. If they're leasing space to you, then their data centre design and maintenance is their BUSINESS, and they had better get it right! Look for a neat, well-organized, well-documented, well-planned data centre. Also make sure that it fits your needs.</p></htmltext>
<tokenext>I 'm assuming this is evaluating for co-location purposes .
Here are some things I 'd ask.1 ) How quickly can I get a new server deployed into it ?
How do I do it ? 2 ) Can I get a tour ?
Now ? ( Note that this not only lets you see the data centre , but also will give you an idea of security .
Look for procedures on getting in , notice if they ask you to sign a release form , etc .
) 3 ) How close to capacity are you ?
( The answer should include space , floor weight , power , cooling , and network .
If it does n't , why not ?
) 4 ) What are your racking/networking/cabling standards ?
( They should have some , at least where you connect to them , but they should n't be onerous ) .5 ) How many people manage the data centre ?
You do n't want to be one car accident away from loss of access or service.6 ) How about power management ?
Is the centre on a UPS , redundant UPSes , or nothing ?
Can you get charts of the power going to the servers ?
Can you get DC for telecom servers , or only AC ?
Is it on a generator for long-term outages ?
( Note that you may not need this--in which case you should n't pay for it .
Alternatively , if you need it , make sure it 's there !
) 7 ) Is it manned 24/7 ?
( Ditto ! ) If you can , ask them to pull a tile so you can see under the raised floor .
Underfloor cabling ( and suspended ceiling cabling for that matter ) should be neat , tied , and labelled .
Dead cables should be pulled , not left to rot .
There has to be sufficient clearance for unrestricted airflow .
Cages are better than lying on the floor.Most of what makes a good data centre comes down to organization .
If it 's a rats nest , then even if there 's one guy who knows " everything , " it will be less reliable , less consistent , and less predictable .
Procedures should be written down , printed , filed in labeled binders , and regularly updated .
( Note : Online copies should be canonical , but also needs to be accessible offline when shit -- &gt; fan .
) Fire suppressant mechanisms ( wet vs. dry , live pipes , etc .
) need to be considered , as does emergency lighting .
If the operators need to start digging around for a flashlight to read what they should be doing , then things are n't happening the way they should.Be picky .
If they 're leasing space to you , then their data centre design and maintenance is their BUSINESS , and they had better get it right !
Look for a neat , well-organized , well-documented , well-planned data centre .
Also make sure that it fits your needs .</tokentext>
<sentencetext>I'm assuming this is evaluating for co-location purposes.
Here are some things I'd ask.
1) How quickly can I get a new server deployed into it? How do I do it?
2) Can I get a tour?
Now? (Note that this not only lets you see the data centre, but also will give you an idea of security.
Look for procedures on getting in, notice if they ask you to sign a release form, etc.)
3) How close to capacity are you?
(The answer should include space, floor weight, power, cooling, and network.
If it doesn't, why not?)
4) What are your racking/networking/cabling standards?
(They should have some, at least where you connect to them, but they shouldn't be onerous.)
5) How many people manage the data centre?
You don't want to be one car accident away from loss of access or service.
6) How about power management?
Is the centre on a UPS, redundant UPSes, or nothing?
Can you get charts of the power going to the servers?
Can you get DC for telecom servers, or only AC?
Is it on a generator for long-term outages?
(Note that you may not need this--in which case you shouldn't pay for it.
Alternatively, if you need it, make sure it's there!)
7) Is it manned 24/7? (Ditto!)
If you can, ask them to pull a tile so you can see under the raised floor.
Underfloor cabling (and suspended ceiling cabling for that matter) should be neat, tied, and labelled.
Dead cables should be pulled, not left to rot.
There has to be sufficient clearance for unrestricted airflow.
Cages are better than lying on the floor.
Most of what makes a good data centre comes down to organization.
If it's a rat's nest, then even if there's one guy who knows "everything," it will be less reliable, less consistent, and less predictable.
Procedures should be written down, printed, filed in labeled binders, and regularly updated.
(Note: Online copies should be canonical, but they also need to be accessible offline when shit --&gt; fan.)
Fire suppressant mechanisms (wet vs. dry, live pipes, etc.) need to be considered, as does emergency lighting.
If the operators need to start digging around for a flashlight to read what they should be doing, then things aren't happening the way they should.
Be picky.
If they're leasing space to you, then their data centre design and maintenance is their BUSINESS, and they had better get it right!
Look for a neat, well-organized, well-documented, well-planned data centre.
Also make sure that it fits your needs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038980</id>
	<title>Re:History</title>
	<author>Nefarious Wheel</author>
	<datestamp>1257764340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>And of course, don't forget the simple visual inspection.  Sometimes the <a href="http://www.darkroastedblend.com/2008/03/disturbing-wiring-part-3.html" title="darkroastedblend.com">cabling infrastructure</a> [darkroastedblend.com] may not quite be up to spec.</htmltext>
<tokenext>And of course , do n't forget the simple visual inspection .
Sometimes the cabling infrastructure [ darkroastedblend.com ] may not quite be up to spec .</tokentext>
<sentencetext>And of course, don't forget the simple visual inspection.
Sometimes the cabling infrastructure [darkroastedblend.com] may not quite be up to spec.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038568</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040610</id>
	<title>Re:Some important questions:</title>
	<author>Anonymous</author>
	<datestamp>1257772800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Dead cables should be pulled, not left to rot"</p><p>After 25 years in telecoms I disagree.<br>There are far more troubles caused by pulling out old cables then there are in just leaving them there.<br>To remove a cable without causing damage to those around it is very hard work and most installers cheat and just pull the cable out.<br>That results in damaged cables around the original cable.<br>If you are lucky - a failure will be immediate and obvious. If not, have fun looking.<br>I remember one really weird temperature related fault that took ages just to figure out it was a cable fault.</p></htmltext>
<tokenext>" Dead cables should be pulled , not left to rot " After 25 years in telecoms I disagree.There are far more troubles caused by pulling out old cables then there are in just leaving them there.To remove a cable without causing damage to those around it is very hard work and most installers cheat and just pull the cable out.That results in damaged cables around the original cable.If you are lucky - a failure will be immediate and obvious .
If not , have fun looking.I remember one really weird temperature related fault that took ages just to figure out it was a cable fault .</tokentext>
<sentencetext>"Dead cables should be pulled, not left to rot"After 25 years in telecoms I disagree.There are far more troubles caused by pulling out old cables then there are in just leaving them there.To remove a cable without causing damage to those around it is very hard work and most installers cheat and just pull the cable out.That results in damaged cables around the original cable.If you are lucky - a failure will be immediate and obvious.
If not, have fun looking.I remember one really weird temperature related fault that took ages just to figure out it was a cable fault.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038780</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042396</id>
	<title>Step #1 = DEFINE YOUR NEEDS</title>
	<author>Anonymous</author>
	<datestamp>1257789240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I'd guess 90% of projects fail at step #1: Define your needs. What's the objective here? Why are we doing this, and what are the benchmarks required for success? Does this sound familiar?</p><p>First, define your needs, then evaluate possible solutions that might meet your needs.</p><p>If you don't know what you need, you don't know what the hell you are doing. Hire someone who does, like a consultant.</p></htmltext>
<tokenext>I 'd guess 90 % of projects fail at step # 1 : Define your needs .
What 's the objective here ?
Why are we doing this , and what are the benchmarks required for success .
Does this sound familiar ? First , define your needs , then evaluate possible solutions to what might meets your needs.If you do n't know what you need , you do n't know what the hell you are doing .
Hire someone who does , like a consultant .</tokentext>
<sentencetext>I'd guess 90% of projects fail at step #1: Define your needs.
What's the objective here?
Why are we doing this, and what are the benchmarks required for success?
Does this sound familiar? First, define your needs, then evaluate possible solutions that might meet your needs. If you don't know what you need, you don't know what the hell you are doing.
Hire someone who does, like a consultant.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040588</id>
	<title>Don't forget the contract details</title>
	<author>mattmarlowe</author>
	<datestamp>1257772680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>* If you move your infrastructure in with them and later lose confidence in their abilities for whatever reason, how quickly and easily can you terminate and move out w/o all the hassle of pre-paying to end of term?  This should be your biggest requirement.  No matter how much due diligence you do on technical matters prior to moving in, things change... the datacenter provider could have financial problems, sudden facility problems, lose key critical staff, etc.  Your contract needs to allow you the option to terminate when you reasonably lose confidence in their abilities w/o a billion hoops or outlandish costs.</p><p>* Power pricing and limitations on how much they are allowed to pass on additional costs from utilities during term of contract.</p></htmltext>
<tokenext>* If you move your infrastructure in with them and later lose confidence in their abilities for whatever reason , how quickly and easily can you terminate and move out w/o all the hassle of pre-paying to end of term ?
This should be your biggest requirement .
No matter how much due diligence you do on technical matters prior to moving in , things change...the datacenter provider could have financial problems , sudden facility problems , lose key critical staff , etc .
Your contract needs to allow you the option to terminate when you reasonably lose confidence in their abilities w/o a billion hoops or outlandish costs .
* Power pricing and limitations on how much they are allowed to pass on additional costs from utilities during term of contract .</tokentext>
<sentencetext>* If you move your infrastructure in with them and later lose confidence in their abilities for whatever reason, how quickly and easily can you terminate and move out w/o all the hassle of pre-paying to end of term?
This should be your biggest requirement.
No matter how much due diligence you do on technical matters prior to moving in, things change...the datacenter provider could have financial problems, sudden facility problems, lose key critical staff, etc.
Your contract needs to allow you the option to terminate when you reasonably lose confidence in their abilities w/o a billion hoops or outlandish costs.
* Power pricing and limitations on how much they are allowed to pass on additional costs from utilities during term of contract.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30048816</id>
	<title>Read the Google book on that topic</title>
	<author>Anonymous</author>
	<datestamp>1257880380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Read Urs H&#246;lzle and Luiz Barroso's (both Google) book on datacenters:<br>http://books.google.com/books?id=Y3hhzdSSK58C&amp;dq=h%C3%B6lzle+barroso+warehouse&amp;printsec=frontcover&amp;source=bl&amp;ots=3GAME9c91o&amp;sig=SFKN5MEnDyxy_Zk_bOctH776cjU&amp;hl=de&amp;ei=xaz5Su-zB8fm-Qbur_ngBw&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CAsQ6AEwAA#v=onepage&amp;q=h%C3%B6lzle%20barroso%20warehouse&amp;f=false</p><p>Also available as PDF somewhere.</p></htmltext>
<tokenext>Read Urs Hölzle and Luiz Barroso 's ( both Google ) book on datacenters : http : //books.google.com/books ? id = Y3hhzdSSK58C&amp;dq = h % C3 % B6lzle + barroso + warehouse&amp;printsec = frontcover&amp;source = bl&amp;ots = 3GAME9c91o&amp;sig = SFKN5MEnDyxy _Zk _bOctH776cjU&amp;hl = de&amp;ei = xaz5Su-zB8fm-Qbur _ngBw&amp;sa = X&amp;oi = book _result&amp;ct = result&amp;resnum = 1&amp;ved = 0CAsQ6AEwAA # v = onepage&amp;q = h % C3 % B6lzle % 20barroso % 20warehouse&amp;f = false Also available as PDF somewhere .</tokenext>
<sentencetext>Read Urs Hölzle and Luiz Barroso's (both Google) book on datacenters: http://books.google.com/books?id=Y3hhzdSSK58C&amp;dq=h%C3%B6lzle+barroso+warehouse&amp;printsec=frontcover&amp;source=bl&amp;ots=3GAME9c91o&amp;sig=SFKN5MEnDyxy_Zk_bOctH776cjU&amp;hl=de&amp;ei=xaz5Su-zB8fm-Qbur_ngBw&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CAsQ6AEwAA#v=onepage&amp;q=h%C3%B6lzle%20barroso%20warehouse&amp;f=false Also available as PDF somewhere.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038846</id>
	<title>In Russia...</title>
	<author>Anonymous</author>
	<datestamp>1257763860000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>...data center evaluates you!</htmltext>
<tokenext>...data center evaluates you !</tokentext>
<sentencetext>...data center evaluates you!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038882</id>
	<title>go47</title>
	<author>Anonymous</author>
	<datestamp>1257763980000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><A HREF="http://goat.cx/" title="goat.cx" rel="nofollow">Of user base for but it's not a obsessed - give was at the same the most. LLok at Creek, abysmal raise or lower the Jesus Up The</a> [goat.cx]</htmltext>
<tokenext>Of user base for but it 's not a obsessed - give was at the same the most .
LLok at Creek , abysmal raise or lower the Jesus Up The [ goat.cx ]</tokentext>
<sentencetext>Of user base for but it's not a obsessed - give was at the same the most.
LLok at Creek, abysmal raise or lower the Jesus Up The [goat.cx]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038724</id>
	<title>Security</title>
	<author>Anonymous</author>
	<datestamp>1257763260000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>I co-locate at a data center in Alberta. It is in the basement of a high-rise building. Because of this there is much traffic in/out of the building. The main doors within the building leading to the datacenter itself can be opened with a credit card, or even a set of keys (um, any key). This poses a security risk. Even though you'd need to know exactly where to go and when (so as to not bump into people working there), it is still possible to get what you're after relatively simply with no alarms. They do have cameras though, so wear a mask - and since this is Alberta, no one would question the mask.</p><p>I picked up one of my servers a couple days ago, and they didn't ask for ID either. I could have been ANYONE.</p></htmltext>
<tokenext>I co-locate at a data center in Alberta .
It is in the basement of a high-rise building .
Because of this there is much traffice in/out of the building .
The main doors within the building leading to the datacenter itself can be opened with a credit card , or even a set of keys ( um , any key ) .
This poses a security risk .
Even though you 'd need to know exactly where to go and when ( so as to not bump into people working there ) , it is still possible to get what you 're after realtively simple with no alarms .
They do have cameras though , so wear a mask - and since this is Alberta , no one would question the mask.I picked up one of my servers a couple days ago , and they did n't ask for ID either .
I could have been ANYONE .</tokentext>
<sentencetext>I co-locate at a data center in Alberta.
It is in the basement of a high-rise building.
Because of this there is much traffic in/out of the building.
The main doors within the building leading to the datacenter itself can be opened with a credit card, or even a set of keys (um, any key).
This poses a security risk.
Even though you'd need to know exactly where to go and when (so as to not bump into people working there), it is still possible to get what you're after relatively simply with no alarms.
They do have cameras though, so wear a mask - and since this is Alberta, no one would question the mask. I picked up one of my servers a couple days ago, and they didn't ask for ID either.
I could have been ANYONE.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040016</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257769740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>* N+N redundancy in main supply *and* UPS *and* generators *and* cooling
* security - access to the site, protection of the installed equipment, storing equipment being delivered
* truly diverse fibre as well as sufficient bandwidth with low/no contention
* good peering policies to avoid poor latency to your customers' providers
* good-value remote hands - some providers are very rigid about charging for every second you ask them for support; most are helpful with free basic cover
* hidden charges for telephone line installations and cable management
* extra charges for dual WAN uplink
* cost of committed AND "overage" charges
* protection from DoS attacks
* rack depth - will your servers fit?!
* patching infrastructure available for your own use to cable between racks if you expand - at what cost?


Not all of the following are yes/no decisions, but they should be considered because they can make a big impact on the time taken to get your service up and running:

* power arrangements - who provides the PDUs - if buying yourself, will they fit, will you have the right brackets?
* parking - this is not trivial in London and can be a major bugbear if you're delivering equipment
* traffic issues - if you've an emergency and you or the Dell or HP engineer can't get there to fix things without inordinate delays, you might have a growing headache
* onsite facilities when working - drinks and food prep areas, bathrooms, showers even (not needed these but some sites do, if you're doing a major roll-out they could be significant)
* rubbish disposal
* local facilities - eateries, hotels or B&amp;Bs
* level floors - no steps - from loading bay to server floor, lifts etc. - you don't want to be lugging servers up/down stairs!

I visited six significant colo facilities before choosing the one I picked, most had merits, some had minor (to them) problems which would have been a minor but consistent headache to me.

We were recently forced to pull our entire server farm out of an inadequate colo, which cost us big time as we had to duplicate a significant amount of equipment in order to maintain service instead of shutting down and relocating, as well as having to pay two hosting bills in parallel for a while. This came about because we originally chose the cheapest, most convenient provider, which would have been OK for a while, but stayed with them far too long - moral: well before renewing or expanding, review whether your colo service is up to scratch.</htmltext>
<tokenext>* N + N redundancy in main supply * and * UPS * and * generators * and * cooling * security - access to the site , protection of the installed equipment , storing equipment being delivered * truly diverse fibre as well as sufficient bandwidth with low/no contention * good peering policies to avoid poor latency to your customers ' providers * good value remote hands - some providers are very rigid about charging for every second you ask them for support , most are helpful with free basic cover * hidden charges for telephone line installations and cable management * extra charges for dual WAN uplink * cost of committed AND " overage " charges * protection from DoS attacks * rack depth - will your servers fit ? !
* patching infrastructure available for your own use to cable between racks if you expand , what cost ?
not all of the following are yes/no decisions BUT should be considered because they can make a big impact to the time taken to get your service up and running * power arrangements - who provides the PDUs - if buying yourself , will they fit , will you have the right brackets ?
* parking - this is not trivial in london and can be a major bug-bear if you 're delivering equipment * traffic issues - if you 've an emergency and you or the Dell or HP engineer ca n't get there to fix things without inordinate delays , you might have a growing headache * onsite facilities when working - drinks and food prep areas , bathrooms , showers even ( not needed these but some sites do , if you 're doing a major roll-out they could be significant ) * rubbish disposal * local facilities - eateries , hotels or B&amp;Bs * level floors - no steps - from loading bay to server floor , lifts etc .
do n't want to be lugging servers up/down stairs !
I visited six significant colo facilities before choosing the one I picked , most had merits , some had minor ( to them ) problems which would have been a minor but consistent headache to me .
We were recently forced to pull our entire server farm out of an inadequate colo , which cost us big time as we had to duplicate significant amount of equipment in order to maintain service instead of shutting down and relocating , as well as having to pay two hosting bills in parallel for a while .
This came about because we originally chose the cheapest most convenient provider , which would have been ok for a while , but stayed with them far too long - moral , well before renewing or expanding , review whether your colo service is up to scratch .</tokentext>
<sentencetext>* N+N redundancy in main supply *and* UPS *and* generators *and* cooling
* security - access to the site, protection of the installed equipment, storing equipment being delivered
* truly diverse fibre as well as sufficient bandwidth with low/no contention
* good peering policies to avoid poor latency to your customers' providers
* good value remote hands - some providers are very rigid about charging for every second you ask them for support, most are helpful with free basic cover
* hidden charges for telephone line installations and cable management
* extra charges for dual WAN uplink
* cost of committed AND "overage" charges
* protection from DoS attacks
* rack depth - will your servers fit?!
* patching infrastructure available for your own use to cable between racks if you expand, what cost?
not all of the following are yes/no decisions BUT should be considered because they can make a big impact on the time taken to get your service up and running

* power arrangements - who provides the PDUs - if buying yourself, will they fit, will you have the right brackets?
* parking - this is not trivial in London and can be a major bugbear if you're delivering equipment
* traffic issues - if you've an emergency and you or the Dell or HP engineer can't get there to fix things without inordinate delays, you might have a growing headache
* onsite facilities when working - drinks and food prep areas, bathrooms, showers even (not needed these but some sites do, if you're doing a major roll-out they could be significant)
* rubbish disposal
* local facilities - eateries, hotels or B&amp;Bs
* level floors - no steps - from loading bay to server floor, lifts etc.
you don't want to be lugging servers up/down stairs!
I visited six significant colo facilities before choosing the one I picked, most had merits, some had minor (to them) problems which would have been a minor but consistent headache to me.
We were recently forced to pull our entire server farm out of an inadequate colo, which cost us big time as we had to duplicate a significant amount of equipment in order to maintain service instead of shutting down and relocating, as well as having to pay two hosting bills in parallel for a while.
This came about because we originally chose the cheapest, most convenient provider, which would have been OK for a while, but stayed with them far too long - moral: well before renewing or expanding, review whether your colo service is up to scratch.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039162</id>
	<title>Re:i ran a junky data center</title>
	<author>NervousWreck</author>
	<datestamp>1257765240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Carpet?! Ouch.  My sympathies. Were there also space heaters?</htmltext>
<tokenext>Carpet ? !
Ouch. My sympathies .
Were there also space heaters ?</tokentext>
<sentencetext>Carpet?!
Ouch.  My sympathies.
Were there also space heaters?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038960</id>
	<title>Organization</title>
	<author>jroc242</author>
	<datestamp>1257764280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>After redundant power, adequate cooling, 24/7 ops, etc., neatness is the most important thing. Making each rack very neat with redundant power bars and individual patch panels is key for the long term. It's when you allow cables to run under the floor and between racks that things get out of control. We have an old-timer who will yell at anyone who does anything messy, and it works out very well.</htmltext>
<tokenext>After redundant power , adequate cooling , 24/7 ops...etc .
Neatness is the most important thing .
Making each rack very neat with redundant power bars and individual patch panels is key for the long term .
Its when you allow cables to run under the floor and between racks that things get out of control .
We have an old timer that will yell at anyone who does anything messy and its works out very well .</tokentext>
<sentencetext>After redundant power, adequate cooling, 24/7 ops...etc.
Neatness is the most important thing.
Making each rack very neat with redundant power bars and individual patch panels is key for the long term.
It's when you allow cables to run under the floor and between racks that things get out of control.
We have an old-timer who will yell at anyone who does anything messy, and it works out very well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30055994</id>
	<title>Read the fine print</title>
	<author>efalk</author>
	<datestamp>1257872160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>While everybody else is suggesting technical things to look at, I'm going to suggest that you look at the legal things instead.</p><p>Read the contract with a magnifying glass.  In fact, have a professional with experience look at it for you.  The contracts often have traps in them that it takes a trained professional to spot.</p><p>For example:  one managed hosting provider had it in their contract that they would replace any dead servers within 2 hours of determining what the problem is.  Catch is, all they had to do was say "we haven't determined what the problem is yet".  Kept our main server offline for nearly a month while the night crew and the day crew argued over what needed to be done.</p><p>Contract said that if you so much as upgraded a disk drive, that re-upped the contract for eighteen months.  Think about that for a bit.  We couldn't afford to extend our contract for another 18 months and we couldn't do a thing about our overloaded server situation.  You also had to give 60 days' notice before the end of the contract or it was automatically renewed.</p><p>Also:  never go with just one provider.  Diversify.  If all your data is in one data center, they can hold it hostage.  Happened to my dad in the mainframe days, happened to me last year.  Some things never change.</p><p>I could go on for hours, but I need to watch my blood pressure.  Or my cholesterol.  Something like that anyway.</p></htmltext>
<tokenext>While everybody else is suggesting technical things to look at , I 'm going to suggest that you look at the legal things instead.Read the contract with a magnifying glass .
In fact , have a professional with experience look at it for you .
The contracts often have traps in them that it takes a trained professional to spot.For example : one managed hosting provider had it in their contract that they would replace any dead servers within 2 hours of determining what the problem is .
Catch is , all they had to do was say " we have n't determined what the problem is yet " .
Kept our main server off line for nearly a month while the night crew and the day crew argued over what needed to be done.Contract said that if you so much as upgraded a disk drive , that re-upped the contract for eighteen months .
Think about that for a bit .
We could n't afford to extend our contract for another 18 months and we could n't do a thing about our overloaded server situation .
You also had to give 60 days notice before the end of the contract or it was automatically renewed.Also : never go with just one provider .
Diversify. If all your data is in one data center , they can hold it hostage .
Happened to my dad in the mainframe days , happened to me last year .
Some things never change.I could go on for hours , but I need to watch my blood pressure .
Or my cholesterol .
Something like that anyway .</tokentext>
<sentencetext>While everybody else is suggesting technical things to look at, I'm going to suggest that you look at the legal things instead.Read the contract with a magnifying glass.
In fact, have a professional with experience look at it for you.
The contracts often have traps in them that it takes a trained professional to spot.For example:  one managed hosting provider had it in their contract that they would replace any dead servers within 2 hours of determining what the problem is.
Catch is, all they had to do was say "we haven't determined what the problem is yet".
Kept our main server offline for nearly a month while the night crew and the day crew argued over what needed to be done. Contract said that if you so much as upgraded a disk drive, that re-upped the contract for eighteen months.
Think about that for a bit.
We couldn't afford to extend our contract for another 18 months and we couldn't do a thing about our overloaded server situation.
You also had to give 60 days notice before the end of the contract or it was automatically renewed.Also:  never go with just one provider.
Diversify.  If all your data is in one data center, they can hold it hostage.
Happened to my dad in the mainframe days, happened to me last year.
Some things never change.I could go on for hours, but I need to watch my blood pressure.
Or my cholesterol.
Something like that anyway.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038602</id>
	<title>past outages</title>
	<author>Anonymous</author>
	<datestamp>1257762660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Make sure you ask about any past outages and how they were handled; I have seen data centre power that failed 4 times in 1 year due to the same problem that was only found on the fourth outage.</htmltext>
<tokenext>Make sure you ask about any past outages and how they were handled , I have seen data centre power that has failed 4 times in 1 year due to the same problem that was only found on the fourth outage .</tokentext>
<sentencetext>Make sure you ask about any past outages and how they were handled; I have seen data centre power that failed 4 times in 1 year due to the same problem that was only found on the fourth outage.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043218</id>
	<title>The things they won't tell you</title>
	<author>Anonymous</author>
	<datestamp>1257844200000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>All the normal stuff is important, but it is the things they won't tell you that are really important.</p><p>I visited a data center that was really easy to commute to.  See, it was above a subway train. Further, it was below a parking garage. Nice.</p><p>I've seen data centers at the end of airplane runways. I've seen data centers in flood zones, hurricane areas, earthquake-prone locations and with roof leaks. Nobody is going to point these things out to you.</p><p>I really like how one DC manager showed off the huge springs that the turbine generators were sitting on to prevent vibration translating to other parts of the building, but they were behind on their battery maintenance schedule.  Doing the simple things well and consistently is key.</p><p>In the end, you can't trust anyone and the safest answer is to spend your money in 2-3 locations. Don't make 1 a "primary."  They all need to be "primary." Mix production, DR, Dev and Test apps across all locations, split them evenly, and keep them 500mi or more apart. Don't have a single supplier for the DCs either.  Consistency is good, unless it isn't. Being different in each location and having different management is a hassle, but probably both teams won't make the same critical mistake that takes your systems down.</p></htmltext>
<tokenext>All the normal stuff is important , but it is the things they wo n't tell you that is really important.I visited a data center that was really easy to commute to .
See , it was above a subway train .
Further , it was below a parking garage .
Nice.I 've seen data centers at the end of airplane runways .
I 've seen data centers in flood zones , hurricane areas , earthquake prone locations and with roof leaks .
Nobody is going to point these things out to you.I really like how one DC manager showed off the huge springs that the turbine generators were sitting on to prevent vibration translating to other parts of the building , but they were behind on their battery maintenance schedule .
Doing the simple things , well and consistently are key.In the end , you ca n't trust anyone and the safest answer is to spend your money in 2-3 locations .
Do n't make 1 a " primary .
" They all need to be " primary .
" Mix production , DR , Dev and Test apps across all locations , split them evenly , and keep them 500mi or more apart .
Do n't have a single supplier for the DCs either .
Consistency is good , unless it is n't .
Being different in each location and having different management is a hassle , but both probably teams wo n't make the same critical mistake that takes your systems down .</tokentext>
<sentencetext>All the normal stuff is important, but it is the things they won't tell you that are really important. I visited a data center that was really easy to commute to.
See, it was above a subway train.
Further, it was below a parking garage.
Nice.I've seen data centers at the end of airplane runways.
I've seen data centers in flood zones, hurricane areas, earthquake prone locations and with roof leaks.
Nobody is going to point these things out to you.I really like how one DC manager showed off the huge springs that the turbine generators were sitting on to prevent vibration translating to other parts of the building, but they were behind on their battery maintenance schedule.
Doing the simple things well and consistently is key. In the end, you can't trust anyone and the safest answer is to spend your money in 2-3 locations.
Don't make 1 a "primary.
"  They all need to be "primary.
" Mix production, DR, Dev and Test apps across all locations, split them evenly, and keep them 500mi or more apart.
Don't have a single supplier for the DCs either.
Consistency is good, unless it isn't.
Being different in each location and having different management is a hassle, but probably both teams won't make the same critical mistake that takes your systems down.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038872</id>
	<title>Simple...</title>
	<author>Anonymous</author>
	<datestamp>1257763920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Read their disaster preparedness plan.  If you can get through it without your BS alarm ringing off the wall or laughing hysterically, there is hope.  You do have a disaster preparedness plan, right?</p></htmltext>
<tokenext>Read their disaster preparedness plan .
If you can get through it without your BS alarm ringing off the wall or laughing hysterically , there is hope .
You do have a disaster preparedness plan , right ?</tokentext>
<sentencetext>Read their disaster preparedness plan.
If you can get through it without your BS alarm ringing off the wall or laughing hysterically, there is hope.
You do have a disaster preparedness plan, right?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038670</id>
	<title>Additional Questions</title>
	<author>Astrobirdr</author>
	<datestamp>1257763020000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>I'd also ask:
<br> <br>
Number of years in business.<br>
Involvement of the owner in the current business.<br>
Number of years the current owner has been in this business.<br>
Also do a check with the Better Business Bureau to see what, if any, complaints have been filed.<br> <br>

And, as always, Google is your friend -- definitely do a search for the business you are considering along with the word(s)  problem, issue, complaint, praise, etc!</htmltext>
<tokenext>I 'd also ask : Number of years in business .
Involvement of the owner in the current business .
Number of years the current owner has been in this business .
Also do a check with the Better Business Bureau to see what , if any , complaints had been filed .
And , as always , Google is your friend -- definitely do a search for the business you are considering along with the word ( s ) problem , issue , complaint , praise , etc !</tokentext>
<sentencetext>I'd also ask:
 
Number of years in business.
Involvement of the owner in the current business.
Number of years the current owner has been in this business.
Also do a check with the Better Business Bureau to see what, if any, complaints had been filed.
And, as always, Google is your friend -- definitely do a search for the business you are considering along with the word(s)  problem, issue, complaint, praise, etc!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039242</id>
	<title>Re:Just off the top of my head</title>
	<author>tempest69</author>
	<datestamp>1257765600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>IMHO..
Check it for the standard disaster scenarios..<br>
Flood... is this place in New Orleans, below sea level, or in Cheyenne, a mile up with 12 inches of moisture a year?<br>

Fire, looting, snow (which can collapse roofs if not shoveled, and leave power outages for days),
tornadoes, hurricanes, earthquakes, landslides, sinkholes, train derailment, proximity to a place someone would like to destroy (the WTC, federal buildings, the killdozer targets). Buildings that can be affected by highway closures.  Close proximity to an airport increases the chances of a plane collision.<br>
See if you can get the full insurance adjustment estimates on the property.  I've heard Detroit is considered the safest area for natural disasters (needs citation).

<br>
Ask rough questions of the management: "If Cogent gets in another spat with Level3, will we lose any connectivity?", "Can a backhoe take out our service?", "If our rival comes into the data center to physically access their boxes, how will our boxes be secure?", "How much wood could a woodchuck chuck..."</htmltext>
<tokenext>IMHO. . Check it for the standard disaster scenarios. . Flood... is this place in new orleans , below sea level , or in Cheyenne , a mile up with 12 inches of moisture a year .
Fire , Looting , Snow ( can collapse roofs if not shoveled , and leave power outages for days ) .
tornadoes , hurricanes , earthquakes , landslides , sinkholes , train derailment , proximity to a place someone would like to destroy ( WTC , Fedral buildings,the killdozer targets ) .
Buildings that can be affected by highway closures .
Close proximity to airport increases chances of plane collision .
See if you can get the full insurance adjustment estimates on the property .
I 've heard Detroit is considered the safest area for natural disasters ( needs citation ) .
ask rough questions of the managment .
" if cogent gets in another spat with level3 , will we lose any connectivity ?
" , " can a backhoe take out out service ?
" , " if our rival comes into the data center to physically access their boxes , how will our boxes be secure ?
" , " how much wood could a wood chuck chuck... "</tokentext>
<sentencetext>IMHO..
Check it for the standard disaster scenarios..
Flood...  is this place in new orleans, below sea level, or in Cheyenne, a mile up with 12 inches of moisture a year.
Fire, Looting, Snow (can collapse roofs if not shoveled, and leave power outages for days).
tornadoes, hurricanes, earthquakes, landslides, sinkholes, train derailment, proximity to a place someone would like to destroy (WTC, Fedral buildings,the killdozer targets).
Buildings that can be affected by highway closures.
Close proximity to airport increases chances of plane collision.
See if you can get the full insurance adjustment estimates on the property.
I've heard Detroit is considered the safest area for natural disasters (needs citation).
ask rough questions of the managment.
"if cogent gets in another spat with level3, will we lose any connectivity?
", "can a backhoe take out out service?
", "if our rival comes into the data center to physically access their boxes, how will our boxes be secure?
","how much wood could a wood chuck chuck..."</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042966</id>
	<title>How to choose a colocation facility</title>
	<author>eprosenx</author>
	<datestamp>1257883500000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>I wrote an extensive article on choosing a datacenter/colocation facility several months back.  The full post can be found on my blog, but I will paste it below for your Slashdot reading convenience:</p><p>http://www.bitplumber.net/2009/04/how-to-choose-a-colocation-facility/</p><p>How to choose a colocation facility</p><p>Choosing a colocation facility is one of the most important decisions an IT professional can make.  It will have repercussions for years down the road, as there is generally a contract term associated, and it becomes difficult/costly to move.  At the same time, unless you are a facilities professional, it is hard to tell the difference between the quality of one facility vs. that of another without knowing the right questions to ask.  I have developed this list in the hopes that it will be a reference for folks evaluating datacenter options.  This has been written using the assumption that you need a local datacenter rather than a DR facility (which can have very different needs); however, many of the same concepts will apply.</p><p>Location<br>When it comes right down to it, there are still certain things you have to do physically in person. You can&rsquo;t run a network cable through SSH or RDP. Having a datacenter close by makes a huge difference, especially when you lose remote connectivity and must go push a button in an emergency (we have all done this once or twice). In general, the newer, more high-end, and redundant your equipment is, the less you should have to touch it in person. Things are getting much better with out-of-band remote access controllers, but sometimes being there is worth a lot. You can&rsquo;t hear that fan making funny noises from your office.<br>Does the facility have good access to transportation such as freeways and airports? Are there hotels nearby if you will have out-of-town contractors visiting? How close to logistics depots are you for your vendor-of-choice's parts, i.e. 
Cisco, Dell, HP, etc.?<br>Does the facility have adequate parking that is close to the building, and does it cost money? Is it somewhere you want to leave your car in the middle of the night while you are inside working?<br>Do you have line-of-sight to the datacenter? If you can manage to get a wireless link to your datacenter, this can be an extremely cost-effective option for high-speed connectivity. There is something to be said for controlling your own destiny when it comes to your connectivity rather than being at the mercy of a telecom provider. Will the facility allow you to put a wireless antenna on the roof, and how much will they charge?</p><p>Staffing<br>Do they have on-site staff 24&#215;7 to respond to emergency situations, to secure the facility, and to provide access when you forget/lose your badge (or have to stop by on your way home from the gym)?<br>If they do not have staff on site 24&#215;7, what is their on-call policy? How long would it take them to respond to a power failure, a UPS exploding, a transformer catching fire in the parking lot, an Internet outage, an FM-200 fire suppression system going off, an HVAC system failing, or any other major malady (yes, I have had all of these things happen to me in facilities I have worked in, and I am still waiting for the day a fire sprinkler goes off or there is a real fire in a datacenter)?<br>What level of professional services can they provide? Basic remote hands (please press the power button)? More advanced troubleshooting (help diagnose a failed network switch)? Or even managed services (i.e., they take care of backups)?<br>How competent are their NOC engineers, facilities folks, etc.? What quality of vendors do they use to do electrical work, HVAC maintenance, network cabling? This can be hard to tell, but there are lots of small clues you can pick up on.<br>Does their staff speak English fluently and without a heavy accent? 
It is extremely difficult to communicate on the phone with someone in a loud datacenter environment about complex technical issues when both of you are having a hard time understanding each other. This dramatically slo</p></htmltext>
<tokenext>I wrote an extensive article on choosing a datacenter/colocation facility several months back .
The full post can be found on my blog , but I will paste it below for your Slashdot reading convenience : http : //www.bitplumber.net/2009/04/how-to-choose-a-colocation-facility/How to choose a colocation facilityChoosing a colocation facility is one of the most important decisions an IT professional can make .
It will have repercussions for years down the road , as there is generally a contract term associated , and it becomes difficult/costly to move .
At the same time , unless you are a facilities professional , it is hard to tell the difference between the quality of one facility vs. that of another without knowing the right questions to ask .
I have developed this list in the hopes that it will be a reference to folks evaluating datacenter options .
This has been written using the assumption that you need a local datacenter rather than a DR facility ( which can have very different needs ) , however , many of the same concepts will apply.LocationWhen it comes right down to it , there are still certain things you have to do physically in person .
You can    t run a network cable through SSH or RDP .
Having a datacenter close by makes a huge difference , especially when you lose remote connectivity and must go push a button in an emergency ( we all have done this once or twice ) .
In general , the newer , more high-end , and redundant your equipment is , the less you should have to touch it in person .
Things are getting much better with out of band remote access controllers , but sometimes being there is worth a lot .
You can    t hear that fan making funny noises from your office.Does the facility have good access to transportation such as freeways and airports ?
Are their hotels nearby if you will have out-of-town contractors visiting ?
How close to logistics depots are you for your vendor-of-choices parts , i.e .
Cisco , Dell , HP , etcDoes the facility have adequate parking that is close to the building , does it cost money ?
Is it somewhere you want to leave your car in the middle of the night while you are inside working ? Do you have line-of-sight to the datacenter ?
If you can manage to get a wireless link to your datacenter this can be an extremely cost-effective option for high speed connectivity .
There is something to be said for controlling your own destiny when it comes to your connectivity rather than being at the mercy of a telecom provider .
Will the facility allow you to put a wireless antenna on the roof and how much will they charge ? StaffingDo they have on-site staff 24   7 to respond to emergency situations , to secure the facility , and to provide access when you forget/loose your badge ( or have to stop by on your way home from the gym ) .If they do not have staff on site 24   7 , what is their on-call policy ?
How long would it take them to respond to a power failure , a UPS exploding , a transformer catching fire in the parking lot , an Internet outage , an FM-200 fire suppression system going off , an HVAC system failing , or any other major malady ( yes I have had all of these things happen to me in facilities I have worked in , and I am still waiting for the day a fire sprinkler goes off or there is a real fire in a datacenter ) .What level of professional services can they provide ?
Basic remote hands ( please press the power button ) ?
More advanced troubleshooting ( help diagnose a failed network switch ) ?
Or even managed services ( i.e .
they take care of backups ) .How competent are their NOC engineers , facilities folks , etc What quality of vendors do they use to do electrical work , HVAC maintenance , network cabling ?
This can be hard to tell , but there are lots of small clues you can pick up on.Does their staff speak English fluently and without heavy accent ?
It is extremely difficult to communicate on the phone with someone in a loud datacenter environment about complex technical issues when both of you are having a hard time understanding each other .
This dramatically slo</tokentext>
<sentencetext>I wrote an extensive article on choosing a datacenter/colocation facility several months back.
The full post can be found on my blog, but I will paste it below for your Slashdot reading convenience:http://www.bitplumber.net/2009/04/how-to-choose-a-colocation-facility/How to choose a colocation facilityChoosing a colocation facility is one of the most important decisions an IT professional can make.
It will have repercussions for years down the road, as there is generally a contract term associated, and it becomes difficult/costly to move.
At the same time, unless you are a facilities professional, it is hard to tell the difference between the quality of one facility vs. that of another without knowing the right questions to ask.
I have developed this list in the hopes that it will be a reference to folks evaluating datacenter options.
This has been written using the assumption that you need a local datacenter rather than a DR facility (which can have very different needs), however, many of the same concepts will apply.LocationWhen it comes right down to it, there are still certain things you have to do physically in person.
You can’t run a network cable through SSH or RDP.
Having a datacenter close by makes a huge difference, especially when you lose remote connectivity and must go push a button in an emergency (we all have done this once or twice).
In general, the newer, more high-end, and redundant your equipment is, the less you should have to touch it in person.
Things are getting much better with out of band remote access controllers, but sometimes being there is worth a lot.
You can’t hear that fan making funny noises from your office.Does the facility have good access to transportation such as freeways and airports?
Are their hotels nearby if you will have out-of-town contractors visiting?
How close to logistics depots are you for your vendor-of-choices parts, i.e.
Cisco, Dell, HP, etcDoes the facility have adequate parking that is close to the building, does it cost money?
Is it somewhere you want to leave your car in the middle of the night while you are inside working?Do you have line-of-sight to the datacenter?
If you can manage to get a wireless link to your datacenter this can be an extremely cost-effective option for high speed connectivity.
There is something to be said for controlling your own destiny when it comes to your connectivity rather than being at the mercy of a telecom provider.
Will the facility allow you to put a wireless antenna on the roof and how much will they charge?StaffingDo they have on-site staff 24×7 to respond to emergency situations, to secure the facility, and to provide access when you forget/loose your badge (or have to stop by on your way home from the gym).If they do not have staff on site 24×7, what is their on-call policy?
How long would it take them to respond to a power failure, a UPS exploding, a transformer catching fire in the parking lot, an Internet outage, an FM-200 fire suppression system going off, an HVAC system failing, or any other major malady (yes I have had all of these things happen to me in facilities I have worked in, and I am still waiting for the day a fire sprinkler goes off or there is a real fire in a datacenter).What level of professional services can they provide?
Basic remote hands (please press the power button)?
More advanced troubleshooting (help diagnose a failed network switch)?
Or even managed services (i.e.
they take care of backups).How competent are their NOC engineers, facilities folks, etc What quality of vendors do they use to do electrical work, HVAC maintenance, network cabling?
This can be hard to tell, but there are lots of small clues you can pick up on.Does their staff speak English fluently and without heavy accent?
It is extremely difficult to communicate on the phone with someone in a loud datacenter environment about complex technical issues when both of you are having a hard time understanding each other.
This dramatically slo</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040320</id>
	<title>Re:an outside air duct</title>
	<author>Anonymous</author>
	<datestamp>1257771240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>When I worked at a corporate office in Maryland, they used the building's air conditioning to cool the server room.</p><p>This worked well until the outside temperature got down to about 15 degrees Fahrenheit, but then it failed miserably: the outdoor condensers no longer functioned, the AC shut down, and the entire IT department went into a panic.</p><p>The first time this happened, I (a lowly Help Desk tech) suggested to the CIO that he run a duct into the room from the outside: a simple fan would bring in enough sub-freezing air to cool the servers.</p><p>The <i>second</i> time it happened, the look on his face told me he hadn't taken my suggestion seriously enough.</p><p>The <b>third</b> time, he flipped a switch and the fan cooled his server room just fine.</p></div><p>The energy efficiency of the original solution is so bad that it should be CRIMINAL.</p><p>You not only provided a simpler solution that will work, you saved him a BUNCH of money.</p><p>Long live free cooling!</p>
	</htmltext>
<tokenext>When I worked at a corporate office in Maryland , they used the building 's air conditioning to cool the server room.This worked well until the outside temperature got down to about 15 degrees Fahrenheit , but then it failed miserably : the outdoor condensers no longer functioned , the AC shut down , and the entire IT department went into a panic.The first time this happened , I ( a lowly Help Desk tech ) suggested to the CIO that he run a duct into the room from the outside : a simple fan would bring in enough sub-freezing air to cool the servers.The second time it happened , the look on his face told me he had n't taken my suggestion seriously enough.The third time , he flipped a switch and the fan cooled his server room just fine.the energy efficiency of the original solution is so bad that it should be CRIMINALyou not only provided a simpler solution that will work you saved him a BUNCH of moneylong live free cooling</tokentext>
<sentencetext>When I worked at a corporate office in Maryland, they used the building's air conditioning to cool the server room.This worked well until the outside temperature got down to about 15 degrees Fahrenheit, but then it failed miserably: the outdoor condensers no longer functioned, the AC shut down, and the entire IT department went into a panic.The first time this happened, I (a lowly Help Desk tech) suggested to the CIO that he run a duct into the room from the outside: a simple fan would bring in enough sub-freezing air to cool the servers.The second time it happened, the look on his face told me he hadn't taken my suggestion seriously enough.The third time, he flipped a switch and the fan cooled his server room just fine.the energy efficiency of the original solution is so bad that it should be CRIMINALyou not only provided a simpler solution that will work you saved him a BUNCH of moneylong live free cooling
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038</parent>
</comment>
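The free-cooling story above rests on a simple sensible-heat relation (Q = &rho; &middot; c_p &middot; flow &middot; &Delta;T for air). A rough sketch of the sizing arithmetic; the 20 kW load and 30 K temperature rise are illustrative assumptions, not figures from the story:

```python
# Back-of-the-envelope free-cooling check: how much outside air must a
# fan move to carry away a given server heat load?
# Sensible heat: Q (kW) = rho * cp * flow * dT
RHO_AIR = 1.2    # kg/m^3, air density near sea level
CP_AIR = 1.005   # kJ/(kg*K), specific heat of air

def airflow_m3s(load_kw: float, delta_t_k: float) -> float:
    """Airflow (m^3/s) needed to absorb load_kw with a delta_t_k air-temperature rise."""
    return load_kw / (RHO_AIR * CP_AIR * delta_t_k)

# Example: a 20 kW room fed 15 F (-9 C) outside air, exhausted ~30 K warmer.
flow = airflow_m3s(20.0, 30.0)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")
```

With sub-freezing intake air the required flow is modest, which is why "a simple fan" was enough in the story.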
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039764</id>
	<title>One Server at a time</title>
	<author>Anonymous</author>
	<datestamp>1257768540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>All line up and evacuate slowly.</p></htmltext>
<tokenext>All line up and evacuate slowly .</tokentext>
<sentencetext>All line up and evacuate slowly.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040966</id>
	<title>Re:Just off the top of my head</title>
	<author>orlanz</author>
	<datestamp>1257775020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Ask for a SAS 70; it will answer many questions and, at a minimum, give you a very good starting point.  After reading that, you might want to fill in the gaps where your own risk conclusions differ from the auditors' via a visit.  Additionally, I would look at the backup solutions they have, and if they have an offsite solution, then do a quick visit over there too.  Also, find out how long it would take to come back online from offsite backups if Murphy strikes.  I think something that a lot of people don't consider is expansion.  See how much of their capacity they are already using up, how fast they can grow with your demands, and how easy/hard it would be to move out.

-- Ex-Auditor.</htmltext>
<tokenext>Ask for a SAS70 , it will answer many questions , and at a minimum give you a very good starting point .
After reading that , you might want to fill in the gaps where your own risk conclusions differ from the auditors via a visit .
Additionally , I would look at the backup solutions they have , and if they have an offsite solution , then do a quick visit over there too .
Also , find out how long it would take to come back online from offsite backups if Murphy strikes .
I think something that a lot of people do n't consider is expansion .
See how much of their capacity they are already using up , how fast they can grow with your demands , and how easy/hard it would be to move out .
-- Ex-Auditor .</tokentext>
<sentencetext>Ask for a SAS70, it will answer many questions, and at a minimum give you a very good starting point.
After reading that, you might want to fill in the gaps where your own risk conclusions differ from the auditors via a visit.
Additionally, I would look at the backup solutions they have, and if they have an offsite solution, then do a quick visit over there too.
Also, find out how long it would take to come back online from offsite backups if Murphy strikes.
I think something that a lot of people don't consider is expansion.
See how much of their capacity they are already using up, how fast they can grow with your demands, and how easy/hard it would be to move out.
-- Ex-Auditor.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040942</id>
	<title>Simple answer</title>
	<author>Anonymous</author>
	<datestamp>1257774960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A successful data center has no single points of failure.  Make sure everything is triple redundant, with a bypass, without dropping the load.</p></htmltext>
<tokenext>A successful data center has no single points of failure .
Make sure everything is triple redundant with a by without dropping the load .</tokentext>
<sentencetext>A successful data center has no single points of failure.
Make sure everything is triple redundant with a by without dropping the load.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039980</id>
	<title>raised floor</title>
	<author>Anonymous</author>
	<datestamp>1257769560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><div class="quote"><p>Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors.</p></div><p>Hogwash.</p></div><p>Why? You can't put as much weight on a raised floor as you can on plain concrete.</p><p>Further, if you're doing cold- or hot-aisle containment, you can do it without the need for a raised floor. There are plenty of in-row cooling options so that you can put the cooling in the places you need it (either beside the racks, or on top):</p><p>http://www.42u.com/cooling/in-row-cooling/in-row-cooling.htm</p><p>There's even in-rack cooling:</p><p>http://www.42u.com/cooling/in-rack-cooling/in-rack-cooling.htm</p><p>Or, pipe the water (or refrigerant) straight to the aisle and put it in a door-based cooling system which is rated to 35 kW per rack:</p><p>http://www.sun.com/servers/cooling/</p><p>The above systems also don't have fans, so you don't have to worry about maintenance on them--just the overall circulation system.</p><p>Not quite sure what the GP means by the efficiency of concrete floors, but there are certainly better systems than general circulation air via raised floors.</p>
	</htmltext>
<tokenext>Newer datacenters do n't have raised floors because it is more energy efficient to have concrete floors.Hogwash.Why ?
You ca n't put as much weight on a raise floor as you can on plain concrete.Further , if you 're doing cold- or hot-aisle containment , you can do it without the need to do a raised floor .
There are plenty of in-row cooling options so that you can put the cooling in the places you need it ( either beside the racks , or on top ) :                 http : //www.42u.com/cooling/in-row-cooling/in-row-cooling.htmThere 's even in-rack cooling :                 http : //www.42u.com/cooling/in-rack-cooling/in-rack-cooling.htmOr , pipe the water ( or refrigerant ) straight to the aisle and put it in door-based cooling system which is rated to 35 kW per rack :                 http : //www.sun.com/servers/cooling/The above systems also do n't have fans , so you do n't have to worry about maintenance on it--just the overall circulation system.Not quite sure what the GP mean by the efficiency of concrete floors , but there are certainly better systems than general circulation air via raised floors .</tokentext>
<sentencetext>Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors.Hogwash.Why?
You can't put as much weight on a raise floor as you can on plain concrete.Further, if you're doing cold- or hot-aisle containment, you can do it without the need to do a raised floor.
There are plenty of in-row cooling options so that you can put the cooling in the places you need it (either beside the racks, or on top):
                http://www.42u.com/cooling/in-row-cooling/in-row-cooling.htmThere's even in-rack cooling:
                http://www.42u.com/cooling/in-rack-cooling/in-rack-cooling.htmOr, pipe the water (or refrigerant) straight to the aisle and put it in door-based cooling system which is rated to 35 kW per rack:
                http://www.sun.com/servers/cooling/The above systems also don't have fans, so you don't have to worry about maintenance on it--just the overall circulation system.Not quite sure what the GP mean by the efficiency of concrete floors, but there are certainly better systems than general circulation air via raised floors.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30052664</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257852540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>8 feet?!  What beast lurks beneath that?  God forbid you move a tile off and someone falls \_in\_.</htmltext>
<tokenext>8 feet ? !
What beast lurks beneath that ?
God forbid you move a tile off and someone falls \ _in \ _ .</tokentext>
<sentencetext>8 feet?!
What beast lurks beneath that?
God forbid you move a tile off and someone falls \_in\_.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038868</id>
	<title>Do not jump in with both feet</title>
	<author>Jailbrekr</author>
	<datestamp>1257763920000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>Regardless of how well they are decked out, always start with a "pilot project". Start small for a short period to evaluate the real-world performance of both their equipment and their tech support. We currently have a pilot project in place to evaluate a datacentre for outsourcing our compute requirements. We have learned that while they have exceptionally good equipment in place, their responsiveness and ability to provision are highly questionable.</p></htmltext>
<tokenext>Regardless of how well they are decked out , always start with a " pilot project " .
Start small for a short period to evaluate real world performance of both their equipment and their tech support .
We currently have a pilot project in place to evaluate a datacentre for outsourcing our compute requirements .
We have learned that while they have exceptionally good equipment in place , their responsiveness and ability to provision is highly questionable .</tokentext>
<sentencetext>Regardless of how well they are decked out, always start with a "pilot project".
Start small for a short period to evaluate real world performance of both their equipment and their tech support.
We currently have a pilot project in place to evaluate a datacentre for outsourcing our compute requirements.
We have learned that while they have exceptionally good equipment in place, their responsiveness and ability to provision is highly questionable.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30053658</id>
	<title>Re:Just off the top of my head</title>
	<author>SuperQ</author>
	<datestamp>1257857940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wow, really?  Those are the best examples of datacenters you can find?  Not a single mention of PUE at all.  Lots of fancy talk and equipment but no real numbers.</p><p>Cisco's datacenter consumes 10MW.  If it has a PUE of 1.50 (good for that kind of design) that's only about 6.6MW for racks.</p><p>If this were a <a href="http://www.datacenterknowledge.com/archives/2009/10/15/google-efficiency-update-pue-of-1-22/" title="datacenterknowledge.com">Google datacenter</a> [datacenterknowledge.com] with a PUE of 1.22, that's 8.2MW for racks.</p><p>If Cisco can only maintain a PUE of 1.5 for that design, they're wasting 1.6MW of power.  That's about 13670MWh per year, which at $0.08/kWh is about 1.1 million dollars a year.</p></htmltext>
<tokenext>Wow really ?
That 's the best examples of datacenters you can find ?
Not a single mention of PUE at all .
Lots of fancy talk and equipment but no real numbers . Cisco 's datacenter consumes 10MW .
If it has a PUE of 1.50 ( good for that kind of design ) that 's only about 6.6MW for racks . If this were a Google datacenter [ datacenterknowledge.com ] with a PUE of 1.22 that 's 8.2MW for racks . If Cisco can maintain a PUE of 1.5 for that design , they 're wasting 1.6MW of power .
That 's about 13670MWh per year , which at $ 0.08/kWh is about 1.1 million dollars a year .</tokentext>
<sentencetext>Wow really?
That's the best examples of datacenters you can find?
Not a single mention of PUE at all.
Lots of fancy talk and equipment but no real numbers. Cisco's datacenter consumes 10MW.
If it has a PUE of 1.50 (good for that kind of design) that's only about 6.6MW for racks. If this were a Google datacenter [datacenterknowledge.com] with a PUE of 1.22 that's 8.2MW for racks. If Cisco can maintain a PUE of 1.5 for that design, they're wasting 1.6MW of power.
That's about 13,670MWh per year, which at $0.08/kWh is about 1.1 million dollars a year.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042048</parent>
</comment>
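The PUE arithmetic in the comment above can be checked with a short back-of-the-envelope sketch. The 10MW draw, the two PUE figures, and the $0.08/kWh rate come from the comment; the 8,760 hours/year figure is the standard calendar assumption, not from the thread.

```python
# Back-of-the-envelope check of the PUE cost arithmetic quoted above.
# Assumptions: 10 MW total facility draw, 8760 hours/year, $0.08/kWh.
total_mw = 10.0
pue_cisco = 1.50   # PUE attributed to the Cisco design
pue_google = 1.22  # PUE reported for a Google facility

# PUE = total facility power / IT (rack) power, so rack power = total / PUE.
it_load_cisco = total_mw / pue_cisco    # ~6.67 MW reaching the racks
it_load_google = total_mw / pue_google  # ~8.20 MW reaching the racks

# Power that a 1.22-PUE facility would deliver to racks but a 1.50-PUE one wastes.
wasted_mw = it_load_google - it_load_cisco           # ~1.53 MW
wasted_mwh_per_year = wasted_mw * 8760               # ~13,400 MWh/year
cost_per_year = wasted_mwh_per_year * 1000 * 0.08    # ~$1.07M at $0.08/kWh

print(f"{it_load_cisco:.2f} MW vs {it_load_google:.2f} MW, ~${cost_per_year:,.0f}/year wasted")
```

The result lands close to the comment's "about 1.1 million dollars a year", confirming the commenter's rounding.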
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039148</id>
	<title>Re:i ran a junky data center</title>
	<author>Nefarious Wheel</author>
	<datestamp>1257765180000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>A smattering of basic physics helps.</p><p>Long ago in a distribution centre far far away - well, east SF bay, anyway - we had a custom mini doing a bit of work for a major retail store chain's logistics business.  In the warehouse they built a little room for the mini upstairs, everything cheap but per spec, they insisted.  They used one of their domestic air conditioners for the cooling, as it had the right thermal rating to match the heat dissipation we required for our gear.  Cool, we said - no problem, cheap is ok as long as it's specced correctly.</p><p>It wasn't long before we had a service call for a hardware failure.  Sent the engineer out, and it was about 110 in the computer room.  They'd installed the air intake and air outflow of the air conditioner in the same tiny room.</p>
	</htmltext>
<tokenext>A smattering of basic physics helps . Long ago in a distribution centre far far away - well , east SF bay , anyway - we had a custom mini doing a bit of work for a major retail store chain 's logistics business .
In the warehouse they built a little room for the mini upstairs , everything cheap but per spec , they insisted .
They used one of their domestic air conditioners for the cooling , as it had the right thermal rating to match the heat dissipation we required for our gear .
Cool , we said - no problem , cheap is ok as long as it 's specced correctly . It was n't long before we had a service call for a hardware failure .
Sent the engineer out , and it was about 110 in the computer room .
They 'd installed the air intake and air outflow of the air conditioner in the same tiny room .</tokentext>
<sentencetext>A smattering of basic physics helps. Long ago in a distribution centre far far away - well, east SF bay, anyway - we had a custom mini doing a bit of work for a major retail store chain's logistics business.
In the warehouse they built a little room for the mini upstairs, everything cheap but per spec, they insisted.
They used one of their domestic air conditioners for the cooling, as it had the right thermal rating to match the heat dissipation we required for our gear.
Cool, we said - no problem, cheap is ok as long as it's specced correctly. It wasn't long before we had a service call for a hardware failure.
Sent the engineer out, and it was about 110 in the computer room.
They'd installed the air intake and air outflow of the air conditioner in the same tiny room.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30131332</id>
	<title>Another important topic</title>
	<author>Anonymous</author>
	<datestamp>1258484100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There are many good points above about infrastructure and convenience; all should be considered when choosing a data center.</p><p>However, there is another point which should be considered: how well the company is run from an operation/core values standpoint.  You can have the coolest features in the industry, but if the culture of the company doesn't invoke 100% accountability throughout the entire organization it doesn't mean much.  Make sure that whatever company you choose has the correct values from all aspects of the business, not just data center functionality.</p><p>http://www.onlinetech.com/company/core_values/</p><p>Geographic location is also a necessary component of your choice.  The Midwest is typically the best choice for a number of reasons.  Here are a few:</p><p>http://www.onlinetech.com/company/company_overview/</p><p>Jason Yaeger<br>Online Tech, Operations Manager</p></htmltext>
<tokenext>There are many good points above about infrastructure and convenience ; all should be considered when choosing a data center . However , there is another point which should be considered : how well the company is run from an operation/core values standpoint .
You can have the coolest features in the industry , but if the culture of the company does n't invoke 100 % accountability throughout the entire organization it does n't mean much .
Make sure that whatever company you choose has the correct values from all aspects of the business , not just data center functionality . http://www.onlinetech.com/company/core_values/ Geographic location is also a necessary component of your choice .
The Midwest is typically the best choice for a number of reasons .
Here are a few : http://www.onlinetech.com/company/company_overview/ Jason Yaeger , Online Tech , Operations Manager</tokentext>
<sentencetext>There are many good points above about infrastructure and convenience; all should be considered when choosing a data center. However, there is another point which should be considered: how well the company is run from an operation/core values standpoint.
You can have the coolest features in the industry, but if the culture of the company doesn't invoke 100% accountability throughout the entire organization it doesn't mean much.
Make sure that whatever company you choose has the correct values from all aspects of the business, not just data center functionality. http://www.onlinetech.com/company/core_values/ Geographic location is also a necessary component of your choice.
The Midwest is typically the best choice for a number of reasons.
Here are a few: http://www.onlinetech.com/company/company_overview/ Jason Yaeger, Online Tech, Operations Manager</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039508</id>
	<title>References, references, references</title>
	<author>trippd6</author>
	<datestamp>1257766860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The quality of a datacenter has less to do with the equipment (although that's important), and more to do with who designed and is running the equipment.</p><p>Most of the datacenter outages I have been a part of in one way or another (Customer, or Provider) have been caused by:</p><p>Poor planning<br>Human Error<br>Poor design</p><p>As a normal customer, there is no way to know if any of these problems exist. The solution? Ask for references that utilize that datacenter. Make sure they don't give you a customer that utilizes another data center from the same provider. Data center design varies greatly, even across the same provider. Ask that reference how long they have been there, how many problems they have had, and the company's response to those issues. Look for a customer with a long history in that data center (3+ years, 5 would be better).</p><p>Don't rule out a data center because they had an outage. Outages will happen, no matter how redundant their systems are. Their response to it is very important. If you find out about a previous outage, ask to see the root cause analysis they provided their customers. If they can't or won't produce it, even under NDA, then walk away.</p></htmltext>
<tokenext>The quality of a datacenter has less to do with the equipment ( although that 's important ) , and more to do with who designed and is running the equipment . Most of the datacenter outages I have been a part of in one way or another ( Customer , or Provider ) have been caused by : poor planning , human error , poor design . As a normal customer , there is no way to know if any of these problems exist .
The solution ?
Ask for references that utilize that datacenter .
Make sure they do n't give you a customer that utilizes another data center from the same provider .
Data center design varies greatly , even across the same provider .
Ask that reference how long they have been there , how many problems they have had , and the company 's response to those issues .
Look for a customer with a long history in that data center ( 3 + years , 5 would be better ) . Do n't rule out a data center because they had an outage .
Outages will happen , no matter how redundant their systems are .
Their response to it is very important .
If you find out about a previous outage , ask to see the root cause analysis they provided their customers .
If they ca n't or wo n't produce it , even under NDA , then walk away .</tokentext>
<sentencetext>The quality of a datacenter has less to do with the equipment (although that's important), and more to do with who designed and is running the equipment. Most of the datacenter outages I have been a part of in one way or another (Customer, or Provider) have been caused by: poor planning, human error, poor design. As a normal customer, there is no way to know if any of these problems exist.
The solution?
Ask for references that utilize that datacenter.
Make sure they don't give you a customer that utilizes another data center from the same provider.
Data center design varies greatly, even across the same provider.
Ask that reference how long they have been there, how many problems they have had, and the company's response to those issues.
Look for a customer with a long history in that data center (3+ years, 5 would be better). Don't rule out a data center because they had an outage.
Outages will happen, no matter how redundant their systems are.
Their response to it is very important.
If you find out about a previous outage, ask to see the root cause analysis they provided their customers.
If they can't or won't produce it, even under NDA, then walk away.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038890</id>
	<title>What do you need?</title>
	<author>Anonymous</author>
	<datestamp>1257763980000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>What does your company _NEED_?  How much bandwidth do you need? What kind of servers do you need? Are you looking for Co-Lo or Dedicated?  If you're doing Co-Lo, how much power and space do you need? If you're doing dedicated, do you need managed or unmanaged? PCI compliance? HIPAA compliance? Do you want to pay for certain redundancies? Do you need an Uptime Institute Tier certified facility?

I could go on and on. The one thing that you need consistently is good customer service. The rest depends on what you need.


Full Disclosure: I work for one of the biggest privately held dedicated hosting companies on   the planet.</htmltext>
<tokenext>What does your company _NEED_ ?
How much bandwidth do you need ?
What kind of servers do you need ?
Are you looking for Co-Lo or Dedicated ?
If you 're doing Co-Lo , how much power and space do you need ?
If you 're doing dedicated , do you need managed or unmanaged ?
PCI compliance ?
HIPAA compliance ?
Do you want to pay for certain redundancies ?
Do you need an Uptime Institute Tier certified facility ?
I could go on and on .
The one thing that you need consistently is good customer service .
The rest depends on what you need .
Full Disclosure : I work for one of the biggest privately held dedicated hosting companies on the planet .</tokentext>
<sentencetext>What does your company _NEED_?
How much bandwidth do you need?
What kind of servers do you need?
Are you looking for Co-Lo or Dedicated?
If you're doing Co-Lo, how much power and space do you need?
If you're doing dedicated, do you need managed or unmanaged?
PCI compliance?
HIPAA compliance?
Do you want to pay for certain redundancies?
Do you need an Uptime Institute Tier certified facility?
I could go on and on.
The one thing that you need consistently is good customer service.
The rest depends on what you need.
Full Disclosure: I work for one of the biggest privately held dedicated hosting companies on   the planet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038548</id>
	<title>Get this out of the way</title>
	<author>Anonymous</author>
	<datestamp>1257762540000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Libraries of Congress per second.</p></htmltext>
<tokenext>Libraries of Congress per second .</tokentext>
<sentencetext>Libraries of Congress per second.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039174</id>
	<title>Re:i ran a junky data center</title>
	<author>Anonymous</author>
	<datestamp>1257765300000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>I think "data center carpet" should be a new slashdot meme.  I can not stop laughing at how ridiculous that "data center" must have looked with that carpet.  Please tell me that it was the baby poo green shag carpet from the 70's.  That would really make it feature complete.</htmltext>
<tokenext>I think " data center carpet " should be a new slashdot meme .
I can not stop laughing at how ridiculous that " data center " must have looked with that carpet .
Please tell me that it was the baby poo green shag carpet from the 70 's .
That would really make it feature complete .</tokentext>
<sentencetext>I think "data center carpet" should be a new slashdot meme.
I can not stop laughing at how ridiculous that "data center" must have looked with that carpet.
Please tell me that it was the baby poo green shag carpet from the 70's.
That would really make it feature complete.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040028</id>
	<title>Evaluation of Data Centers is nothing new...</title>
	<author>Anonymous</author>
	<datestamp>1257769800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why re-invent the wheel?</p><p>Yes, it's People - Process - Technology... there are audit / checklist standards (multiple) for each area.</p><p>Search for SysAdmin SA-BOK-0500.pdf and you'll find a good overall checklist. SANS Institute has a physical security checklist as well.</p><p>Beyond that you have the super serious audit things like SAS-70 or ISO-17799 / ISO-27000 or ISO-20000 or COBIT or ITIL... things that actually have outside audit standards / agencies.</p><p>If you want informal checklists for your own review, SA-BOK-0500 or SANS Institute is good. If you really want to do a proper / formal thing, inquire as to their SAS-70 and ISO-20000 (ITIL) compliance. Ask to see copies of the latest audit under NDA.</p></htmltext>
<tokenext>Why re-invent the wheel ? Yes , it 's People - Process - Technology ... there are audit / checklist standards ( multiple ) for each area . Search for SysAdmin SA-BOK-0500.pdf and you 'll find a good overall checklist .
SANS Institute has a physical security checklist as well . Beyond that you have the super serious audit things like SAS-70 or ISO-17799 / ISO-27000 or ISO-20000 or COBIT or ITIL ... things that actually have outside audit standards / agencies . If you want informal checklists for your own review , SA-BOK-0500 or SANS Institute is good .
If you really want to do a proper / formal thing , inquire as to their SAS-70 and ISO-20000 ( ITIL ) compliance .
Ask to see copies of latest audit under NDA .</tokentext>
<sentencetext>Why re-invent the wheel? Yes, it's People - Process - Technology... there are audit / checklist standards (multiple) for each area. Search for SysAdmin SA-BOK-0500.pdf and you'll find a good overall checklist.
SANS Institute has a physical security checklist as well. Beyond that you have the super serious audit things like SAS-70 or ISO-17799 / ISO-27000 or ISO-20000 or COBIT or ITIL... things that actually have outside audit standards / agencies. If you want informal checklists for your own review, SA-BOK-0500 or SANS Institute is good.
If you really want to do a proper / formal thing, inquire as to their SAS-70 and ISO-20000 (ITIL) compliance.
Ask to see copies of latest audit under NDA.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039704</id>
	<title>an important suggestion</title>
	<author>Eil</author>
	<datestamp>1257768120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Unless your servers absolutely must be local, one of the most important factors should be local climate and environmental risk. I've worked in a couple datacenters in Michigan and it's really ideal:</p><p>* No state-wide forest fires<br>* No flooding if you're above the flood plain<br>* No hurricanes<br>* Very few tornadoes</p><p>On top of that, if the AC units should spontaneously fail all at once, 99% of the time you can just open up all the doors and run a couple of large fans to keep things cool enough to run.</p></htmltext>
<tokenext>Unless your servers absolutely must be local , one of the most important factors should be local climate and environmental risk .
I 've worked in a couple datacenters in Michigan and it 's really ideal : * No state-wide forest fires * No flooding if you 're above the flood plain * No hurricanes * Very few tornadoes . On top of that , if the AC units should spontaneously fail all at once , 99 % of the time you can just open up all the doors and run a couple of large fans to keep things cool enough to run .</tokentext>
<sentencetext>Unless your servers absolutely must be local, one of the most important factors should be local climate and environmental risk.
I've worked in a couple datacenters in Michigan and it's really ideal: no state-wide forest fires; no flooding if you're above the flood plain; no hurricanes; very few tornadoes. On top of that, if the AC units should spontaneously fail all at once, 99% of the time you can just open up all the doors and run a couple of large fans to keep things cool enough to run.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039072</id>
	<title>do they let me tug my nuts?</title>
	<author>Anonymous</author>
	<datestamp>1257764820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>mmmmmmmmm raw</p></htmltext>
<tokenext>mmmmmmmmm raw</tokentext>
<sentencetext>mmmmmmmmm raw</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042048</id>
	<title>Re:Just off the top of my head</title>
	<author>Critical Facilities</author>
	<datestamp>1257784980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>But please, continue to refute my statement with clear, unsupported, single-word denials. They carry so much weight in an argument.</p></div><p><a href="http://www.datacenterknowledge.com/archives/2009/11/05/indianas-new-supercomputing-center/" title="datacenterknowledge.com">Yeah</a> [datacenterknowledge.com], you're probably right.  I mean, <a href="http://www.datacenterknowledge.com/archives/2009/10/27/interactive-tour-ciscos-flagship-data-center/" title="datacenterknowledge.com">no one</a> [datacenterknowledge.com] is putting in raised floor environments anymore.  I don't know <a href="http://www.datacenterknowledge.com/archives/2009/10/27/quality-tech-gets-150m-investment/" title="datacenterknowledge.com">what I was thinking</a> [datacenterknowledge.com].</p><p>Quote all you want.  I run an enterprise data center, and I can tell you that raised floor is certainly NOT dead.</p>
	</htmltext>
<tokenext>But please , continue to refute my statement with clear , unsupported , single-word denials .
They carry so much weight in an argument .
Yeah [ datacenterknowledge.com ] , you 're probably right .
I mean , no one [ datacenterknowledge.com ] is putting in raised floor environments anymore .
I do n't know what I was thinking [ datacenterknowledge.com ] .
Quote all you want .
I run an enterprise data center , and I can tell you that raised floor is certainly NOT dead .</tokentext>
<sentencetext>But please, continue to refute my statement with clear, unsupported, single-word denials.
They carry so much weight in an argument.
Yeah [datacenterknowledge.com],  you're probably right.
I mean,  no one [datacenterknowledge.com] is putting in raised floor environments anymore.
I don't know  what I was thinking [datacenterknowledge.com].
Quote all you want.
I run an enterprise data center,  and I can tell you that raised floor is certainly NOT dead.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040480</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043028</id>
	<title>the quality of the toilet paper</title>
	<author>brak</author>
	<datestamp>1257884340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>if they care enough for you to have an extraordinary experience when you have to use the restroom, that means they really care.</p></htmltext>
<tokenext>if they care enough for you to have an extraordinary experience when you have to use the restroom , that means they really care .</tokentext>
<sentencetext>if they care enough for you to have an extraordinary experience when you have to use the restroom, that means they really care.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038974</id>
	<title>Don't forget the non-technical bits</title>
	<author>petes_PoV</author>
	<datestamp>1257764340000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Such as street access. Is there more than one way in? If the access road was closed off (police incident, subsidence, civil unrest - depending where it's sited), what would happen? Could staff get to work, or leave for home?
<br>
Ease of recruiting / retaining sufficiently qualified staff in the locale, or persuading your staff to commute or relocate.
<br>
Are the on-site restaurant / canteen or local eateries likely to give everyone food poisoning? (This could be a single point of failure.)
<br>
Local crime rate - the number of times the facility has been broken in to; even the amount of graffiti on the walls could be a negative indicator.</htmltext>
<tokenext>Such as street access .
Is there more than one way in ? If the access road was closed off ( police incident , subsidence , civil unrest - depending where it 's sited ) , what would happen ?
Could staff get to work , or leave for home ?
Ease of recruiting / retaining sufficiently qualified staff in the locale , or persuading your staff to commute or relocate . Are the on-site restaurant / canteen or local eateries likely to give everyone food poisoning ( this could be a single point of failure ) ? Local crime rate - the number of times the facility has been broken in to ; even the amount of graffiti on the walls could be a negative indicator</tokentext>
<sentencetext>Such as street access.
Is there more than one way in? If the access road was closed off (police incident, subsidence, civil unrest - depending where it's sited), what would happen?
Could staff get to work, or leave for home?
Ease of recruiting / retaining sufficiently qualified staff in the locale, or persuading your staff to commute or relocate.

Are the on-site restaurant / canteen or local eateries likely to give everyone food poisoning? (This could be a single point of failure.)

Local crime rate - the number of times the facility has been broken in to; even the amount of graffiti on the walls could be a negative indicator.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042182</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257786420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Man Traps?  Nightly security audits?  What type of stuff is on those servers?  It seems like overkill for a typical business setup.</p></htmltext>
<tokenext>Man Traps ?
Nightly security audits ?
What type of stuff is on those servers ?
It seems like overkill for a typical business setup .</tokentext>
<sentencetext>Man Traps?
Nightly security audits?
What type of stuff is on those servers?
It seems like overkill for a typical business setup.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045784</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257869700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Raised floor height - high enough to hide in when your boss comes looking for you<br>Cooling Capacity - should take 'er all the way down to about 36F to keep your beer cold<br>Security - You should be the ONLY one who can get in or out of everything. If there is a key or a card (except yours) that gets into everything, start changing the locks.<br>Power Quality - this is why you need a 40 inch Plasma TV in there. If that flickers you need some power filters.</p><p>Assess other criteria in the same vein.</p></htmltext>
<tokenext>Raised floor height - high enough to hide in when your boss comes looking for you . Cooling Capacity - should take 'er all the way down to about 36F to keep your beer cold . Security - You should be the ONLY one who can get in or out of everything .
If there is a key or a card ( except yours ) that gets into everything , start changing the locks . Power Quality - this is why you need a 40 inch Plasma TV in there .
If that flickers you need some power filters . Assess other criteria in the same vein .</tokentext>
<sentencetext>Raised floor height - high enough to hide in when your boss comes looking for you. Cooling Capacity - should take 'er all the way down to about 36F to keep your beer cold. Security - You should be the ONLY one who can get in or out of everything.
If there is a key or a card (except yours) that gets into everything, start changing the locks. Power Quality - this is why you need a 40 inch Plasma TV in there.
If that flickers you need some power filters. Assess other criteria in the same vein.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041158</id>
	<title>Re:Get this out of the way</title>
	<author>Anonymous</author>
	<datestamp>1257776580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Libraries of Congress per second.</p></div><p>Maybe not this figure, but it is definitely good to get statistics on load, figure out what all the systems are doing, etc.</p><p>In the datacenter that I work in, there is a lot of deadweight, and even systems that are powered on and doing nothing!</p><p>Other systems are being hammered, and several of our tasks could *probably* have their load redistributed across some of the dead weight systems.</p>
	</htmltext>
<tokenext>Libraries of Congress per second . Maybe not this figure , but it is definitely good to get statistics on load , figure out what all the systems are doing , etc .
In the datacenter that I work in , there is a lot of deadweight , and even systems that are powered on , and doing nothing !
Other systems are being hammered , and several of our tasks could * probably * have their load redistributed across some of the dead weight systems .</tokentext>
<sentencetext>Libraries of Congress per second. Maybe not this figure, but it is definitely good to get statistics on load, figure out what all the systems are doing, etc.
In the datacenter that I work in, there is a lot of deadweight, and even systems that are powered on, and doing nothing!
Other systems are being hammered, and several of our tasks could *probably* have their load redistributed across some of the dead weight systems.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038548</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988</id>
	<title>Re:Just off the top of my head</title>
	<author>Critical Facilities</author>
	<datestamp>1257764400000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>you state "Raised Floor Height". What is good?</p></div><p>24" is good,  36" is better.  I once had a place with 8'0".</p><div class="quote"><p>Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors.</p></div><p>Hogwash.</p><div class="quote"><p>"Cooling Capacity" -- what's good and what is bad? How is this measured?</p></div><p>Capacity is measured in BTUs,  or specifically tons (12,000 BTUs to a ton).  What's most important is the relationship between BTUs and kW consumption.  In a nutshell,  how much heat can you remove from the building vs how much are you putting in?</p>
	</htmltext>
<tokenext>you state " Raised Floor Height " .
What is good ? 24 " is good , 36 " is better .
I once had a place with 8'0 " . Newer datacenters do n't have raised floors because it is more energy efficient to have concrete floors . Hogwash . " Cooling Capacity " -- what 's good and what is bad ?
How is this measured ? Capacity is measured in BTUs , or specifically tons ( 12,000 BTUs to a ton ) .
What 's most important is the relationship between BTUs and kW consumption .
In a nutshell , how much heat can you remove from the building vs how much are you putting in ?</tokentext>
<sentencetext>you state "Raised Floor Height".
What is good? 24" is good, 36" is better.
I once had a place with 8'0". Newer datacenters don't have raised floors because it is more energy efficient to have concrete floors. Hogwash. "Cooling Capacity" -- what's good and what is bad?
How is this measured? Capacity is measured in BTUs, or specifically tons (12,000 BTUs to a ton).
What's most important is the relationship between BTUs and kW consumption.
In a nutshell,  how much heat can you remove from the building vs how much are you putting in?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706</parent>
</comment>
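The BTU/ton relationship the comment above describes can be sketched with the standard conversion factors (1 refrigeration ton = 12,000 BTU/h; 1 kW of electrical load dissipates about 3,412 BTU/h). The `safety_margin` and the 500 kW example load are illustrative assumptions, not figures from the thread.

```python
# Rough sizing sketch: tons of cooling needed to remove a given IT heat load.
# Standard conversions; the 20% margin and 500 kW example are assumptions.
BTU_PER_TON_HOUR = 12_000  # definition of a refrigeration ton
BTU_PER_KW_HOUR = 3_412    # 1 kW of IT load dissipates ~3,412 BTU/h

def cooling_tons_needed(it_load_kw: float, safety_margin: float = 1.2) -> float:
    """Tons of cooling required to remove `it_load_kw` of heat, with headroom."""
    btu_per_hour = it_load_kw * BTU_PER_KW_HOUR
    return btu_per_hour / BTU_PER_TON_HOUR * safety_margin

# A hypothetical 500 kW room needs roughly 170 tons of cooling with a 20% margin.
print(f"{cooling_tons_needed(500):.1f} tons")
```

This is the "heat removed vs heat put in" balance in the comment: if the tons of installed cooling fall below the IT load converted to BTU/h, the room runs hot regardless of floor design.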
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042430</id>
	<title>Re:Just off the top of my head</title>
	<author>atrus</author>
	<datestamp>1257789720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You want contained hot aisles or contained cold aisles to maintain maximum efficiency. You want managed airflow.

</p><p>It's perfectly OK for the hot aisle to be at 100+F. It's also perfectly OK for the cold aisle to be in the mid 70s, as long as there is no stratification or leakage (the top of the rack should be within limits). What you want to see is offline CRAHs, or VFDs installed in the CRAHs throttling back their airflow.

</p><p>I work at a company which specializes in monitoring and helping customers improve their capacity and energy use. By placing air where it needs to go you can both cut costs and improve capacity. Sadly, very few datacenters are run with efficiency and managed airflow in mind (people even put perforated tiles in the hot aisle!). If you have containment, then you're 95% of the way there. Another, more modern option is liquid-cooled racks, or in-row cooling (APC pods and the like).</p></htmltext>
<tokenext>You want contained hot aisles or contained cold aisles to maintain maximum efficiency .
You want managed airflow .
Its perfectly ok for the hot aisle to be at 100 + F .
Its also perfectly ok for the cold aisle to be at the mid 70s , as long as there is no stratification or leakage ( top of the rack should be within limits ) .
What you want to see is offline CRAHs or VFDs installed in the CRAHs throttling back their airflow .
I work at a company which specializes in monitoring and helping customers improve their capacity and energy use .
By placing air where it needs to go you can both cut costs and improve capacity .
Sadly , very very few datacenters are run with the efficiency and managed airflow in mind ( people even put perforated tiles in the hot aisle ! ) .
If you have containment , then you 're 95 \ % of the way there .
Another more modern option is liquid cooled racks , or in-row cooling ( APC Pods and the like ) .</tokentext>
<sentencetext>You want contained hot aisles or contained cold aisles to maintain maximum efficiency.
You want managed airflow.
Its perfectly ok for the hot aisle to be at 100+F.
Its also perfectly ok for the cold aisle to be at the mid 70s, as long as there is no stratification or leakage (top of the rack should be within limits).
What you want to see is offline CRAHs or VFDs installed in the CRAHs throttling back their airflow.
I work at a company which specializes in monitoring and helping customers improve their capacity and energy use.
By placing air where it needs to go you can both cut costs and improve capacity.
Sadly, very very few datacenters are run with the efficiency and managed airflow in mind (people even put perforated tiles in the hot aisle!).
If you have containment, then you're 95% of the way there.
Another more modern option is liquid cooled racks, or in-row cooling (APC Pods and the like).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038964</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042196</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257786720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>add dual power feeds to each rack, from different power grids.</p></htmltext>
<tokenext>add dual power feeds to each rack , from different power grids .</tokentext>
<sentencetext>add dual power feeds to each rack, from different power grids.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038814</id>
	<title>Duh</title>
	<author>antifoidulus</author>
	<datestamp>1257763680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>based on the Data Center Chicks of course!</htmltext>
<tokenext>based on the Data Center Chicks of course !</tokentext>
<sentencetext>based on the Data Center Chicks of course!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30043372</id>
	<title>Things you DO NOT want to see in a data center</title>
	<author>Anonymous</author>
	<datestamp>1257846540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>1. water-based fire suppression (sprinkler heads over server racks)<br>2. security fences that only go up to a false/lowered ceiling<br>3. metallic dust on surfaces (from cutting metal in the DC)<br>4. external/temporary/emergency cooling ducts on the DC floor blowing cool air from an AC unit parked in the loading dock<br>5. generator running for 2 weeks<br>6. tropical feel to the air in the DC<br>7. 45-minute wait to get access to the customer colo area due to no onsite staff<br>8. power conduit under the raised floor obstructing airflow<br>9. fire suppression override always set to off<br>10. misconfigured power ATS<br>11. NOC staff that are active crackers / black hats<br>12. hillbillies doing rack installs / construction<br>13. Windows Server Datacenter Edition</p></htmltext>
<tokenext>1. water based fire suppression ( sprinkler heads over server racks ) 2. security fences that only go up to a false/lowered ceiling3 .
metallic dust on surfaces ( from cutting metal in DC ) 4. external/temporary/emergency cooling ducts on DC floor blowing cool air from AC unit parked in loading dock5 .
generator running for 2 weeks6 .
tropical feeling to air in DC7 .
45 minute wait to get access to customer colo area due to no onsite staff8 .
power conduit under raised floor obstructing airflow9 .
fire suppression override always set to off10 .
mis-configured power ATS11 .
NOC staff that are active crackers / black hat12 .
hill billies doing rack installs / construction13 .
windows server datacenter edition</tokentext>
<sentencetext>1. water based fire suppression (sprinkler heads over server racks)
2. security fences that only go up to a false/lowered ceiling
3. metallic dust on surfaces (from cutting metal in DC)
4. external/temporary/emergency cooling ducts on DC floor blowing cool air from AC unit parked in loading dock
5. generator running for 2 weeks
6. tropical feeling to air in DC
7. 45 minute wait to get access to customer colo area due to no onsite staff
8. power conduit under raised floor obstructing airflow
9. fire suppression override always set to off
10. mis-configured power ATS
11. NOC staff that are active crackers / black hat
12. hill billies doing rack installs / construction
13. windows server datacenter edition</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038918</id>
	<title>Vending machines.</title>
	<author>Kenja</author>
	<datestamp>1257764100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Since the odds are I'm going to be spending the night there at some point, good vending machines or a cafeteria are a must.</htmltext>
<tokenext>Since the odds are I 'm going to be spending the night there at some point , good vending machines or a cafeteria are a must .</tokentext>
<sentencetext>Since the odds are I'm going to be spending the night there at some point, good vending machines or a cafeteria are a must.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044324</id>
	<title>Re:Just off the top of my head</title>
	<author>Sandbags</author>
	<datestamp>1257860160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>2 of our 6 datacenters are built using the hot/cold aisle method.  2 others are migrating to a newer design that still uses hot/cold, but it's all intra-rack.  The datacenter itself will be a nice 76 degrees, but inside the racks it will be 65 or lower.  Cold, moisture-free air is pumped in from the top front of the rack.  Servers pull it through to the back, and hot air is pulled up and out.  The racks are sealed, for both airflow and noise reduction.  Each rack chassis is about 6-8" deeper than a normal rack, but because of the efficiency we can place about 1/3 more racks in a datacenter and still have ample cooling.  We currently have 5 air handlers in each large datacenter, and 3 each in the smaller ones.  Any one can be offline completely even if we're at max capacity; honestly, any 2 likely could be off, as our datacenters, though designed to be full, are rarely much more than half populated. They would only be full during a period when another datacenter is being rebuilt (every 3-4 years or so at our pace).</p></htmltext>
<tokenext>2 of our 6 datacenters are built using the hot/cold isle method .
2 others are in a migration to a newwer design that still uses hot/cold , but it ; s all intra-rack .
The datacewnter itself will be a nice 76 degrees , but inside the racks will be 65 or lower .
Cold moisture free air is pumped in from the top front of the rack .
Servers pull it through to the back and hot air is pulled up and out .
The racks are sealed for both airflow and noise reduction .
Each rack chassis is about 6-8 " deeper than a normal rack , but because of efficincy we can place about 1/3 more racks in a datacenter and still have ample cooling .
We currently have 5 air handlers in each large datacenter , and 3 each in the smaller ones .
Any one can be offline completely assuming we 're at max capacity , honestly any 2 likely could be off as our datacenters , though designed to be full , are rarely much more than half populated , as it would only be full during a period when another datacenter is being rebuolt ( every 3-4 years or so on our pace ) .</tokentext>
<sentencetext>2 of our 6 datacenters are built using the hot/cold aisle method.
2 others are in a migration to a newer design that still uses hot/cold, but it's all intra-rack.
The datacenter itself will be a nice 76 degrees, but inside the racks will be 65 or lower.
Cold moisture free air is pumped in from the top front of the rack.
Servers pull it through to the back and hot air is pulled up and out.
The racks are sealed for both airflow and noise reduction.
Each rack chassis is about 6-8" deeper than a normal rack, but because of efficiency we can place about 1/3 more racks in a datacenter and still have ample cooling.
We currently have 5 air handlers in each large datacenter, and 3 each in the smaller ones.
Any one can be offline completely assuming we're at max capacity, honestly any 2 likely could be off as our datacenters, though designed to be full, are rarely much more than half populated, as it would only be full during a period when another datacenter is being rebuilt (every 3-4 years or so on our pace).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041244</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039066</id>
	<title>Re:Just off the top of my head</title>
	<author>icebike</author>
	<datestamp>1257764760000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Presumably the OP is looking for a hosting site, or processing center, rather than looking at purchasing the facility.</p><p>If so, very few of the items mentioned in the parent post are germane, other than Outage/Uptime History.  What is under the floor is not your problem in a hosting arrangement.</p><p>You might be interested in location (flood plain, quake zone), but if the place has been in business for more than 10 years it all boils down to Outage/Uptime History.</p><p>The cost, the ease of migration should the relationship sour, and the names of the last big customers to exit the facility would be nice to know.</p></htmltext>
<tokenext>Presumably the OP is looking for a hosting site , or processing center , rather than looking at purchasing the facility.If so very few of the items mentioned in the parent post are germane , other than Outage/Uptime History .
What is under the floor is not your problem in hosting arrangement.You might be interested in location ( flood plain , quake zone ) and , but if the place has been in business for more than 10 years it all boils down to Outage/Uptime History.The cost , and ease of migration should the relationship sour and the names of the last big customers to exit the facility would be nice to know .</tokentext>
<sentencetext>Presumably the OP is looking for a hosting site, or processing center, rather than looking at purchasing the facility.
If so very few of the items mentioned in the parent post are germane, other than Outage/Uptime History.
What is under the floor is not your problem in hosting arrangement.
You might be interested in location (flood plain, quake zone), but if the place has been in business for more than 10 years it all boils down to Outage/Uptime History.
The cost, and ease of migration should the relationship sour and the names of the last big customers to exit the facility would be nice to know.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552</id>
	<title>Just off the top of my head</title>
	<author>Critical Facilities</author>
	<datestamp>1257762540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>Beyond the simpler questions of physical access control, connectivity, and power redundancy/capacity and SLA review</p></div><p>Well, first of all, I don't know that I'd write any of those things off as "simple".  But some other points worth looking into would be:</p><ol><li>Raised Floor Height</li><li>Cable Management (over or under floor)</li><li>Cooling Capacity and Redundancy</li><li>Power Quality (not just redundancy)</li><li>Age and Condition of Electrical Hardware (ATSs, STSs, UPSs, Generators)</li><li>Outage/Uptime History</li><li>Fire Suppression System and Smoke Detection System</li><li>Maintenance records</li><li>Maintenance records</li><li>Maintenance records</li></ol>
	</htmltext>
<tokenext>Beyond the simpler questions of physical access control , connectivity , and power redundancy/capacity and SLA reviewWell first of all , I do n't know that I 'd write any of those things off as " simple " .
But some other points worth looking into would be : Raised Floor HeightCable Management ( over or under floor ) Cooling Capacity and RedundancyPower Quality ( not just redundancy ) Age and Condition of Electrical Hardware ( ATSs , STSs , UPSs , Generators ) Outage/Uptime HistoryFire Suppression System and Smoke Detection SystemMaintenance recordsMaintenance recordsMaintenance records</tokentext>
<sentencetext>Beyond the simpler questions of physical access control, connectivity, and power redundancy/capacity and SLA review
Well first of all, I don't know that I'd write any of those things off as "simple".
But some other points worth looking into would be:
Raised Floor Height
Cable Management (over or under floor)
Cooling Capacity and Redundancy
Power Quality (not just redundancy)
Age and Condition of Electrical Hardware (ATSs, STSs, UPSs, Generators)
Outage/Uptime History
Fire Suppression System and Smoke Detection System
Maintenance records
Maintenance records
Maintenance records
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040224</id>
	<title>Re:Just off the top of my head</title>
	<author>Anonymous</author>
	<datestamp>1257770760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>As far as the relationship between BTUs and kW, what you're really looking for is power density.  Any higher-tier data center is going to have all of this calculated already and should be able to dictate a power density you are required to adhere to.

Density is typically expressed in watts per square foot.  When touring a modern colo facility, don't be surprised to see a customer with 4 cabinets packed to the gills with high-density blades in a 400-500 sqft cage.  The empty space is there to stay within density limits.  I.e., you may be limited to 150 W/sqft.

I would be hard pressed to colo my company's gear in any datacenter/colo that couldn't articulate the density they can support.

Also, for what it's worth, there was a comment a ways back about cable management both above cabinets and below the floor being a requirement - if the colo allows cabling under-floor, make sure to check it and make sure it's clean.  Even better, there shouldn't be *any* cabling under-floor - the entire purpose of a raised floor is to create a pressurized space of cooled air which is then (ideally) strategically fed to the equipment.  Under-floor cabling impedes airflow, regardless of how clean it is.

A raised floor is *not* intended to hide cabling!

On that same page, ensure that there are tiers of above-cabinet cable management - power and data/network should be cleanly segregated.</htmltext>
<tokenext>As far as the relationship between BTU 's and KW , what you 're really looking for is power density .
Any higher tier data center is going to already have all of this calculated and should be able to dictate a power density you are required to adhere to .
Density is typically expressed in Watts per Sq Foot .
When touring a modern colo facility , do n't be surprised to see a customer with 4 cabinets packed to the gills with high density blades in an 400-500 sqft cage .
The empty space is there to stay within density limits .
Ie , you may be limited to 150w/sqft .
I would be hard pressed to colo my companies ' gear in any datacenter/colo that could n't articulate the density they can support .
Also , for what it 's worth , there was a comment a ways back about cable management both above cabinets and below the floor as being a requirement - if the colo allows cabling under-floor , make sure to check it and make sure it 's clean .
Even better , there should n't be * any * cabling underfloor - the entire purpose of a raised floor is to create a pressurized space of cooled air which is then ( ideally ) strategically fed to the equipment .
Under-floor cabling impedes airflow , regardless of how clean it is .
Raised floor is * not * intended to hide cabling !
On that same page , ensure that there are tiers of above-cabinet cable management - power and data/network should be cleanly segregated .</tokentext>
<sentencetext>As far as the relationship between BTU's and KW, what you're really looking for is power density.
Any higher tier data center is going to already have all of this calculated and should be able to dictate a power density you are required to adhere to.
Density is typically expressed in Watts per Sq Foot.
When touring a modern colo facility, don't be surprised to see a customer with 4 cabinets packed to the gills with high density blades in a 400-500 sqft cage.
The empty space is there to stay within density limits.
Ie, you may be limited to 150w/sqft.
I would be hard pressed to colo my companies' gear in any datacenter/colo that couldn't articulate the density they can support.
Also, for what it's worth, there was a comment a ways back about cable management both above cabinets and below the floor as being a requirement - if the colo allows cabling under-floor, make sure to check it and make sure it's clean.
Even better, there shouldn't be *any* cabling underfloor - the entire purpose of a raised floor is to create a pressurized space of cooled air which is then (ideally) strategically fed to the equipment.
Under-floor cabling impedes airflow, regardless of how clean it is.
Raised floor is *not* intended to hide cabling!
On that same page, ensure that there are tiers of above-cabinet cable management - power and data/network should be cleanly segregated.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039558</id>
	<title>Only way to be sure....</title>
	<author>unitron</author>
	<datestamp>1257767160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Nuke it from orbit and then see how soon their backup site with your backup data has you back online.</p></htmltext>
<tokenext>Nuke it from orbit and then see how soon their backup site with your backup data has you back online .</tokentext>
<sentencetext>Nuke it from orbit and then see how soon their backup site with your backup data has you back online.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039066
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038964
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042430
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038780
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040610
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044178
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038964
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30055308
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039980
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039136
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041896
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039162
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044334
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040320
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040480
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042048
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30053658
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040016
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040142
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039148
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040944
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042182
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044384
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042196
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041524
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045784
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039172
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30052664
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038548
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041158
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039058
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041092
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039004
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041550
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038548
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039618
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039242
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039058
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040902
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045738
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042770
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038568
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038980
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040224
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_1953241_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041244
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044324
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039224
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038568
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038980
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038920
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041174
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039278
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038868
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038552
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038992
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041244
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044324
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045738
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040142
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042182
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044384
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038744
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039066
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044334
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039172
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038706
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038964
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042430
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30055308
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038988
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040480
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042048
-----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30053658
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040224
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039980
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30052664
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30045784
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040016
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042196
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039242
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039004
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042770
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041550
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040966
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039432
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044060
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038974
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039038
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30044178
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041524
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040320
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038814
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039136
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041896
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038724
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039058
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041092
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040902
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041526
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038670
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039120
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038584
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30042396
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038890
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038780
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040610
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038548
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30041158
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039618
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_1953241.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30038668
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039148
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30040944
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039174
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_1953241.30039162
</commentlist>
</conversation>
