<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_17_1559206</id>
	<title>Cooling Bags Could Cut Server Cooling Costs By 93%</title>
	<author>timothy</author>
	<datestamp>1258474380000</datestamp>
	<htmltext>judgecorp writes <i>"UK company Iceotope has launched liquid-cooling technology which it says surpasses what can be done with water or air-cooling and can cut data centre cooling costs by up to 93 percent. Announced at Supercomputing 2009 in Portland, Oregon, the 'modular Liquid-Immersion Cooled Server' technology <a href="http://www.eweekeurope.co.uk/news/server-cool-bags-could-cut-costs-by-93-percent-2474">wraps each server in a cool-bag-like device</a>, which cools components inside a server, rather than cooling the whole data centre, or even a traditional 'hot aisle.' Earlier this year, IBM predicted that <a href="http://www.eweekeurope.co.uk/news/all-servers-could-be-water-cooled-in-ten-years--says-ibm-909">in ten years all data centre servers might be water-cooled</a>."</i> Adds reader 1sockchuck, "The Hot Aisle has <a href="http://www.thehotaisle.com/2009/11/13/one-day-all-servers-will-be-this-good/">additional photos</a> and <a href="http://www.thehotaisle.com/2009/11/17/new-pictures-of-iceotope-liquid-cooled-blades/">diagrams of the new system</a>."</htmltext>
<tokentext>judgecorp writes " UK company Iceotope has launched liquid-cooling technology which it says surpasses what can be done with water or air-cooling and can cut data centre cooling costs by up to 93 percent .
Announced at Supercomputing 2009 in Portland , Oregon , the 'modular Liquid-Immersion Cooled Server ' technology wraps each server in a cool-bag-like device , which cools components inside a server , rather than cooling the whole data centre , or even a traditional 'hot aisle .
' Earlier this year , IBM predicted that in ten years all data centre servers might be water-cooled .
" Adds reader 1sockchuck , " The Hot Aisle has additional photos and diagrams of the new system .
"</tokentext>
<sentencetext>judgecorp writes "UK company Iceotope has launched liquid-cooling technology which it says surpasses what can be done with water or air-cooling and can cut data centre cooling costs by up to 93 percent.
Announced at Supercomputing 2009 in Portland, Oregon, the 'modular Liquid-Immersion Cooled Server' technology wraps each server in a cool-bag-like device, which cools components inside a server, rather than cooling the whole data centre, or even a traditional 'hot aisle.
' Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled.
" Adds reader 1sockchuck, "The Hot Aisle has additional photos and diagrams of the new system.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30137172</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>Anonymous</author>
	<datestamp>1258461480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Commodity Supermicro motherboards plus some injection moldings. Not much more than Enterprise blade servers</p><p>Keith</p></htmltext>
<tokentext>Commodity Supermicro motherboards plus some injection moldings .
Not much more than Enterprise blade servers Keith</tokentext>
<sentencetext>Commodity Supermicro motherboards plus some injection moldings.
Not much more than Enterprise blade servers Keith</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131818</id>
	<title>Re:Water is a hassle</title>
	<author>MobyDisk</author>
	<datestamp>1258485600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The real problem with the system you describe is that <a href="http://science.slashdot.org/article.pl?sid=09/11/06/0824213" title="slashdot.org">a baguette can cause the entire system to overheat.</a> [slashdot.org]</p></htmltext>
<tokentext>The real problem with the system you describe is that a baguette can cause the entire system to overheat .
[ slashdot.org ]</tokentext>
<sentencetext>The real problem with the system you describe is that a baguette can cause the entire system to overheat.
[slashdot.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30139342</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>wilec</author>
	<datestamp>1258476540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't have time now to RTA but if one was to use say a non conductive, non corrosive refrigerant one could make use of the lower vapor point to more efficantly remove the heat AND even better lower the operating temperature considerably. A split cascaded system would be my choice as you could achieve very low temps with the possibility of modularizing the components so the point of use (the chips) is a relatively small inexpensive unit that is supported by layers of larger systems. Visualize the module as cube with a pair of hoses that connect to larger refrigeration systems, just another connector service like power, data or control. Within the range of the temps allowed by the refrigerant one could very easily throttle the temps depending on need.</p><p>Hey we run hundreds of hp high voltage motors immersed in refrigerant every day in industrial settings.<br>Cascaded freezer systems are common in labs and liquefied gases production uses similar systems on a massive scale.</p><p>Just using the benefits of the more efficient heat transfer from immersion and lower vapor point should make for reduced operating costs. Initial costs for todays common hardware is chump change compared the energy costs so there is hope if something can be produced cheap enough.</p><p>I personally like the idea of the extremely low operating temps that could be used to enhance performance.</p><p>matthew</p></htmltext>
<tokentext>I do n't have time now to RTA but if one was to use say a non conductive , non corrosive refrigerant one could make use of the lower vapor point to more efficantly remove the heat AND even better lower the operating temperature considerably .
A split cascaded system would be my choice as you could achieve very low temps with the possibility of modularizing the components so the point of use ( the chips ) is a relatively small inexpensive unit that is supported by layers of larger systems .
Visualize the module as cube with a pair of hoses that connect to larger refrigeration systems , just another connector service like power , data or control .
Within the range of the temps allowed by the refrigerant one could very easily throttle the temps depending on need . Hey we run hundreds of hp high voltage motors immersed in refrigerant every day in industrial settings . Cascaded freezer systems are common in labs and liquefied gases production uses similar systems on a massive scale . Just using the benefits of the more efficient heat transfer from immersion and lower vapor point should make for reduced operating costs .
Initial costs for todays common hardware is chump change compared the energy costs so there is hope if something can be produced cheap enough . I personally like the idea of the extremely low operating temps that could be used to enhance performance . matthew</tokentext>
<sentencetext>I don't have time now to RTA but if one was to use say a non conductive, non corrosive refrigerant one could make use of the lower vapor point to more efficantly remove the heat AND even better lower the operating temperature considerably.
A split cascaded system would be my choice as you could achieve very low temps with the possibility of modularizing the components so the point of use (the chips) is a relatively small inexpensive unit that is supported by layers of larger systems.
Visualize the module as cube with a pair of hoses that connect to larger refrigeration systems, just another connector service like power, data or control.
Within the range of the temps allowed by the refrigerant one could very easily throttle the temps depending on need. Hey we run hundreds of hp high voltage motors immersed in refrigerant every day in industrial settings. Cascaded freezer systems are common in labs and liquefied gases production uses similar systems on a massive scale. Just using the benefits of the more efficient heat transfer from immersion and lower vapor point should make for reduced operating costs.
Initial costs for todays common hardware is chump change compared the energy costs so there is hope if something can be produced cheap enough. I personally like the idea of the extremely low operating temps that could be used to enhance performance. matthew</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134708</id>
	<title>Again?</title>
	<author>Anonymous</author>
	<datestamp>1258451880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I remember back in 1974, the IBM mainframe was water cooled. Worked pretty well until on Saturday the primary admin came in and forgot to turn on the water supply before starting the system. :)</p></htmltext>
<tokentext>I remember back in 1974 , the IBM mainframe was water cooled .
Worked pretty well until on Saturday the primary admin came in and forgot to turn on the water supply before starting the system .
: )</tokentext>
<sentencetext>I remember back in 1974, the IBM mainframe was water cooled.
Worked pretty well until on Saturday the primary admin came in and forgot to turn on the water supply before starting the system.
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130112</id>
	<title>great idea</title>
	<author>mikey177</author>
	<datestamp>1258478640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>we all know what happens when you mix water and server rooms <a href="http://www.youtube.com/watch?v=1M_QTBENR1Q" title="youtube.com" rel="nofollow">http://www.youtube.com/watch?v=1M_QTBENR1Q</a> [youtube.com] better call up Noah</htmltext>
<tokentext>we all know what happens when you mix water and server rooms http : //www.youtube.com/watch ? v = 1M_QTBENR1Q [ youtube.com ] better call up Noah</tokentext>
<sentencetext>we all know what happens when you mix water and server rooms http://www.youtube.com/watch?v=1M_QTBENR1Q [youtube.com] better call up Noah</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408</id>
	<title>Quick Release</title>
	<author>srealm</author>
	<datestamp>1258479900000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>The problem with all this is you need a good piping and plumbing system in place, complete with quick release valves to ensure you can disconnect or connect hardware without having to do a whole bunch piping and water routing in the process.  Part of the beauty of racks is you just slide in the computer, screw it in, and plug in the plugs at the back and you're done.</p><p>I'm not saying it's impossible, but just building a new case, or blade, or whatever isn't going to do it - you need a new rack system with built in pipes and pumps, and probably a data center with even more plumbing with outlets at the appropriate places to supply each rack with water.  This is no small task for trying to retrofit an existing data center.</p><p>Not to mention that you have to make sure you have enough pressure to ensure each server is supplied water from the 'source', you cannot just daisy chain computers because the water would get hotter and hotter the further down the chain you go.  This means a dual piping system (one for 'cool or room temperature' water and one for 'hot' water).  And it means adjusting the pressure to each rack depending on how many computers are in it and such.</p><p>The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet - sure, the cost savings are potentially huge, but it's a LOT more complicated that sticking a bunch of servers with fans in racks that can move around and such, and then turning on the A/C.  And there is a lot less room for error (as someone else mentioned, what if a leak occurs?  or a plumbing joint fails, or whatever.  Hell, if a pump fails you could be out a whole rack!).</p></htmltext>
<tokentext>The problem with all this is you need a good piping and plumbing system in place , complete with quick release valves to ensure you can disconnect or connect hardware without having to do a whole bunch piping and water routing in the process .
Part of the beauty of racks is you just slide in the computer , screw it in , and plug in the plugs at the back and you 're done . I 'm not saying it 's impossible , but just building a new case , or blade , or whatever is n't going to do it - you need a new rack system with built in pipes and pumps , and probably a data center with even more plumbing with outlets at the appropriate places to supply each rack with water .
This is no small task for trying to retrofit an existing data center . Not to mention that you have to make sure you have enough pressure to ensure each server is supplied water from the 'source ' , you can not just daisy chain computers because the water would get hotter and hotter the further down the chain you go .
This means a dual piping system ( one for 'cool or room temperature ' water and one for 'hot ' water ) .
And it means adjusting the pressure to each rack depending on how many computers are in it and such . The issues of water cooling a data center go WAY beyond the case , which is why nobody has really done it yet - sure , the cost savings are potentially huge , but it 's a LOT more complicated that sticking a bunch of servers with fans in racks that can move around and such , and then turning on the A/C .
And there is a lot less room for error ( as someone else mentioned , what if a leak occurs ?
or a plumbing joint fails , or whatever .
Hell , if a pump fails you could be out a whole rack !
) .</tokentext>
<sentencetext>The problem with all this is you need a good piping and plumbing system in place, complete with quick release valves to ensure you can disconnect or connect hardware without having to do a whole bunch piping and water routing in the process.
Part of the beauty of racks is you just slide in the computer, screw it in, and plug in the plugs at the back and you're done. I'm not saying it's impossible, but just building a new case, or blade, or whatever isn't going to do it - you need a new rack system with built in pipes and pumps, and probably a data center with even more plumbing with outlets at the appropriate places to supply each rack with water.
This is no small task for trying to retrofit an existing data center. Not to mention that you have to make sure you have enough pressure to ensure each server is supplied water from the 'source', you cannot just daisy chain computers because the water would get hotter and hotter the further down the chain you go.
This means a dual piping system (one for 'cool or room temperature' water and one for 'hot' water).
And it means adjusting the pressure to each rack depending on how many computers are in it and such. The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet - sure, the cost savings are potentially huge, but it's a LOT more complicated that sticking a bunch of servers with fans in racks that can move around and such, and then turning on the A/C.
And there is a lot less room for error (as someone else mentioned, what if a leak occurs?
or a plumbing joint fails, or whatever.
Hell, if a pump fails you could be out a whole rack!
).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131020</id>
	<title>Re:Cray-2</title>
	<author>hey</author>
	<datestamp>1258482900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Everything *trickles* from Supercomputers/Mainframes eventually.</p></htmltext>
<tokentext>Everything * trickles * from Supercomputers/Mainframes eventually .</tokentext>
<sentencetext>Everything *trickles* from Supercomputers/Mainframes eventually.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130592</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132814</id>
	<title>Re:great idea</title>
	<author>darthwader</author>
	<datestamp>1258488840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually, this technology would make the data center better protected from a flood.  Since each blade is sealed in its own bubble of coolant, if the entire rack is underwater because of a flood, the blades would be protected.  Maybe some of the external components like the cooling pumps might be damaged, but most of the contents of the rack would be fine.</p><p>I'm not saying they could continue to operate through the flood, but after the water is gone and the mess cleaned up, you replace the UPS and fix the external things which are damaged, and you could get going again without having to actually replace the computers which are in the rack.</p></htmltext>
<tokentext>Actually , this technology would make the data center better protected from a flood .
Since each blade is sealed in its own bubble of coolant , if the entire rack is underwater because of a flood , the blades would be protected .
Maybe some of the external components like the cooling pumps might be damaged , but most of the contents of the rack would be fine . I 'm not saying they could continue to operate through the flood , but after the water is gone and the mess cleaned up , you replace the UPS and fix the external things which are damaged , and you could get going again without having to actually replace the computers which are in the rack .</tokentext>
<sentencetext>Actually, this technology would make the data center better protected from a flood.
Since each blade is sealed in its own bubble of coolant, if the entire rack is underwater because of a flood, the blades would be protected.
Maybe some of the external components like the cooling pumps might be damaged, but most of the contents of the rack would be fine. I'm not saying they could continue to operate through the flood, but after the water is gone and the mess cleaned up, you replace the UPS and fix the external things which are damaged, and you could get going again without having to actually replace the computers which are in the rack.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130694</id>
	<title>Hmmm, so what happens when internals break?</title>
	<author>darkmayo</author>
	<datestamp>1258481280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>With all those layers it doesnt seem that sliding one of these out and quickly swapping some RAM or any other part is<br>going to happen.</p><p>As well do these Iceotope guys actually make server hardware or just the cooling specs. Who do they get there guts from or are they just advertising and hoping the guys like HP, IBM or SUN (well maybe not SUN) decide to design there next generation of servers with this in mind?</p><p>I'd like to see how easy it is for replacement. doesnt look like there is a lot of room for other bits as well. I only saw a 1U model but do these guys have the same gear for larger more beefy servers?  How about blades?</p><p>lastly how much does one of these things weigh?</p></htmltext>
<tokentext>With all those layers it doesnt seem that sliding one of these out and quickly swapping some RAM or any other part is going to happen . As well do these Iceotope guys actually make server hardware or just the cooling specs .
Who do they get there guts from or are they just advertising and hoping the guys like HP , IBM or SUN ( well maybe not SUN ) decide to design there next generation of servers with this in mind ? I 'd like to see how easy it is for replacement .
doesnt look like there is a lot of room for other bits as well .
I only saw a 1U model but do these guys have the same gear for larger more beefy servers ?
How about blades ? lastly how much does one of these things weigh ?</tokentext>
<sentencetext>With all those layers it doesnt seem that sliding one of these out and quickly swapping some RAM or any other part isgoing to happen.As well do these Iceotope guys actually make server hardware or just the cooling specs.
Who do they get there guts from or are they just advertising and hoping the guys like HP, IBM or SUN (well maybe not SUN) decide to design there next generation of servers with this in mind?I'd like to see how easy it is for replacement.
doesnt look like there is a lot of room for other bits as well.
I only saw a 1U model but do these guys have the same gear for larger more beefy servers?
How about blades? lastly how much does one of these things weigh?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130626</id>
	<title>Re:Excess Heat</title>
	<author>Anonymous</author>
	<datestamp>1258480920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>IBM Zurich has developed technology along with a study to support its use in which waste server farm heat is transferred to water and the heated water then piped to the neighboring town to heat homes, which already use hot water heating. To make this process efficient, however, you need to maximize heat transfer to the fluid. When you have a system that emits heat in a nonuniform manner, the efficiency of transferring the heat to the fluid gets worse if you allow the heat to mix and become uniform before doing the transfer. That is, by the time the heat has made it to the outside of the server case, many sources of heat have already mixed together, reducing the ability to transfer this heat.</p><p>On the other hand, if you can bring the liquid very close to the actual sources of heat generation, the transfer can be much more efficient. The ideal situation is to use microchannels in the processor casing itself, with more channels or "spray nozzles" located over the parts of the chip that dissipate the most heat and fewer over the rest of the chip. The goal is for the chip to become uniform in temperature because you're pulling heat off at a rate proportional to how much is generated. This maximizes the amount of energy that is ultimately transferred to the fluid and available to heat something else.</p><p>In the IBM Zurich study, they noted that this scenario makes the most sense in cold climates where homes rely on hot water heating a large fraction of the year. One way to look at it is that the homes already rely on energy being turned directly into heat in order to generate hot water. The water-cooled servers merely replace a "dumb" source of heat with a source that happens to perform computing in the process but which can be almost as efficient in turning the original source of electrical energy into hot water.</p></htmltext>
<tokentext>IBM Zurich has developed technology along with a study to support its use in which waste server farm heat is transferred to water and the heated water then piped to the neighboring town to heat homes , which already use hot water heating .
To make this process efficient , however , you need to maximize heat transfer to the fluid .
When you have a system that emits heat in a nonuniform manner , the efficiency of transferring the heat to the fluid gets worse if you allow the heat to mix and become uniform before doing the transfer .
That is , by the time the heat has made it to the outside of the server case , many sources of heat have already mixed together , reducing the ability to transfer this heat . On the other hand , if you can bring the liquid very close to the actual sources of heat generation , the transfer can be much more efficient .
The ideal situation is to use microchannels in the processor casing itself , with more channels or " spray nozzles " located over the parts of the chip that dissipate the most heat and fewer over the rest of the chip .
The goal is for the chip to become uniform in temperature because you 're pulling heat off at a rate proportional to how much is generated .
This maximizes the amount of energy that is ultimately transferred to the fluid and available to heat something else . In the IBM Zurich study , they noted that this scenario makes the most sense in cold climates where homes rely on hot water heating a large fraction of the year .
One way to look at it is that the homes already rely on energy being turned directly into heat in order to generate hot water .
The water-cooled servers merely replace a " dumb " source of heat with a source that happens to perform computing in the process but which can be almost as efficient in turning the original source of electrical energy into hot water .</tokentext>
<sentencetext>IBM Zurich has developed technology along with a study to support its use in which waste server farm heat is transferred to water and the heated water then piped to the neighboring town to heat homes, which already use hot water heating.
To make this process efficient, however, you need to maximize heat transfer to the fluid.
When you have a system that emits heat in a nonuniform manner, the efficiency of transferring the heat to the fluid gets worse if you allow the heat to mix and become uniform before doing the transfer.
That is, by the time the heat has made it to the outside of the server case, many sources of heat have already mixed together, reducing the ability to transfer this heat. On the other hand, if you can bring the liquid very close to the actual sources of heat generation, the transfer can be much more efficient.
The ideal situation is to use microchannels in the processor casing itself, with more channels or "spray nozzles" located over the parts of the chip that dissipate the most heat and fewer over the rest of the chip.
The goal is for the chip to become uniform in temperature because you're pulling heat off at a rate proportional to how much is generated.
This maximizes the amount of energy that is ultimately transferred to the fluid and available to heat something else. In the IBM Zurich study, they noted that this scenario makes the most sense in cold climates where homes rely on hot water heating a large fraction of the year.
One way to look at it is that the homes already rely on energy being turned directly into heat in order to generate hot water.
The water-cooled servers merely replace a "dumb" source of heat with a source that happens to perform computing in the process but which can be almost as efficient in turning the original source of electrical energy into hot water.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130724</id>
	<title>Re:Excess Heat</title>
	<author>Smidge204</author>
	<datestamp>1258481460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Very little, since you're dealing with very low quality heat. The hottest temp in your system is going to be the hardware itself (unless you're expending energy to pump it - then what's the point of trying to generate power from it?)</p><p>So if your max hardware temp is, say, 38C (100F) that's not good enough to generate any appreciable power from.</p><p>On the other hand, you probably will be pumping the heat to chill the system, and the rejected heat temp may be quite a bit higher - maybe as high as 75C. You can use that to heat your building's occupied spaces.<br>=Smidge=</p></htmltext>
<tokentext>Very little , since you 're dealing with very low quality heat .
The hottest temp in your system is going to be the hardware itself ( unless you 're expending energy to pump it - then what 's the point of trying to generate power from it ?
) So if your max hardware temp is , say , 38C ( 100F ) that 's not good enough to generate any appreciable power from . On the other hand , you probably will be pumping the heat to chill the system , and the rejected heat temp may be quite a bit higher - maybe as high as 75C .
You can use that to heat your building 's occupied spaces. = Smidge =</tokentext>
<sentencetext>Very little, since you're dealing with very low quality heat.
The hottest temp in your system is going to be the hardware itself (unless you're expending energy to pump it - then what's the point of trying to generate power from it?
) So if your max hardware temp is, say, 38C (100F) that's not good enough to generate any appreciable power from. On the other hand, you probably will be pumping the heat to chill the system, and the rejected heat temp may be quite a bit higher - maybe as high as 75C.
You can use that to heat your building's occupied spaces. =Smidge=</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131524</id>
	<title>Re:Ugh.</title>
	<author>Cajun Hell</author>
	<datestamp>1258484640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Weird that your filters are malfunctioning.  But anyway, these cool new bags are only currently available through barter, in exchange for 2 kiddie porn magazines plus one copy of michaelangelo virus.</htmltext>
<tokentext>Weird that your filters are malfunctioning .
But anyway , these cool new bags are only currently available through barter , in exchange for 2 kiddie porn magazines plus one copy of michaelangelo virus .</tokentext>
<sentencetext>Weird that your filters are malfunctioning.
But anyway, these cool new bags are only currently available through barter, in exchange for 2 kiddie porn magazines plus one copy of michaelangelo virus.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130006</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134186</id>
	<title>Re:Quick Release</title>
	<author>jbengt</author>
	<datestamp>1258450260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet . . .</p> </div><p>They've only been doing direct water cooling of data center computers since the 1950s.  Though the last time I worked on one was in the 1980s, and it was mainframes, not PCs/blades.</p></div>
	</htmltext>
<tokentext>The issues of water cooling a data center go WAY beyond the case , which is why nobody has really done it yet . . .
They 've only been doing direct water cooling of data center computers since the 1950s .
Though the last time I worked on one was in the 1980s , and it was mainframes , not PCs/blades .</tokentext>
<sentencetext>The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet . . .
They've only been doing direct water cooling of data center computers since the 1950s.
Though the last time I worked on one was in the 1980s, and it was mainframes, not PCs/blades.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131852</id>
	<title>Re:Water is a hassle</title>
	<author>tuomoks</author>
	<datestamp>1258485660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Water (and liquid coolants, even metals) can be a hassle if not designed correctly. I have had my experiences with water cooled systems but mainly the "over efficiency", well, one burst which shouldn't have happened (LOL).</p><p>One thing I have learned (from my son) - in cars, everything replaced with military and/or airplane grade fittings, valves, tubes, etc - makes life much easier. Not much more expensive but very fast pays back. If I would have known that (much) earlier instead of accepting engineering (good enough) / accounting (cheap enough), my life would have been easier but maybe it's a learning process?</p></htmltext>
<tokentext>Water ( and liquid coolants , even metals ) can be a hassle if not designed correctly .
I have had my experiences with water cooled systems but mainly the " over efficiency " , well , one burst which should n't have happened ( LOL ) . One thing I have learned ( from my son ) - in cars , everything replaced with military and/or airplane grade fittings , valves , tubes , etc - makes life much easier .
Not much more expensive but very fast pays back .
If I would have known that ( much ) earlier instead of accepting engineering ( good enough ) / accounting ( cheap enough ) , my life would have been easier but maybe it 's a learning process ?</tokentext>
<sentencetext>Water (and liquid coolants, even metals) can be a hassle if not deigned correctly.
I have had my experiences with water cooled systems but mainly the "over efficiency", well, one burst which shouldn't have happened (LOL).One thing I have learned (from my son) - in cars, everything replaced with military and/or airplane grade fittings, valves, tubes, etc - makes life much easier.
Not much more expensive but very fast pays back.
If I would have known that (much) earlier instead of accepting engineering (good enough) / accounting (cheap enough), my life would have been easier but maybe it's a learning process?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130460</id>
	<title>Re:Do I get at least a pair of rubber gloves?</title>
	<author>perdera</author>
	<datestamp>1258480260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah, I'll keep my FRUs, thanks.</p></htmltext>
<tokentext>Yeah , I 'll keep my FRUs , thanks .</tokentext>
<sentencetext>Yeah, I'll keep my FRUs, thanks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130086</id>
	<title>Do I get at least a pair of rubber gloves?</title>
	<author>Itninja</author>
	<datestamp>1258478520000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>Seriously. What do we do when a RAM module or a backplane fails? Will a simple hardware swap become a task for those trained in hazmat handling? I do not want to be on the help desk when someone calls and says "Help! The servers are leaking!"</htmltext>
<tokentext>Seriously .
What do we do when a RAM module or a backplane fails ?
Will a simple hardware swap become a task for those trained in hazmat handling ?
I do not want to be on the help desk when someone calls and says " Help !
The servers are leaking !
"</tokentext>
<sentencetext>Seriously.
What do we do when a RAM module or a backplane fails?
Will a simple hardware swap become a task for those trained in hazmat handling?
I do not want to be on the help desk when someone calls and says "Help!
The servers are leaking!
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131678</id>
	<title>Costs of water</title>
	<author>stimpleton</author>
	<datestamp>1258485180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>"Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled."</i>
<br> <br>
The costs of cooling air will be replaced by the costs of obtaining water. This system will not be for "water challenged areas".....Californy, etc.</htmltext>
<tokenext>" Earlier this year , IBM predicted that in ten years all data centre servers might be water-cooled .
" The costs of cooling air will be replaced by the costs of obtaining water .
This system will not be for " water challenged areas " .....Californy , etc .</tokentext>
<sentencetext>"Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled.
"
 
The costs of cooling air will be replaced by the costs of obtaining water.
This system will not be for "water challenged areas".....Californy, etc.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130634</id>
	<title>Cold mineral oil.</title>
	<author>ground.zero.612</author>
	<datestamp>1258480980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sixteen years ago, at the end of my highschool career, I was very into overclocking (had multiple celeron 300A). With peltier cooling I was able to run a 300mhz CPU at 450mhz with rock solid stability (ran things like prime95 24hrs a day for weeks). People were starting to experiment with liquid cooling commodity white-box computers.</p><p>One of the more interesting applications I saw was an old styrofoam cooler converted into a PC case. All components were submerged in a bath of cold mineral oil. I remember thinking that the data centers of the future would require SCUBA certified technicians in dry suits to swim down to the racks and swap out the broken module. Maybe I was thinking too grand, and this would be feasible as submerge-modules with aquarium like tanks instead of racks.</p></htmltext>
<tokentext>Sixteen years ago , at the end of my highschool career , I was very into overclocking ( had multiple celeron 300A ) .
With peltier cooling I was able to run a 300mhz CPU at 450mhz with rock solid stability ( ran things like prime95 24hrs a day for weeks ) .
People were starting to experiment with liquid cooling commodity white-box computers . One of the more interesting applications I saw was an old styrofoam cooler converted into a PC case .
All components were submerged in a bath of cold mineral oil .
I remember thinking that the data centers of the future would require SCUBA certified technicians in dry suits to swim down to the racks and swap out the broken module .
Maybe I was thinking too grand , and this would be feasible as submerge-modules with aquarium like tanks instead of racks .</tokentext>
<sentencetext>Sixteen years ago, at the end of my highschool career, I was very into overclocking (had multiple celeron 300A).
With peltier cooling I was able to run a 300mhz CPU at 450mhz with rock solid stability (ran things like prime95 24hrs a day for weeks).
People were starting to experiment with liquid cooling commodity white-box computers. One of the more interesting applications I saw was an old styrofoam cooler converted into a PC case.
All components were submerged in a bath of cold mineral oil.
I remember thinking that the data centers of the future would require SCUBA certified technicians in dry suits to swim down to the racks and swap out the broken module.
Maybe I was thinking too grand, and this would be feasible as submerge-modules with aquarium like tanks instead of racks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134836</id>
	<title>Microfluidics and Home Heat Sinks</title>
	<author>Doc Ruby</author>
	<datestamp>1258452300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm surprised that hot chips don't already include a layer of microfluidics right inside the package. People have been dealing with overheating chips inefficiently for years. There's clearly an opportunity to sell chips with fluid cooling built right into them.</p><p>I think eventually buildings will have fluid cooling systems attached to heat sinks for all kinds of purposes. <a href="http://en.wikipedia.org/wiki/Geothermal_heat_pump" title="wikipedia.org">Geothermal heat pumps</a> [wikipedia.org] already are popular for making heating and cooling up to 4x as powerful as the electricity powering them (instead of typical efficiency under 100%). Refrigerators and clothes dryers could also benefit in efficiency by routing their relocated heat into a fluid through a heat sink. Computers, TVs, and other electronic devices all could run more efficiently connected to a shared heat sink circuit, while avoiding heating air during hot seasons that uses energy to be cooled back down.</p><p>Ultimately, we'll have to find a way to consume the waste heat instead of just move it "away", or suffer the fate of Larry Niven's <a href="http://en.wikipedia.org/wiki/Pierson's_Puppeteers#Homeworld_.E2.80.94_The_Fleet_of_Worlds" title="wikipedia.org">Puppeteer Homeworld</a> [wikipedia.org]. But in the meantime, we can do a lot better job managing it with the tech we've already got, with a few tweaks and more widespread application.</p></htmltext>
<tokentext>I 'm surprised that hot chips do n't already include a layer of microfluidics right inside the package .
People have been dealing with overheating chips inefficiently for years .
There 's clearly an opportunity to sell chips with fluid cooling built right into them . I think eventually buildings will have fluid cooling systems attached to heat sinks for all kinds of purposes .
Geothermal heat pumps [ wikipedia.org ] already are popular for making heating and cooling up to 4x as powerful as the electricity powering them ( instead of typical efficiency under 100 % ) .
Refrigerators and clothes dryers could also benefit in efficiency by routing their relocated heat into a fluid through a heat sink .
Computers , TVs , and other electronic devices all could run more efficiently connected to a shared heat sink circuit , while avoiding heating air during hot seasons that uses energy to be cooled back down . Ultimately , we 'll have to find a way to consume the waste heat instead of just move it " away " , or suffer the fate of Larry Niven 's Puppeteer Homeworld [ wikipedia.org ] .
But in the meantime , we can do a lot better job managing it with the tech we 've already got , with a few tweaks and more widespread application .</tokentext>
<sentencetext>I'm surprised that hot chips don't already include a layer of microfluidics right inside the package.
People have been dealing with overheating chips inefficiently for years.
There's clearly an opportunity to sell chips with fluid cooling built right into them. I think eventually buildings will have fluid cooling systems attached to heat sinks for all kinds of purposes.
Geothermal heat pumps [wikipedia.org] already are popular for making heating and cooling up to 4x as powerful as the electricity powering them (instead of typical efficiency under 100%).
Refrigerators and clothes dryers could also benefit in efficiency by routing their relocated heat into a fluid through a heat sink.
Computers, TVs, and other electronic devices all could run more efficiently connected to a shared heat sink circuit, while avoiding heating air during hot seasons that uses energy to be cooled back down. Ultimately, we'll have to find a way to consume the waste heat instead of just move it "away", or suffer the fate of Larry Niven's Puppeteer Homeworld [wikipedia.org].
But in the meantime, we can do a lot better job managing it with the tech we've already got, with a few tweaks and more widespread application.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130746</id>
	<title>Re:Do I get at least a pair of rubber gloves?</title>
	<author>Anonymous</author>
	<datestamp>1258481580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>TFA states it's an inert liquid, so hazmat need not be involved. Actually, it sounds an awful lot like an <a href="http://hardware.slashdot.org/article.pl?sid=08/08/27/1930214" title="slashdot.org" rel="nofollow">earlier story</a> [slashdot.org] concerning a full-immersion prototype desktop PC.</htmltext>
<tokentext>TFA states it 's an inert liquid , so hazmat need not be involved .
Actually , it sounds an awful lot like an earlier story [ slashdot.org ] concerning a full-immersion prototype desktop PC .</tokentext>
<sentencetext>TFA states it's an inert liquid, so hazmat need not be involved.
Actually, it sounds an awful lot like an earlier story [slashdot.org] concerning a full-immersion prototype desktop PC.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30138188</id>
	<title>Re:Cold mineral oil.</title>
	<author>mirix</author>
	<datestamp>1258467480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>All of my 300A's ran fine at 450MHz with the crappy stock heatsink...</htmltext>
<tokentext>All of my 300A 's ran fine at 450MHz with the crappy stock heatsink.. .</tokentext>
<sentencetext>All of my 300A's ran fine at 450MHz with the crappy stock heatsink...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130106</id>
	<title>Super cool!</title>
	<author>stakovahflow</author>
	<datestamp>1258478640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Super cool! ^_^ If they made those for laptops, I'd be all over it. My wife likes to use her HP as a lap warmer, with a blanket... But there I go thinking again... --Stak</htmltext>
<tokentext>Super cool !
^ _ ^ If they made those for laptops , I 'd be all over it .
My wife likes to use her HP as a lap warmer , with a blanket... But there I go thinking again... --Stak</tokentext>
<sentencetext>Super cool!
^_^ If they made those for laptops, I'd be all over it.
My wife likes to use her HP as a lap warmer, with a blanket... But there I go thinking again... --Stak</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130796</id>
	<title>night time freezing of liquid would save more</title>
	<author>Locutus</author>
	<datestamp>1258481820000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>The technique of using cheaper off-peak energy to freeze liquid and then use that liquid for daytime cooling loads is already used in a very few places. Combine that technique with the direct server cooling mentioned in the article and....wait a minute....they are already claiming a 93% cooling cost cut?  Either there is huge waste now or they're already expecting to use off-peak energy.  But then again, maybe the remaining 7% is still large enough to merit further savings.<br><br>Direct cooling makes far more sense than cooling rooms like I keep seeing around now.<br><br>LoB</htmltext>
<tokentext>The technique of using cheaper off-peak energy to freeze liquid and then use that liquid for daytime cooling loads is already used in a very few places .
Combine that technique with the direct server cooling mentioned in the article and....wait a minute....they are already claiming a 93 % cooling cost cut ?
Either there is huge waste now or they 're already expecting to use off-peak energy .
But then again , maybe the remaining 7 % is still large enough to merit further savings . Direct cooling makes far more sense than cooling rooms like I keep seeing around now . LoB</tokentext>
<sentencetext>The technique of using cheaper off-peak energy to freeze liquid and then use that liquid for daytime cooling loads is already used in a very few places.
Combine that technique with the direct server cooling mentioned in the article and....wait a minute....they are already claiming a 93% cooling cost cut?
Either there is huge waste now or they're already expecting to use off-peak energy.
But then again, maybe the remaining 7% is still large enough to merit further savings. Direct cooling makes far more sense than cooling rooms like I keep seeing around now. LoB</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130356</id>
	<title>Re:Excess Heat</title>
	<author>afidel</author>
	<datestamp>1258479720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Not much at all, delta-t is too low to get any real efficiency.</htmltext>
<tokentext>Not much at all , delta-t is too low to get any real efficiency .</tokentext>
<sentencetext>Not much at all, delta-t is too low to get any real efficiency.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131130</id>
	<title>Re:Water is a hassle</title>
	<author>Anonymous</author>
	<datestamp>1258483320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird.</p></div><p>...like eating the instrument data of a really <a href="http://en.wikipedia.org/wiki/Square\_Kilometre\_Array" title="wikipedia.org" rel="nofollow">huge physics experiment?</a> [wikipedia.org]</p></div>
	</htmltext>
<tokentext>I just do n't see why you would go through the hassle with water cooling unless you actually have to , and quite frankly if your servers draw enough power to force you to use water for cooling then you 're doing something weird . ...like eating the instrument data of a really huge physics experiment ?
[ wikipedia.org ]</tokentext>
<sentencetext>I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird....like eating the instrument data of a really huge physics experiment?
[wikipedia.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131004</id>
	<title>Re:Cray-2</title>
	<author>jcaren</author>
	<datestamp>1258482780000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>The crays full immersion coolant model hit a big problem - the coanda effect.</p><p>This is where layer of fluid near the actual component flows much slower than actual flow - in layers slowing down exponentially as it gets closer to the stationary components.</p><p>For air this is not too much of a problem - only a very fine layer of stationary air over compenents that does not affect cooling. But with liquids the effect is both noticable and severely impacts coolant flow over hot surfaces - with some then "next gen" cray chips actually boiling the fluid. As todays chips run much hotter and generate a lot more heat than those Cray chips I can see this being a major problem today...</p><p>Crays fix for this was to move from full fluid immersion to immersion in droplets 'injected' using a car fuel injector.<br>This got everywhere and evaporated taking the heat away from components.</p><p>Rumor has it that during devlopment, engineers bought fuel injectors for a wide range of cars and the ones for certain porsche worked best so they bought the entire stock of fuel injectors for this car in the mid-west and used them...</p><p>I remember staff at cray giving away Porsche style sunglasses with Cray written on them instead of Porsche and when I enquired why - the above was the tale I was told by sales staff.</p><p>Whether true or not is something else - the cray sales staff in those days had a seriously odd sense of humor...</p></htmltext>
<tokentext>The crays full immersion coolant model hit a big problem - the coanda effect . This is where layer of fluid near the actual component flows much slower than actual flow - in layers slowing down exponentially as it gets closer to the stationary components . For air this is not too much of a problem - only a very fine layer of stationary air over compenents that does not affect cooling .
But with liquids the effect is both noticable and severely impacts coolant flow over hot surfaces - with some then " next gen " cray chips actually boiling the fluid .
As todays chips run much hotter and generate a lot more heat than those Cray chips I can see this being a major problem today ... Crays fix for this was to move from full fluid immersion to immersion in droplets 'injected ' using a car fuel injector . This got everywhere and evaporated taking the heat away from components . Rumor has it that during devlopment , engineers bought fuel injectors for a wide range of cars and the ones for certain porsche worked best so they bought the entire stock of fuel injectors for this car in the mid-west and used them ... I remember staff at cray giving away Porsche style sunglasses with Cray written on them instead of Porsche and when I enquired why - the above was the tale I was told by sales staff . Whether true or not is something else - the cray sales staff in those days had a seriously odd sense of humor.. .</tokentext>
<sentencetext>The crays full immersion coolant model hit a big problem - the coanda effect. This is where layer of fluid near the actual component flows much slower than actual flow - in layers slowing down exponentially as it gets closer to the stationary components. For air this is not too much of a problem - only a very fine layer of stationary air over compenents that does not affect cooling.
But with liquids the effect is both noticable and severely impacts coolant flow over hot surfaces - with some then "next gen" cray chips actually boiling the fluid.
As todays chips run much hotter and generate a lot more heat than those Cray chips I can see this being a major problem today... Crays fix for this was to move from full fluid immersion to immersion in droplets 'injected' using a car fuel injector. This got everywhere and evaporated taking the heat away from components. Rumor has it that during devlopment, engineers bought fuel injectors for a wide range of cars and the ones for certain porsche worked best so they bought the entire stock of fuel injectors for this car in the mid-west and used them... I remember staff at cray giving away Porsche style sunglasses with Cray written on them instead of Porsche and when I enquired why - the above was the tale I was told by sales staff. Whether true or not is something else - the cray sales staff in those days had a seriously odd sense of humor...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130592</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30137068</id>
	<title>But what is the cost and lockin? CPUs or HVAC?</title>
	<author>twrake</author>
	<datestamp>1258460880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I looked at specs for a new data center last week and the cost of electricity for the servers is followed closely by the cost for electricity to run the HVAC equipment. In a few more years it is likely HVAC will become the major cost. So from a cost point of view the "lock in" is the HVAC equipment will become the major problem. This type of system will start to look real attractive and if we can get good leak detection within the server cabinet most of the problems will be manageable.</p></htmltext>
<tokenext>I looked at specs for a new data center last week and the cost of electricity for the servers is followed closely by the cost for electricity to run the HVAC equipment .
In a few more years it is likely that HVAC will become the major cost .
So from a cost point of view , the " lock in " is that the HVAC equipment will become the major problem .
This type of system will start to look really attractive , and if we can get good leak detection within the server cabinet , most of the problems will be manageable .
 </tokentext>
<sentencetext>I looked at specs for a new data center last week and the cost of electricity for the servers is followed closely by the cost for electricity to run the HVAC equipment.
In a few more years it is likely that HVAC will become the major cost.
So from a cost point of view, the "lock in" is that the HVAC equipment will become the major problem.
This type of system will start to look really attractive, and if we can get good leak detection within the server cabinet, most of the problems will be manageable.
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664</parent>
</comment>
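The trade-off twrake describes is usually summarised as PUE (total facility power divided by IT power). A minimal sketch with made-up kilowatt figures - none of them from the article - showing how a large cut in cooling energy would move that ratio:

# Toy PUE calculation; every kW figure is an illustrative assumption.
it_load_kw = 1000.0   # electricity drawn by the servers themselves
cooling_kw = 700.0    # electricity drawn by chillers / CRAC / HVAC
other_kw   = 100.0    # lighting, UPS losses, etc.

total_kw = it_load_kw + cooling_kw + other_kw
pue = total_kw / it_load_kw
print(f"PUE = {pue:.2f}")                                   # 1.80
print(f"Cooling draws {cooling_kw / it_load_kw:.0%} as much as the IT load")

# If a scheme really cut cooling energy by 93%, as the article claims:
cooling_after = cooling_kw * (1 - 0.93)
pue_after = (it_load_kw + cooling_after + other_kw) / it_load_kw
print(f"PUE after the cut = {pue_after:.2f}")               # ~1.15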
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130094</id>
	<title>A few questions</title>
	<author>Reason58</author>
	<datestamp>1258478580000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext>Won't this cause accessibility issues for the administrators who have to support these servers? Additionally, Google's evidence supports the idea that warmer temperatures are better for the life of some components, such as hard drives. Last, this may work well for traditional servers, but I fail to see how this can be made to support a large SAN array or something similar.</htmltext>
<tokenext>Wo n't this cause accessibility issues for the administrators who have to support these servers ?
Additionally , Google 's evidence supports the idea that warmer temperatures are better for the life of some components , such as hard drives .
Last , this may work well for traditional servers , but I fail to see how this can be made to support a large SAN array or something similar .</tokentext>
<sentencetext>Won't this cause accessibility issues for the administrators who have to support these servers?
Additionally, Google's evidence supports the idea that warmer temperatures are better for the life of some components, such as hard drives.
Last, this may work well for traditional servers, but I fail to see how this can be made to support a large SAN array or something similar.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131962</id>
	<title>Re:Doesn't look practical</title>
	<author>Anonymous</author>
	<datestamp>1258486020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You totally missed the point...  The entire board is submerged in a liquid, so every single component down to the smallest part on the board is surrounded 100\% by a "heat sink".  The liquid removes heat from the components, the water removes the heat from the special liquid, and the hot water can be used to heat the building, etc.  There is no air-flow because there is no air.  The height or layout of the motherboards could vary drastically and if the right liquid is used, you shouldn't even need special parts...</p></htmltext>
<tokenext>You totally missed the point... The entire board is submerged in a liquid , so every single component down to the smallest part on the board is surrounded 100 \ % by a " heat sink " .
The liquid removes heat from the components , the water removes the heat from the special liquid , and the hot water can be used to heat the building , etc .
There is no air-flow because there is no air .
The height or layout of the motherboards could vary drastically and if the right liquid is used , you should n't even need special parts.. .</tokentext>
<sentencetext>You totally missed the point...  The entire board is submerged in a liquid, so every single component down to the smallest part on the board is surrounded 100\% by a "heat sink".
The liquid removes heat from the components, the water removes the heat from the special liquid, and the hot water can be used to heat the building, etc.
There is no air-flow because there is no air.
The height or layout of the motherboards could vary drastically and if the right liquid is used, you shouldn't even need special parts...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060</id>
	<title>Excess Heat</title>
	<author>smitty777</author>
	<datestamp>1258478340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>TFA mentions using the excess heat to heat the building.  I wonder how feasible it would be to actually recycle the heat to generate more power?  Anyone have an idea on how much heat could be generated by your typical server farm?</p></htmltext>
<tokenext>TFA mentions using the excess heat to heat the building .
I wonder how feasible it would be to actually recycle the heat to generate more power ?
Anyone have an idea on how much heat could be generated by your typical server farm ?</tokentext>
<sentencetext>TFA mentions using the excess heat to heat the building.
I wonder how feasible it would be to actually recycle the heat to generate more power?
Anyone have an idea on how much heat could be generated by your typical server farm?</sentencetext>
</comment>
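On the "how much heat" question: to a first approximation, every watt of electricity a server draws leaves it as heat, so the recoverable heat is simply the IT load. A rough sketch with assumed numbers (the farm size, per-server draw and the two temperatures are all illustrative):

# Rough estimate of recoverable heat from a server farm.
servers          = 2000        # assumed farm size
watts_per_server = 350.0       # assumed average draw per server, W

heat_kw = servers * watts_per_server / 1000.0
print(f"Continuous heat output: {heat_kw:.0f} kW")          # 700 kW

# Turning that low-grade heat back into electricity is hard: with ~50 C
# coolant and ~20 C ambient, the Carnot limit on conversion is tiny.
t_hot, t_cold = 323.0, 293.0   # kelvin, assumed
carnot = 1 - t_cold / t_hot
print(f"Carnot ceiling for power recovery: {carnot:.1%}")   # ~9%

That low Carnot ceiling is the usual reason the waste heat goes into building or district heating rather than back into electricity generation.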
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130006</id>
	<title>Ugh.</title>
	<author>Pojut</author>
	<datestamp>1258478160000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>For some reason, the filters at work won't let me view the article.  Does it happen to mention how much the upfront cost for these bags is?</p></htmltext>
<tokenext>For some reason , the filters at work wo n't let me view the article .
Does it happen to mention how much the upfront cost for these bags is ?</tokentext>
<sentencetext>For some reason, the filters at work won't let me view the article.
Does it happen to mention how much the upfront cost for these bags is?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996</id>
	<title>Yes, but how much does it cost?</title>
	<author>captaindomon</author>
	<datestamp>1258478100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?</htmltext>
<tokenext>That 's really nifty , and I 'm sure it works ok and everything , but... how much does it cost ?</tokentext>
<sentencetext>That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130574</id>
	<title>Prior Concepts</title>
	<author>Demonantis</author>
	<datestamp>1258480740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Reminds me of the <a href="http://www.ansul.com/en/Products/clean\_agent\_systems/sapphire.asp" title="ansul.com"> sapphire fire suppression </a> [ansul.com], just applied all the time. Or the sealed mineral oil boxes people seem to put computers in. The system could be huge if they apply it right and it actually realizes a 93\% reduction in energy cost (I have my doubts). The largest issue I have heard is that it is tricky, but not impossible, to move the heat away from the components once they heat up the liquid.</htmltext>
<tokenext>Reminds me of the sapphire fire suppression [ ansul.com ] just applied all the time .
Or the sealed mineral oil boxes people seem to put computers in .
The system could be huge if they apply it right and it actually realizes a 93 \ % reduction in energy cost ( I have my doubts ) .
The largest issue I have heard is that it is tricky , but not impossible , to move the heat away from the components once they heat up the liquid .</tokentext>
<sentencetext>Reminds me of the  sapphire fire suppression  [ansul.com] just applied all the time.
Or the sealed mineral oil boxes people seem to put computers in.
The system could be huge if they apply it right and it actually realizes a 93\% reduction in energy cost (I have my doubts).
The largest issue I have heard is that it is tricky, but not impossible, to move the heat away from the components once they heat up the liquid.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130128</id>
	<title>Coming back full circle</title>
	<author>hwyhobo</author>
	<datestamp>1258478700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Grandma would be proud of her cold compress technology.</p></htmltext>
<tokenext>Grandma would be proud of her cold compress technology .</tokentext>
<sentencetext>Grandma would be proud of her cold compress technology.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130794</id>
	<title>Cray XT5 "Jaguar"</title>
	<author>Anonymous</author>
	<datestamp>1258481820000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext>The <a href="http://www.cray.com/Products/XT5/Product/ORNLJaguar.aspx" title="cray.com" rel="nofollow">#1 on the top 500 supercomputer list</a> [cray.com] is using water cooling as well (in combination with phase change cooling). Watercooling whole racks can be done. The only difference from TFA is that it also adds <a href="http://www.pugetsystems.com/submerged.php" title="pugetsystems.com" rel="nofollow">immersion cooling</a> [pugetsystems.com]. Immersion cooling has been found to be superior in cooling but comes with (obvious) considerable maintenance problems. <a href="http://www.cray.com/Assets/Flash/XT5Jaguar/BigScienceF6Md.html" title="cray.com" rel="nofollow">The video</a> [cray.com] for this machine shows more or less standard water cooling blocks on the processors, along with various plumbing that keeps the machine chilled.</htmltext>
<tokenext>The # 1 on the top 500 supercomputer list [ cray.com ] is using water cooling as well ( in combination with phase change cooling ) .
Watercooling whole racks can be done .
The only difference from TFA is that it also adds immersion cooling [ pugetsystems.com ] .
Immersion cooling has been found to be superior in cooling but comes with ( obvious ) considerable maintenance problems .
The video [ cray.com ] for this machine shows more or less standard water cooling blocks on the processors , along with various plumbing that keeps the machine chilled .</tokentext>
<sentencetext>The #1 on the top 500 supercomputer list [cray.com] is using water cooling as well (in combination with phase change cooling).
Watercooling whole racks can be done.
The only difference from TFA is that it also adds immersion cooling [pugetsystems.com].
Immersion cooling has been found to be superior in cooling but comes with (obvious) considerable maintenance problems.
The video [cray.com] for this machine shows more or less standard water cooling blocks on the processors, along with various plumbing that keeps the machine chilled.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132516</id>
	<title>Re:Cold mineral oil.</title>
	<author>zippthorne</author>
	<datestamp>1258487820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How did you get a 300 mhz CPU in 1993?</p></htmltext>
<tokenext>How did you get a 300 mhz CPU in 1993 ?</tokentext>
<sentencetext>How did you get a 300 mhz CPU in 1993?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130354</id>
	<title>New resume requirements...</title>
	<author>Firemouth</author>
	<datestamp>1258479660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Interviewer: "Well Mr. Robinson, while your resume is quite impressive, you just don't have everything we're looking for to fill the opening on our server maintenance team."
<br> <br>
Mr. Robinson: "What do you mean?  I have a Masters in Computer science, A+, MCSE, CCNA, CISSP, and 23 years of relevant experience.  What am I missing??"
<br> <br>
Interviewer: "You see, we're running that new server cooling technology you might of seen on slashdot.  I didn't see anything about being SCUBA certified on your resume."</htmltext>
<tokenext>Interviewer : " Well Mr. Robinson , while your resume is quite impressive , you just do n't have everything we 're looking for to fill the opening on our server maintenance team .
" Mr. Robinson : " What do you mean ?
I have a Masters in Computer science , A + , MCSE , CCNA , CISSP , and 23 years of relevant experience .
What am I missing ? ?
" Interviewer : " You see , we 're running that new server cooling technology you might of seen on slashdot .
I did n't see anything about being SCUBA certified on your resume .
"</tokentext>
<sentencetext>Interviewer: "Well Mr. Robinson, while your resume is quite impressive, you just don't have everything we're looking for to fill the opening on our server maintenance team.
"
 
Mr. Robinson: "What do you mean?
I have a Masters in Computer science, A+, MCSE, CCNA, CISSP, and 23 years of relevant experience.
What am I missing??
"
 
Interviewer: "You see, we're running that new server cooling technology you might of seen on slashdot.
I didn't see anything about being SCUBA certified on your resume.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135628</id>
	<title>Nospill valves and other modern water cooling tech</title>
	<author>DrYak</author>
	<datestamp>1258454880000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>No spill (as in "almost insignificant", not as in "not too much, won't empty the whole system, but you better have some towel nearby just in case"), quick-disconnect, low-resistance valves for watercooled systems have already been available for quite some time for enthusiasts.</p><p>(Koolance is an example of a company producing such things in the US, Aquatuning is an example of a shop selling similar implements in the EU - no links to avoid gratuitous advertising to web spiders, but you can easily google the names).</p><p>Anyway, low-conductance liquids are popular in applications where spills and leaks aren't easily monitored (see above source). And don't forget that every other blade module is sealed too. So in case of a leak you're just spilling... on a sealed container which isn't affected by external liquids anyway.</p><p>As for pressure: Well, uh, no. You would need tremendous pressure if you had to fill the whole rack using 1 single pump. Which would be a single point of failure and is bad.<br>The more sensible approach would be each blade module having its own small pump (Laing DDC for the win !!!) for pumping water out of the rack's main tank.</p><p>It's already the scenario used in most rack-cooling situations (see again the sources mentioned above). And in case of pump failure, well, only 1 blade module fails. The rest of the rack is immune to it.</p><p>Well I'm sure most<nobr> <wbr></nobr>/.ers have some ricer friend (the kind who custom hand-compiles gentoo with "-O9999"<nobr> <wbr></nobr>:-) ) to whom a massive failure of watercooling has happened some time ago. Watercooling safety has evolved since then and it's now much more secure even for a simple enthusiast. Now, a company specializing in data-centers has even more ability to offer safety.</p></htmltext>
<tokenext>No spill ( as in " almost insignificant " , not as in " not too much , wo n't empty the whole system , but you better have some towel nearby just in case " ) , quick-disconnect , low-resistance valves for watercooled systems have already been available for quite some time for enthusiasts .
( Koolance is an example of a company producing such things in the US , Aquatuning is an example of a shop selling similar implements in the EU - no links to avoid gratuitous advertising to web spiders , but you can easily google the names ) .
Anyway , low-conductance liquids are popular in applications where spills and leaks are n't easily monitored ( see above source ) .
And do n't forget that every other blade module is sealed too .
So in case of a leak you 're just spilling ... on a sealed container which is n't affected by external liquids anyway .
As for pressure : well , uh , no .
You would need tremendous pressure if you had to fill the whole rack using 1 single pump .
Which would be a single point of failure and is bad .
The more sensible approach would be each blade module having its own small pump ( Laing DDC for the win ! ! ! ) for pumping water out of the rack 's main tank .
It 's already the scenario used in most rack-cooling situations ( see again the sources mentioned above ) .
And in case of pump failure , well , only 1 blade module fails .
The rest of the rack is immune to it .
Well , I 'm sure most /.ers have some ricer friend ( the kind who custom hand-compiles gentoo with " -O9999 " : - ) ) to whom a massive failure of watercooling has happened some time ago .
Watercooling safety has evolved since then and it 's now much more secure even for a simple enthusiast .
Now , a company specializing in data-centers has even more ability to offer safety .</tokentext>
<sentencetext>No spill (as in "almost insignificant", not as in "not too much, won't empty the whole system, but you better have some towel nearby just in case"), quick-disconnect, low-resistance valves for watercooled systems have already been available for quite some time for enthusiasts.
(Koolance is an example of a company producing such things in the US, Aquatuning is an example of a shop selling similar implements in the EU - no links to avoid gratuitous advertising to web spiders, but you can easily google the names.)
Anyway, low-conductance liquids are popular in applications where spills and leaks aren't easily monitored (see above source).
And don't forget that every other blade module is sealed too.
So in case of a leak you're just spilling... on a sealed container which isn't affected by external liquids anyway.
As for pressure: well, uh, no.
You would need tremendous pressure if you had to fill the whole rack using 1 single pump.
Which would be a single point of failure and is bad.
The more sensible approach would be each blade module having its own small pump (Laing DDC for the win!!!) for pumping water out of the rack's main tank.
It's already the scenario used in most rack-cooling situations (see again the sources mentioned above).
And in case of pump failure, well, only 1 blade module fails.
The rest of the rack is immune to it.
Well, I'm sure most /.ers have some ricer friend (the kind who custom hand-compiles gentoo with "-O9999" :-) ) to whom a massive failure of watercooling has happened some time ago.
Watercooling safety has evolved since then and it's now much more secure even for a simple enthusiast.
Now, a company specializing in data-centers has even more ability to offer safety.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408</parent>
</comment>
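DrYak's one-small-pump-per-module argument is easy to sanity-check with the coolant energy balance Q = mdot * c_p * dT. A sketch with assumed numbers (the 500 W per blade and the 10 K coolant temperature rise are illustrative, not from the article):

# How much water flow does one blade module actually need?
# Energy balance on the coolant: Q = mdot * c_p * dT.
blade_heat_w = 500.0     # heat one blade module dumps into the loop, W (assumed)
cp_water     = 4186.0    # specific heat of water, J/(kg*K)
delta_t      = 10.0      # allowed coolant temperature rise, K (assumed)

mdot = blade_heat_w / (cp_water * delta_t)   # kg/s
litres_per_hour = mdot * 3600.0              # ~1 kg of water is ~1 L
print(f"Required flow: {litres_per_hour:.0f} L/h per blade")   # ~43 L/h

A hobbyist pump such as the Laing DDC moves roughly a few hundred litres per hour, so one small pump per module leaves a wide margin, which is the point of that design.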
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131200</id>
	<title>OK so then how do you explain this?</title>
	<author>kaizendojo</author>
	<datestamp>1258483620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/" title="datacenterknowledge.com">Source for excerpt below</a> [datacenterknowledge.com]
<br> <br>
"Intel set up a proof-of-concept using 900 production servers in a 1,000 square foot trailer in New Mexico, which it divided into two equal sections using low-cost direct-expansion (DX) air conditioning equipment. Recirculated air was used to cool servers in one half of the facility, while the other used air-side economization, expelling all hot waste air outside the data center, and drawing in exterior air to cool the servers. It ran the experiment over a 10-month period, from October 2007 to August 2008.
<br> <br>
The temperature of the outside air ranged between 64 and 92 degrees, and Intel made no attempt to control humidity, and applied only minimal filtering for particulates, using "a standard household air filter that removed only large particles from the incoming air but permitted fine dust to pass through." As a result, humidity in the data center ranged from 4 percent to more than 90 percent, and the servers became covered with a fine layer of dust.
<br> <br>
Despite the dust and variation in humidity and temperature, the failure rate in the test area using air-side economizers was 4.46 percent, not much different from the 3.83 percent failure rate in Intel's main data center at the site over the same period. Interestingly, the trailer compartment with recirculated DX cooling had the lowest failure rate at just 2.45 percent, even lower than Intel's main data center."
<br> <br> <br>
And although the failure rate was similar, the electricity bills were night and day.  So I'm not buying into this unless you're running a HUGE data warehousing op with more transactions than WalMart...</htmltext>
<tokenext>Source for excerpt below [ datacenterknowledge.com ] " Intel set up a proof-of-concept using 900 production servers in a 1,000 square foot trailer in New Mexico , which it divided into two equal sections using low-cost direct-expansion ( DX ) air conditioning equipment .
Recirculated air was used to cool servers in one half of the facility , while the other used air-side economization , expelling all hot waste air outside the data center , and drawing in exterior air to cool the servers .
It ran the experiment over a 10-month period , from October 2007 to August 2008 .
The temperature of the outside air ranged between 64 and 92 degrees , and Intel made no attempt to control humidity , and applied only minimal filtering for particulates , using " a standard household air filter that removed only large particles from the incoming air but permitted fine dust to pass through .
" As a result , humidity in the data center ranged from 4 percent to more than 90 percent , and the servers became covered with a fine layer of dust .
Despite the dust and variation in humidity and temperature , the failure rate in the test area using air-side economizers was 4.46 percent , not much different from the 3.83 percent failure rate in Intel 's main data center at the site over the same period .
Interestingly , the trailer compartment with recirculated DX cooling had the lowest failure rate at just 2.45 percent , even lower than Intel 's main data center .
" And although the failure rate was similar , the electricity bills were night and day .
So I 'm not buying into this unless you 're running a HUGE data warehousing op with more transactions than WalMart ...</tokentext>
<sentencetext>Source for excerpt below [datacenterknowledge.com]
 
"Intel set up a proof-of-concept using 900 production servers in a 1,000 square foot trailer in New Mexico, which it divided into two equal sections using low-cost direct-expansion (DX) air conditioning equipment.
Recirculated air was used to cool servers in one half of the facility, while the other used air-side economization, expelling all hot waste air outside the data center, and drawing in exterior air to cool the servers.
It ran the experiment over a 10-month period, from October 2007 to August 2008.
The temperature of the outside air ranged between 64 and 92 degrees, and Intel made no attempt to control humidity, and applied only minimal filtering for particulates, using "a standard household air filter that removed only large particles from the incoming air but permitted fine dust to pass through.
" As a result, humidity in the data center ranged from 4 percent to more than 90 percent, and the servers became covered with a fine layer of dust.
Despite the dust and variation in humidity and temperature, the failure rate in the test area using air-side economizers was 4.46 percent, not much different from the 3.83 percent failure rate in Intel's main data center at the site over the same period.
Interestingly, the trailer compartment with recirculated DX cooling had the lowest failure rate at just 2.45 percent, even lower than Intel's main data center.
"
  
And although the failure rate was similar, the electricity bills were night and day.
So I'm not buying into this unless you're running a HUGE data warehousing op with more transactions than WalMart...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130888</id>
	<title>I thought we'd finally learned...</title>
	<author>pla</author>
	<datestamp>1258482240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yet another way to increase the density of server farms...  Useful if you <b>must</b> grow your servers in Manhattan,
a waste of money otherwise.<br>
<br>
Among the many great things the internet has brought us (*cough*porn*cough*), "location-independence" ranks pretty high
up there.  Your servers don't <b>need</b> to all fit in one cargo container that runs so hot it requires LN cooling.  For
all it matters, you could put them in a single line of half-racks on a mountain ridge, cooled naturally by the wind (with
some care to keep them rain-free, of course).<br>
<br>
I thought we'd learned our lesson in that regard when tests last year by MS and Intel (not to mention Google's truly
inspiring data center designs) showed a substantial payoff by letting servers run hotter and less densely packed.  Silly me.</htmltext>
<tokenext>Yet another way to increase the density of server farms... Useful if you must grow your servers in Manhattan , a waste of money otherwise .
Among the many great things the internet has brought us ( * cough * porn * cough * ) , " location-independence " ranks pretty high up there .
Your servers do n't need to all fit in one cargo container that runs so hot it requires LN cooling .
For all it matters , you could put them in a single line of half-racks on a mountain ridge , cooled naturally by the wind ( with some care to keep them rain-free , of course ) .
I thought we 'd learned our lesson in that regard when tests last year by MS and Intel ( not to mention Google 's truly inspiring data center designs ) showed a substantial payoff by letting servers run hotter and less densely packed .
Silly me .</tokentext>
<sentencetext>Yet another way to increase the density of server farms... Useful if you must grow your servers in Manhattan, a waste of money otherwise.
Among the many great things the internet has brought us (*cough*porn*cough*), "location-independence" ranks pretty high up there.
Your servers don't need to all fit in one cargo container that runs so hot it requires LN cooling.
For all it matters, you could put them in a single line of half-racks on a mountain ridge, cooled naturally by the wind (with some care to keep them rain-free, of course).
I thought we'd learned our lesson in that regard when tests last year by MS and Intel (not to mention Google's truly inspiring data center designs) showed a substantial payoff by letting servers run hotter and less densely packed.
Silly me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132768</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>Anonymous</author>
	<datestamp>1258488660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I believe that directly cooling components via liquid is way more effective than pushing some air around.</p><p>Think air-cooled (loud and ineffective) vehicles compared to modern liquid-cooled vehicles, which circulate liquid inside the engine (not the combustion chamber, of course)...</p><p>I agree about the extra cost for the technology; however, you could still use the same components if you e.g. submerge things in oil, which does not harm components and does not conduct electricity.</p></htmltext>
<tokenext>I believe that directly cooling components via liquid is way more effective than pushing some air around .
Think air-cooled ( loud and ineffective ) vehicles compared to modern liquid-cooled vehicles , which circulate liquid inside the engine ( not the combustion chamber , of course ) ...
I agree about the extra cost for the technology ; however , you could still use the same components if you e.g. submerge things in oil , which does not harm components and does not conduct electricity .</tokentext>
<sentencetext>I believe that directly cooling components via liquid is way more effective than pushing some air around.
Think air-cooled (loud and ineffective) vehicles compared to modern liquid-cooled vehicles, which circulate liquid inside the engine (not the combustion chamber, of course)...
I agree about the extra cost for the technology; however, you could still use the same components if you e.g. submerge things in oil, which does not harm components and does not conduct electricity.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130592</id>
	<title>Cray-2</title>
	<author>fahrbot-bot</author>
	<datestamp>1258480740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><blockquote><div><p>"The Iceotope approach takes liquid - in the form of an inert synthetic coolant, rather than water - directly down to the component level,"<nobr> <wbr></nobr>... "It does this by immersing the entire contents of each server in a "bath" of coolant within a sealed compartment, creating a cooling module."</p></div>
</blockquote><p>

Hmm... The <a href="http://en.wikipedia.org/wiki/Cray-2" title="wikipedia.org">Cray-2</a> [wikipedia.org] was cooled via complete immersion in <a href="http://en.wikipedia.org/wiki/Fluorinert" title="wikipedia.org">Fluorinert</a> [wikipedia.org] way back in circa 1988.  I was an admin on one (Ya, I'm old).  So, this is a bit different, but certainly not ground-breaking.</p></div>
	</htmltext>
<tokenext>" The Iceotope approach takes liquid - in the form of an inert synthetic coolant , rather than water - directly down to the component level , " ... " It does this by immersing the entire contents of each server in a " bath " of coolant within a sealed compartment , creating a cooling module .
" Hmm... The Cray-2 [ wikipedia.org ] was cooled via complete immersion in Fluorinert [ wikipedia.org ] way back in circa 1988 .
I was an admin on one ( Ya , I 'm old ) .
So , this is a bit different , but certainly not ground-breaking .</tokentext>
<sentencetext>"The Iceotope approach takes liquid - in the form of an inert synthetic coolant, rather than water - directly down to the component level," ... "It does this by immersing the entire contents of each server in a "bath" of coolant within a sealed compartment, creating a cooling module.
"


Hmm... The Cray-2 [wikipedia.org] was cooled via complete immersion in Fluorinert [wikipedia.org] way back in circa 1988.
I was an admin on one (Ya, I'm old).
So, this is a bit different, but certainly not ground-breaking.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132958</id>
	<title>I'm no expert...</title>
	<author>Kleppy</author>
	<datestamp>1258489320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>...but I've seen water cooling for my short time on this earth as the superior cooling method.  So much that I ran it myself.

Sony even put it in one of their <a href="http://www.pcworld.com/reviews/product/26280/review/vaio\_vgcra842g.html" title="pcworld.com" rel="nofollow">systems</a> [pcworld.com] once, but it was more of a passive system than a pump/coolant system. A big name using it right out of the box.

I don't know why leaks would be that big of an issue, as this isn't a high-pressure water system; being a closed loop, it is going to be a very low-pressure system unless you are trying to blow water as fast as you can through it. If it moves too fast, it will create a layer of stagnant coolant just off the surfaces and degrade cooling. Low (pressure) and slow (moving) should yield the best cooling. No need to move 2000 Lph unless you are using one pump for many heat sources to maintain flow, but I wouldn't put that many devices on one pump.</htmltext>
<tokenext>...but I 've seen water cooling for my short time on this earth as the superior cooling method .
So much that I ran it myself .
Sony even put it in one of their systems [ pcworld.com ] once , but it was more of a passive system than a pump/coolant system .
Big name using it right out of the box .
I do n't know why leaks would be that big of an issue as this is n't a high pressure water system ; being a closed loop it is going to be a very low pressure system unless you are trying to blow water as fast as you can through it .
If it moves too fast , it will create a layer of stagnant coolant just off the surfaces and degrade cooling .
Low ( pressure ) and slow ( moving ) should yield best cooling .
No need to move 2000 Lph unless you are using one pump for many heat sources to maintain flow , but I would n't put that many devices on one pump .</tokentext>
<sentencetext>...but I've seen water cooling for my short time on this earth as the superior cooling method.
So much that I ran it myself.
Sony even put it in one of their systems [pcworld.com] once, but it was more of a passive system than a pump/coolant system.
Big name using it right out of the box.
I don't know why leaks would be that big of an issue as this isn't a high pressure water system; being a closed loop it is going to be a very low pressure system unless you are trying to blow water as fast as you can through it.
If it moves too fast, it will create a layer of stagnant coolant just off the surfaces and degrade cooling.
Low (pressure) and slow (moving) should yield the best cooling.
No need to move 2000 Lph unless you are using one pump for many heat sources to maintain flow, but I wouldn't put that many devices on one pump.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30142480</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>Smidge204</author>
	<datestamp>1257085740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>I don't have time now to RTA but if one was to use say a non conductive, non corrosive refrigerant one could make use of the lower vapor point to more efficantly remove the heat AND even better lower the operating temperature considerably.</p></div><p>Direct boiling is less than ideal because you get hot spots at the bubble nucleation sites. Using a phase change cooling scheme does not have any specific <i>heat removal</i> advantages. All phase change does is guarantee the temperature of the cooling medium - it makes no guarantees on heat flux or temperature of the object being cooled.</p><p>And such an environment virtually guarantees non-serviceability of the components.</p><p><div class="quote"><p>Hey we run hundreds of hp high voltage motors immersed in refrigerant every day in industrial settings.</p></div><p>Do we now? If you're referring to hermetically sealed compressors (such as in a refrigerator or AC unit) then the motor itself is most certainly not "immersed in the refrigerant." I'd be curious to know if you have any specific examples, though.</p><p><div class="quote"><p>Just using the benefits of the more efficient heat transfer</p></div><p>Again, such is not guaranteed.</p><p><div class="quote"><p>I personally like the idea of the extremely low operating temps that could be used to enhance performance.</p></div><p>Low operating temps do not automatically equate to higher performance. They allow you to run hardware over spec without frying it - but at the cost of stability. I don't think too many commercial server farms would be willing to make that trade.</p><p>Lowering the temp of the hardware also begins to work against you, economically. The farther the temp of the hardware gets below the ambient temp you ultimately reject the heat to, the more work you have to do. This should be self-evident: dQ/dt = h*A*(T1-T2). Maintaining a temperature gradient of 30C at 100 watts takes a third of the work of maintaining a gradient of 90C at 100 watts.</p><p>Maintaining a temperature gradient is, after all, exactly what cooling is all about.<br>=Smidge=</p></div>
	</htmltext>
<tokenext>I do n't have time now to RTA but if one was to use say a non conductive , non corrosive refrigerant one could make use of the lower vapor point to more efficantly remove the heat AND even better lower the operating temperature considerably .
Direct boiling is less than ideal because you get hot spots at the bubble nucleation sites .
Using a phase change cooling scheme does not have any specific heat removal advantages .
All phase change does is guarantee the temperature of the cooling medium - it makes no guarantees on heat flux or temperature of the object being cooled .
And such an environment virtually guarantees non-serviceability of the components .
Hey we run hundreds of hp high voltage motors immersed in refrigerant every day in industrial settings .
Do we now ?
If you 're referring to hermetically sealed compressors ( such as in a refrigerator or AC unit ) then the motor itself is most certainly not " immersed in the refrigerant . "
I 'd be curious to know if you have any specific examples , though .
Just using the benefits of the more efficient heat transfer
Again , such is not guaranteed .
I personally like the idea of the extremely low operating temps that could be used to enhance performance .
Low operating temps do not automatically equate to higher performance .
They allow you to run hardware over spec without frying it - but at the cost of stability .
I do n't think too many commercial server farms would be willing to make that trade .
Lowering the temp of the hardware also begins to work against you , economically .
The farther the temp of the hardware gets below the ambient temp you ultimately reject the heat to , the more work you have to do .
This should be self-evident : dQ/dt = h * A * ( T1 - T2 ) .
Maintaining a temperature gradient of 30C at 100 watts takes a third of the work of maintaining a gradient of 90C at 100 watts .
Maintaining a temperature gradient is , after all , exactly what cooling is all about .
= Smidge =</tokentext>
<sentencetext>I don't have time now to RTA but if one was to use say a non conductive, non corrosive refrigerant one could make use of the lower vapor point to more efficantly remove the heat AND even better lower the operating temperature considerably.
Direct boiling is less than ideal because you get hot spots at the bubble nucleation sites.
Using a phase change cooling scheme does not have any specific heat removal advantages.
All phase change does is guarantee the temperature of the cooling medium - it makes no guarantees on heat flux or temperature of the object being cooled.
And such an environment virtually guarantees non-serviceability of the components.
Hey we run hundreds of hp high voltage motors immersed in refrigerant every day in industrial settings.
Do we now?
If you're referring to hermetically sealed compressors (such as in a refrigerator or AC unit) then the motor itself is most certainly not "immersed in the refrigerant."
I'd be curious to know if you have any specific examples, though.
Just using the benefits of the more efficient heat transfer
Again, such is not guaranteed.
I personally like the idea of the extremely low operating temps that could be used to enhance performance.
Low operating temps do not automatically equate to higher performance.
They allow you to run hardware over spec without frying it - but at the cost of stability.
I don't think too many commercial server farms would be willing to make that trade.
Lowering the temp of the hardware also begins to work against you, economically.
The farther the temp of the hardware gets below the ambient temp you ultimately reject the heat to, the more work you have to do.
This should be self-evident: dQ/dt = h*A*(T1-T2).
Maintaining a temperature gradient of 30C at 100 watts takes a third of the work of maintaining a gradient of 90C at 100 watts.
Maintaining a temperature gradient is, after all, exactly what cooling is all about.
=Smidge=
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30139342</parent>
</comment>
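The "third of the work" figure in the comment above follows from treating the cooler as an ideal heat pump: the minimum work to move Q across a gradient dT, with the hardware on the cold side at T_cold, is W = Q * dT / T_cold, which is linear in the gradient. A sketch using the comment's 100 W and 30 C / 90 C gradients, with an assumed 300 K ambient for heat rejection:

# Minimum (Carnot) work to pump 100 W of heat across a temperature gradient.
# W = Q * (T_hot - T_cold) / T_cold, hardware on the cold side. Temperatures
# are illustrative assumptions.
def carnot_work(q_watts, t_cold_k, delta_t_k):
    return q_watts * delta_t_k / t_cold_k

Q = 100.0
ambient = 300.0   # K, where the heat is finally rejected (assumed)

for dT in (30.0, 90.0):
    t_cold = ambient - dT   # how far below ambient the hardware sits
    w = carnot_work(Q, t_cold, dT)
    print(f"gradient {dT:>4.0f} K -> at least {w:5.1f} W of pumping work")
# 30 K gradient: ~11 W; 90 K gradient: ~43 W. With a fixed T_cold the ratio
# would be exactly 1:3, which is the comment's "a third of the work"; letting
# T_cold drop with the gradient makes the penalty slightly worse still.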
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130676</id>
	<title>weight?</title>
	<author>Anonymous</author>
	<datestamp>1258481220000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>How much does a rack full of water-cooled blades weigh?</p><p>Never thought I'd see the UPS become the lightest thing in the server room.</p></htmltext>
<tokenext>How much does a rack full of water-cooled blades weigh ? Never thought I 'd see the UPS become the lightest thing in the server room .</tokentext>
<sentencetext>How much does a rack full of water-cooled blades weigh?
Never thought I'd see the UPS become the lightest thing in the server room.</sentencetext>
</comment>
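A back-of-envelope answer to the weight question, with every figure an assumption (module count, coolant volume per sealed module, and a density typical of fluorinated coolants rather than anything from the article):

# Extra rack weight from the coolant alone; all quantities are illustrative.
modules_per_rack  = 40
litres_per_module = 2.5    # sealed coolant bath around one board (assumed)
coolant_density   = 1.8    # kg/L, roughly what fluorinated coolants weigh

extra_kg = modules_per_rack * litres_per_module * coolant_density
print(f"Coolant alone adds roughly {extra_kg:.0f} kg per rack")   # ~180 kg

Add the water in the rack-level loop and the heavier sealed enclosures, and the quip about the UPS becoming the lightest thing in the room is not far off.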
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130802</id>
	<title>What about the benefits to Joe User?</title>
	<author>butabozuhi</author>
	<datestamp>1258481880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>There are probably great economies of scale for datacenters, but what about Joe User? The article wasn't clear if 'included in the manufacturing process' would include consumer level systems. Just thinking that cost savings for datacenters is great, but I'd be really interested if it helped out the regular consumer (not to mention what kind of operational issues might this bring up?).</htmltext>
<tokenext>There are probably great economies of scale for datacenters , but what about Joe User ?
The article was n't clear if 'included in the manufacturing process ' would include consumer level systems .
Just thinking that cost savings for datacenters is great , but I 'd be really interested if it helped out the regular consumer ( not to mention what kind of operational issues might this bring up ?
) .</tokentext>
<sentencetext>There are probably great economies of scale for datacenters, but what about Joe User?
The article wasn't clear if 'included in the manufacturing process' would include consumer level systems.
Just thinking that cost savings for datacenters is great, but I'd be really interested if it helped out the regular consumer (not to mention what kind of operational issues might this bring up?
).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134868</id>
	<title>how about just cutting down the ac to dc to ac to</title>
	<author>Joe The Dragon</author>
	<datestamp>1258452420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How about just cutting down the AC-to-DC-to-AC-to-DC part and making a common DC bus, with the big and hot AC-to-DC part kept away from the servers, so they can just have DC-to-DC converters in them?</p><p>Water has a lot that can go bad with it - do you want some water to mess up a $1000+ server?</p></htmltext>
<tokenext>How about just cutting down the AC to DC to AC to DC part and making a common DC bus , with the big and hot AC to DC part kept away from the servers , so they can just have DC to DC converters in them ?
Water has a lot that can go bad with it - do you want some water to mess up a $ 1000 + server ?</tokentext>
<sentencetext>How about just cutting down the AC to DC to AC to DC part and making a common DC bus, with the big and hot AC to DC part kept away from the servers, so they can just have DC to DC converters in them?
Water has a lot that can go bad with it - do you want some water to mess up a $1000+ server?</sentencetext>
</comment>
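The common-DC-bus idea in the comment above is about cascaded conversion losses: every AC/DC or DC/AC stage between the utility and the motherboard wastes a few percent, and the losses multiply. A sketch with illustrative per-stage efficiencies (assumptions, not measured figures for any particular UPS or PSU):

# Cascaded conversion losses: utility AC -> UPS DC -> UPS AC -> server PSU DC,
# versus one facility rectifier feeding a DC bus. Efficiencies are assumed.
from functools import reduce

def chain_efficiency(stages):
    return reduce(lambda a, b: a * b, stages, 1.0)

double_conversion = [0.96, 0.96, 0.92]  # UPS rectifier, UPS inverter, server PSU
dc_bus            = [0.96, 0.94]        # one big rectifier, then DC/DC in the server

for name, stages in (("AC-DC-AC-DC chain", double_conversion),
                     ("common DC bus", dc_bus)):
    eff = chain_efficiency(stages)
    print(f"{name:18s}: {eff:.1%} delivered, {1 - eff:.1%} lost as heat")

Under these assumed numbers the DC bus wastes roughly a third less power in conversion, which is heat the cooling system never has to remove in the first place.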
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131188</id>
	<title>Re:Quick Release</title>
	<author>sexconker</author>
	<datestamp>1258483620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not to mention the very simple fact that when something goes wrong with the servers, you have a team of guys ready to fix it in no time flat.</p><p>When something goes wrong with the plumbing no one can touch it unless they're a licensed plumber.  He'll take a few days to get there and a few days to do the job, AND he'll charge you more than you paid your server guys in the same time frame.</p></htmltext>
<tokenext>Not to mention the very simple fact that when something goes wrong with the servers , you have a team of guys ready to fix it in no time flat .
When something goes wrong with the plumbing , no one can touch it unless they 're a licensed plumber .
He 'll take a few days to get there and a few days to do the job , AND he 'll charge you more than you paid your server guys in the same time frame .</tokentext>
<sentencetext>Not to mention the very simple fact that when something goes wrong with the servers, you have a team of guys ready to fix it in no time flat.
When something goes wrong with the plumbing, no one can touch it unless they're a licensed plumber.
He'll take a few days to get there and a few days to do the job, AND he'll charge you more than you paid your server guys in the same time frame.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132232</id>
	<title>Re:A few questions</title>
	<author>turtleshadow</author>
	<datestamp>1258486920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In the data centers I've been in, the SAN array and tape library are in a totally different area, level, or building than the computing farm. This is because of security and accessibility for the librarians and vendors of long-term storage. You use fibre or some other technology to connect the two areas.</p><p>With enough tapes and disks, it means a tech or librarian is always walking around handling media, and I'd rather they not touch my server cabinet inadvertently. Being 1 company, we don't cage intra-department unless it's mission critical.</p><p>The drives that contain the code to start the system, or fast local space, could very easily be insulated in some other part of the cabinet. The proposed system is geared to big systems which don't require 1 CPU / 1 disk to start individual CPUs. The blade is configured with a channel to bootstrap from a disk or disk image somewhere else in the complex. Anyhow, with an 8-32GB MicroSD you can put that chip into an external USB port and configure a boot from that.</p><p>The equilibrium of the system is the most important thing. Large swings of temperature and humidity kill rotating media and robotic tape libraries. These occur when service doors are opened for substantial periods of time for a "hot swap component" removal or extended repairs which involve a cool-down of the mechanical parts.</p><p>My most pressing admin question is: how does the telemetry come in from the complex to warn me of a heat/pump/flow failure? Is it easy to use, is it secured (i.e. no one can snmp/telnet to a dumb pump and shut it off), and is it accountable, with a robust logging system I can integrate into my business?</p></htmltext>
<tokenext>In the data centers I 've been in , the SAN array and tape library are in a totally different area , level , or building than the computing farm .
This is because of security and accessibility for the librarians and vendors of long-term storage .
You use fibre or some other technology to connect the two areas .
With enough tapes and disks it means a tech or librarian is always walking around handling media , and I 'd rather they not touch my server cabinet inadvertently .
Being 1 company we do n't cage intra-department unless it 's mission critical .
The drives that contain the code to start the system or fast local space could very easily be insulated in some other part of the cabinet .
The proposed system is geared to big systems which do n't require 1 CPU / 1 disk to start individual CPUs .
The blade is configured with a channel to bootstrap from a disk or disk image somewhere else in the complex .
Anyhow , with an 8-32GB MicroSD you can put that chip into an external USB port and configure a boot from that .
The equilibrium of the system is the most important thing .
Large swings of temps and humidity kill rotating media and robotic tape libraries .
These occur when service doors are opened for substantial periods of time for a " hot swap component " removal or extended repairs which involve a cool-down of the mechanical parts .
My most pressing admin question is how does the telemetry come in from the complex to warn me of a heat/pump/flow failure ?
Is it easy to use , is it secured ( i.e. no one can snmp/telnet to a dumb pump and shut it off ) , and is it accountable , with a robust logging system I can integrate into my business ?</tokentext>
<sentencetext>In the data centers I've been in, the SAN array and tape library are in a totally different area, level, or building than the computing farm.
This is because of security and accessibility for the librarians and vendors of long-term storage.
You use fibre or some other technology to connect the two areas.
With enough tapes and disks, it means a tech or librarian is always walking around handling media, and I'd rather they not touch my server cabinet inadvertently.
Being 1 company, we don't cage intra-department unless it's mission critical.
The drives that contain the code to start the system, or fast local space, could very easily be insulated in some other part of the cabinet.
The proposed system is geared to big systems which don't require 1 CPU / 1 disk to start individual CPUs.
The blade is configured with a channel to bootstrap from a disk or disk image somewhere else in the complex.
Anyhow, with an 8-32GB MicroSD you can put that chip into an external USB port and configure a boot from that.
The equilibrium of the system is the most important thing.
Large swings of temps and humidity kill rotating media and robotic tape libraries.
These occur when service doors are opened for substantial periods of time for a "hot swap component" removal or extended repairs which involve a cool-down of the mechanical parts.
My most pressing admin question is: how does the telemetry come in from the complex to warn me of a heat/pump/flow failure?
Is it easy to use, is it secured (i.e. no one can snmp/telnet to a dumb pump and shut it off), and is it accountable, with a robust logging system I can integrate into my business?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130094</parent>
</comment>
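The telemetry question above is, at bottom, threshold monitoring with an audit trail. The toy loop below is not the Iceotope interface (TFA does not describe one); read_flow_lpm() and read_coolant_temp_c() are hypothetical stand-ins for whatever SNMP/IPMI/vendor call a real deployment would poll, and the thresholds are assumptions:

# Toy coolant-telemetry watchdog; the sensor reads are hypothetical placeholders.
import logging
import random
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

MIN_FLOW_LPM = 30.0   # assumed alarm threshold, litres per minute
MAX_TEMP_C   = 55.0   # assumed coolant alarm temperature

def read_flow_lpm() -> float:
    """Hypothetical sensor read; replace with the real telemetry call."""
    return random.gauss(45.0, 5.0)

def read_coolant_temp_c() -> float:
    """Hypothetical sensor read; replace with the real telemetry call."""
    return random.gauss(45.0, 3.0)

def check_once() -> None:
    flow, temp = read_flow_lpm(), read_coolant_temp_c()
    if flow < MIN_FLOW_LPM:
        logging.error("flow %.1f L/min below threshold %.1f", flow, MIN_FLOW_LPM)
    if temp > MAX_TEMP_C:
        logging.error("coolant %.1f C above threshold %.1f", temp, MAX_TEMP_C)
    logging.info("flow=%.1f L/min temp=%.1f C", flow, temp)

if __name__ == "__main__":
    for _ in range(3):
        check_once()
        time.sleep(1)

In practice the logging handler would feed whatever central syslog or monitoring system the site already audits, which is the "robust logging I can integrate" part of the question.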
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132064</id>
	<title>Not for the real world</title>
	<author>wiedzmin</author>
	<datestamp>1258486380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In all honesty, this being a cool concept and all, it would not work in the real world because a) it cannot be retrofitted to existing systems and b) it requires the use of proprietary, unknown hardware. How many large companies are going to switch from tried and trusted server providers (like HP, IBM, Dell and as of late Cisco) in favor of something that, well, looks nifty? Their only shot at this not becoming vaporware is to try and sell the technology to a major server manufacturer, and even then I doubt it will work - imagine all the effort it would require to retrofit your existing data center for liquid cooling... liquids and server rooms don't go well together.</htmltext>
<tokenext>In all honesty , this being a cool concept and all , it would not work in the real world because a ) it can not be retrofitted to existing systems and b ) it requires the use of proprietary , unknown hardware .
How many large companies are going to switch from tried and trusted server providers ( like HP , IBM , Dell and as of late Cisco ) in favor of something that , well , looks nifty ?
Their only shot at this not becoming vaporware is to try and sell the technology to a major server manufacturer , and even then I doubt it will work - imagine all the effort it would require to retrofit your existing data center for liquid cooling... liquids and server rooms do n't go well together .</tokentext>
<sentencetext>In all honesty, this being a cool concept and all, it would not work in the real world because a) it cannot be retrofitted to existing systems and b) it requires the use of proprietary, unknown hardware.
How many large companies are going to switch from tried and trusted server providers (like HP, IBM, Dell and as of late Cisco) in favor of something that, well, looks nifty?
Their only shot at this not becoming vaporware is to try and sell the technology to a major server manufacturer, and even then I doubt it will work - imagine all the effort it would require to retrofit your existing data center for liquid cooling... liquids and server rooms don't go well together.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134298</id>
	<title>Done as a hobby 9 years  ago</title>
	<author>Nikademus</author>
	<datestamp>1258450620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wow, amazing, they finally produced something like what has been done on my website more than 9 years ago:<br><a href="http://www.octools.com/index.cgi?caller=articles/submersion/submersion.html" title="octools.com">http://www.octools.com/index.cgi?caller=articles/submersion/submersion.html</a> [octools.com]</p></htmltext>
<tokenext>Wow , amazing , they finally produced something like what has been done on my website more than 9 years ago : http : //www.octools.com/index.cgi ? caller = articles/submersion/submersion.html [ octools.com ]</tokentext>
<sentencetext>Wow, amazing, they finally produced something like what has been done on my website more than 9 years ago: http://www.octools.com/index.cgi?caller=articles/submersion/submersion.html [octools.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131050</id>
	<title>Re:Doesn't look practical</title>
	<author>hey</author>
	<datestamp>1258482960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That was my thought too.<br>I can see heat sinks with liquid pipes in them in the future.  Plus regular air cooling.  i.e. a hybrid solution.</p></htmltext>
<tokenext>That was my thought too .
I can see heat sinks with liquid pipes in them in the future .
Plus regular air cooling .
i.e. a hybrid solution .</tokentext>
<sentencetext>That was my thought too.
I can see heat sinks with liquid pipes in them in the future.
Plus regular air cooling.
i.e. a hybrid solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130642</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30142868</id>
	<title>Re:Hmmm, so what happens when internals break?</title>
	<author>1sockchuck</author>
	<datestamp>1257088260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Re vendors: Iceotope makes the cooling system. The <a href="http://www.datacenterknowledge.com/archives/2009/11/17/iceotope-a-new-take-on-liquid-cooling/" title="datacenterknowledge.com">demo at SC09</a> [datacenterknowledge.com] is using servers from Boston Limited, a UK server firm.</htmltext>
<tokenext>Re vendors : Iceotope makes the cooling system .
The demo at SC09 [ datacenterknowledge.com ] is using servers from Boston Limited , a UK server firm .</tokentext>
<sentencetext>Re vendors: Iceotope makes the cooling system.
The demo at SC09 [datacenterknowledge.com] is using servers from Boston Limited, a UK server firm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130694</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131370</id>
	<title>Re:Water is a hassle</title>
	<author>Anonymous</author>
	<datestamp>1258484160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You may want to call the folks behind DRM. I've read elsewhere on Slashdot that they've been working on technology to make water not wet. It may come in very handy for your application.</p></htmltext>
<tokenext>You may want to call the folks behind DRM .
I 've read elsewhere on Slashdot that they 've been working on technology to make water not wet .
It may come in very handy for your application .</tokentext>
<sentencetext>You may want to call the folks behind DRM.
I've read elsewhere on Slashdot that they've been working on technology to make water not wet.
It may come in very handy for your application.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536</id>
	<title>Water is a hassle</title>
	<author>BlueParrot</author>
	<datestamp>1258480560000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>I work with particle accelerators that draw enough power that we don't have much choice but to use water cooling, and even though we have major radiation sources, high voltage running across the entire place, liquid helium cooled magnets, high power klystrons that feed microwaves to the accelerator cavities etc... the only thing that typically requires me to place an emergency call during a night shift is still water leaks.</p><p>Water is just that much of a hassle around electronics. Even an absolutely minor leak can raise the humidity in a place you really don't want humidity, it evaporates and then condenses on the colder parts of the system where even a single drop can cause a short circuit and fry some piece of equipment. After it absorbs dirt and dust from the surroundings it starts attacking most materials corrosively, which may not be noticed at first but gives sudden unexpected problems after a few years. If you don't keep the cooling system itself in perfect condition valves and taps will start corroding and you get blockages. Maintenance is a pain because you have to power everything down if you want to move just 1 pipe etc...</p><p>I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird.</p></htmltext>
<tokenext>I work with particle accelerators that draw enough power that we do n't have much choice but to use water cooling , and even though we have major radiation sources , high voltage running across the entire place , liquid helium cooled magnets , high power klystrons that feed microwaves to the accelerator cavities etc... the only thing that typically requires me to place an emergency call during a night shift is still water leaks .
Water is just that much of a hassle around electronics .
Even an absolutely minor leak can raise the humidity in a place you really do n't want humidity , it evaporates and then condenses on the colder parts of the system where even a single drop can cause a short circuit and fry some piece of equipment .
After it absorbs dirt and dust from the surroundings it starts attacking most materials corrosively , which may not be noticed at first but gives sudden unexpected problems after a few years .
If you do n't keep the cooling system itself in perfect condition valves and taps will start corroding and you get blockages .
Maintenance is a pain because you have to power everything down if you want to move just 1 pipe etc...
I just do n't see why you would go through the hassle with water cooling unless you actually have to , and quite frankly if your servers draw enough power to force you to use water for cooling then you 're doing something weird .</tokentext>
<sentencetext>I work with particle accelerators that draw enough power that we don't have much choice but to use water cooling, and even though we have major radiation sources, high voltage running across the entire place, liquid helium cooled magnets, high power klystrons that feed microwaves to the accelerator cavities etc... the only thing that typically requires me to place an emergency call during a night shift is still water leaks.
Water is just that much of a hassle around electronics.
Even an absolutely minor leak can raise the humidity in a place you really don't want humidity, it evaporates and then condenses on the colder parts of the system where even a single drop can cause a short circuit and fry some piece of equipment.
After it absorbs dirt and dust from the surroundings it starts attacking most materials corrosively, which may not be noticed at first but gives sudden unexpected problems after a few years.
If you don't keep the cooling system itself in perfect condition valves and taps will start corroding and you get blockages.
Maintenance is a pain because you have to power everything down if you want to move just 1 pipe etc...
I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130616</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>rhyno46</author>
	<datestamp>1258480920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's cheap.  Only 93\% of whatever you are paying now.</htmltext>
<tokenext>It 's cheap .
Only 93 \ % of whatever you are paying now .</tokentext>
<sentencetext>It's cheap.
Only 93\% of whatever you are paying now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131294</id>
	<title>Server standardization...</title>
	<author>HockeyPuck</author>
	<datestamp>1258483980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The problem with this is that it requires server manufacturers to standardize their designs.  There was talk a few years ago about standardizing Bladeservers.  I don't see this happening as there's too much control in the bladecenter chassis, switch interfaces, management abilities etc.  Plus why would IBM want to sell an empty chassis and then let the customer fill it with HP C-Class blades?</p><p>Even racks themselves from IBM/HP/Dell/EMC/netapp/Sun aren't standardized, other than they are 19" wide.  This is why if you mix vendors in the same rack you've got to adjust the depth of the rails.</p><p>As for going out and buying third party cabinets (APC for example), some of these have complex ductwork associated with them which makes them take up more than one tile of width.</p><p>These guys probably want two things, either IBM/HP/DELL license their technology or someone buys the company.  Also, last I checked, there's not a large amount of room in my servers.</p></htmltext>
<tokenext>The problem with this is that it requires server manufacturers to standardize their designs .
There was talk a few years ago about standardizing Bladeservers .
I do n't see this happening as there 's too much control in the bladecenter chassis , switch interfaces , management abilities etc .
Plus why would IBM want to sell an empty chassis and then let the customer fill it with HP C-Class blades ?
Even racks themselves from IBM/HP/Dell/EMC/netapp/Sun are n't standardized , other than they are 19 " wide .
This is why if you mix vendors in the same rack you 've got to adjust the depth of the rails .
As for going out and buying third party cabinets ( APC for example ) , some of these have complex ductwork associated with them which makes them take up more than one tile of width .
These guys probably want two things , either IBM/HP/DELL license their technology or someone buys the company .
Also , last I checked , there 's not a large amount of room in my servers .</tokentext>
<sentencetext>The problem with this is that it requires server manufacturers to standardize their designs.
There was talk a few years ago about standardizing Bladeservers.
I don't see this happening as there's too much control in the bladecenter chassis, switch interfaces, management abilities etc.
Plus why would IBM want to sell an empty chassis and then let the customer fill it with HP C-Class blades?
Even racks themselves from IBM/HP/Dell/EMC/netapp/Sun aren't standardized, other than they are 19" wide.
This is why if you mix vendors in the same rack you've got to adjust the depth of the rails.
As for going out and buying third party cabinets (APC for example), some of these have complex ductwork associated with them which makes them take up more than one tile of width.
These guys probably want two things, either IBM/HP/DELL license their technology or someone buys the company.
Also, last I checked, there's not a large amount of room in my servers.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130810</id>
	<title>Almost...</title>
	<author>hatemonger</author>
	<datestamp>1258481880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>There's a joke somewhere about your server being so ugly you have to put a bag over it before you go inside, but I can't quite work it. Help?</htmltext>
<tokenext>There 's a joke somewhere about your server being so ugly you have to put a bag over it before you go inside , but I ca n't quite work it .
Help ?</tokentext>
<sentencetext>There's a joke somewhere about your server being so ugly you have to put a bag over it before you go inside, but I can't quite work it.
Help?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130512</id>
	<title>Water cooling on that size is no small feat...</title>
	<author>wandazulu</author>
	<datestamp>1258480500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>The ES/9000 that I had contact with was a series of cabinets that were all water-cooled from the outside in...it was a maze of copper pipes all around the edges and back and looked like a fridge. When you opened a cabinet, you could feel a blast of cold air hit you.</p><p>It was no trivial feat to do this, they had to install a separate water tank, some generators (I remember one of the operations guys pointing to a Detroit Diesel generator outside in the alley and saying it was just for the computer's water system), moved a bathroom (only water they wanted around the computer was the special chilled stuff), and I can distinctly remember seeing the manuals(!)... 3-inch thick binders with the IBM logo on them, and all they were for was the planning and maintenance of the water system.</p><p>No wonder it took almost a year to install the machine.</p></htmltext>
<tokenext>The ES/9000 that I had contact with was a series of cabinets that were all water-cooled from the outside in...it was a maze of copper pipes all around the edges and back and looked like a fridge .
When you opened a cabinet , you could feel a blast of cold air hit you .
It was no trivial feat to do this , they had to install a separate water tank , some generators ( I remember one of the operations guys pointing to a Detroit Diesel generator outside in the alley and saying it was just for the computer 's water system ) , moved a bathroom ( only water they wanted around the computer was the special chilled stuff ) , and I can distinctly remember seeing the manuals ( ! ) ...
3-inch thick binders with the IBM logo on them , and all they were for was the planning and maintenance of the water system .
No wonder it took almost a year to install the machine .</tokentext>
<sentencetext>The ES/9000 that I had contact with was a series of cabinets that were all water-cooled from the outside in...it was a maze of copper pipes all around the edges and back and looked like a fridge.
When you opened a cabinet, you could feel a blast of cold air hit you.
It was no trivial feat to do this, they had to install a separate water tank, some generators (I remember one of the operations guys pointing to a Detroit Diesel generator outside in the alley and saying it was just for the computer's water system), moved a bathroom (only water they wanted around the computer was the special chilled stuff), and I can distinctly remember seeing the manuals(!)...
3-inch thick binders with the IBM logo on them, and all they were for was the planning and maintenance of the water system.
No wonder it took almost a year to install the machine.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132152</id>
	<title>Data Centre/Center?</title>
	<author>Anonymous</author>
	<datestamp>1258486620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ok, so they are British and they spell 'center' with the 'er' the other way around.  Why don't they spell server as 'servre'?</p></htmltext>
<tokenext>Ok , so they are British and they spell 'center ' with the 'er ' the other way around .
Why do n't they spell server as 'servre ' ?</tokentext>
<sentencetext>Ok, so they are British and they spell 'center' with the 'er' the other way around.
Why don't they spell server as 'servre'?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130498</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>jaggeh</author>
	<datestamp>1258480440000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p><div class="quote"><p>That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?</p></div><p><div class="quote"><p>Figures cited by Iceotope show that the average air-cooled data centre with around 1000 servers costs around $788,400 (&pound;469,446) to cool over three years. The Iceotope system claims to eliminate the need for CRAC units and chillers by connecting the servers in the synthetic cool bags to a channel of warm water that transfers the heat outside the facility. This so-called &ldquo;end to end liquid&rdquo; cooling means that a data centre, fully equipped with Iceotope-cooled servers, could cut cooling costs to just $52,560 - a 93 percent reduction, the company states.</p> </div><p>Taking the above figures into account, as long as the cost to install is under the 200k figure, there's an incentive to switch.</p></div>
	</htmltext>
<tokenext>That 's really nifty , and I 'm sure it works ok and everything , but... how much does it cost ?
Figures cited by Iceotope show that the average air-cooled data centre with around 1000 servers costs around $ 788,400 ( £ 469,446 ) to cool over three years .
The Iceotope system claims to eliminate the need for CRAC units and chillers by connecting the servers in the synthetic cool bags to a channel of warm water that transfers the heat outside the facility .
This so-called " end to end liquid " cooling means that a data centre , fully equipped with Iceotope-cooled servers , could cut cooling costs to just $ 52,560 - a 93 percent reduction , the company states .
Taking the above figures into account , as long as the cost to install is under the 200k figure , there 's an incentive to switch .</tokentext>
<sentencetext>That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?
Figures cited by Iceotope show that the average air-cooled data centre with around 1000 servers costs around $788,400 (£469,446) to cool over three years.
The Iceotope system claims to eliminate the need for CRAC units and chillers by connecting the servers in the synthetic cool bags to a channel of warm water that transfers the heat outside the facility.
This so-called “end to end liquid” cooling means that a data centre, fully equipped with Iceotope-cooled servers, could cut cooling costs to just $52,560 - a 93 percent reduction, the company states.
Taking the above figures into account, as long as the cost to install is under the 200k figure, there's an incentive to switch.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996</parent>
</comment>
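A quick sanity check of the arithmetic in the comment above, as a minimal sketch in Python. The dollar figures are the ones quoted from Iceotope's claim; the three-year horizon and the payback_years helper are illustrative assumptions, not anything from the article:

```python
# Cooling-cost figures quoted from the Iceotope claim above (1000-server data centre).
AIR_COOLED_3YR = 788_400.0   # USD to cool over three years with air cooling
ICEOTOPE_3YR = 52_560.0      # USD to cool over three years with the Iceotope system

savings_3yr = AIR_COOLED_3YR - ICEOTOPE_3YR      # ~735,840 USD over three years
savings_per_year = savings_3yr / 3               # ~245,280 USD per year
reduction = savings_3yr / AIR_COOLED_3YR         # ~0.93, i.e. the quoted 93 percent

def payback_years(install_cost):
    """Years of cooling savings needed to recover a given installation cost (assumed helper)."""
    return install_cost / savings_per_year

print(f"Savings over 3 years: ${savings_3yr:,.0f} ({reduction:.0%})")
print(f"Payback on a $200k retrofit: {payback_years(200_000):.2f} years")
```

On these numbers the quoted 93 percent reduction checks out, and an installation cost around the 200k mentioned above would be recovered in under a year of cooling savings.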
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130606</id>
	<title>Re:Ugh.</title>
	<author>camperdave</author>
	<datestamp>1258480860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No mention of cost in the articles I skimmed; however, no mention of cool bags either.  Actually I'm more reminded of <a href="http://www.thepelicanstore.com/Pelican-1560-Case.jpeg?id=206" title="thepelicanstore.com">Pelican cases</a> [thepelicanstore.com] than <a href="http://www.made-in-jiangsu.com/image/2f0j00oCtQDLHcqVkfM/PP-Non-Woven-Cooler-Bag-FH-19-.jpg" title="made-in-jiangsu.com">cool bags</a> [made-in-jiangsu.com]. What they're doing is immersing a motherboard in an inert synthetic liquid, and sealing that in one half of a hard shell.  They're running coolant water through the other half of the hard shell through a distribution unit in the rack.  All of the coolant water runs through a heat exchanger, which is connected to the building's water cooling system.<br> <br>
So: sealed liquid-immersed motherboard -&gt; sealed rack coolant flow -&gt; building's water supply.  No air cooling, just liquid to liquid to liquid, and the liquids are isolated from each other via heat exchangers.</htmltext>
<tokenext>No mention of cost in the articles I skimmed ; however , no mention of cool bags either .
Actually I 'm more reminded of Pelican cases [ thepelicanstore.com ] than cool bags [ made-in-jiangsu.com ] .
What they 're doing is immersing a motherboard in an inert synthetic liquid , and sealing that in one half of a hard shell .
They 're running coolant water through the other half of the hard shell through a distribution unit in the rack .
All of the coolant water runs through a heat exchanger , which is connected to the building 's water cooling system .
So : sealed liquid-immersed motherboard - &gt; sealed rack coolant flow - &gt; building 's water supply .
No air cooling , just liquid to liquid to liquid , and the liquids are isolated from each other via heat exchangers .</tokentext>
<sentencetext>No mention of cost in the articles I skimmed; however, no mention of cool bags either.
Actually I'm more reminded of Pelican cases [thepelicanstore.com] than cool bags [made-in-jiangsu.com].
What they're doing is immersing a motherboard in an inert synthetic liquid, and sealing that in one half of a hard shell.
They're running coolant water through the other half of the hard shell through a distribution unit in the rack.
All of the coolant water runs through a heat exchanger, which is connected to the building's water cooling system.
So: sealed liquid-immersed motherboard -&gt; sealed rack coolant flow -&gt; building's water supply.
No air cooling, just liquid to liquid to liquid, and the liquids are isolated from each other via heat exchangers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130006</parent>
</comment>
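To make the liquid-to-liquid-to-liquid path described in the comment above concrete, here is a minimal Python sketch that treats each stage as a simple coolant loop and sizes its flow with the steady-state relation Q = m_dot * cp * dT. Every number (module wattage, modules per rack, specific heats, temperature rises) is an assumption for illustration, not an Iceotope specification:

```python
# Illustrative sketch of the path described above: sealed module coolant -> rack loop
# -> building water, each stage joined to the next by a heat exchanger.
# Steady-state sizing uses Q = m_dot * cp * dT; every number here is an assumption.

from dataclasses import dataclass

@dataclass
class Loop:
    name: str
    cp: float        # specific heat of the fluid, J/(kg*K)
    delta_t: float   # allowed temperature rise across the loop, K
    heat_w: float    # heat the loop must carry, W

    def flow_kg_s(self) -> float:
        # m_dot = Q / (cp * dT)
        return self.heat_w / (self.cp * self.delta_t)

MODULE_W = 300.0         # assumed heat load of one sealed server module
MODULES_PER_RACK = 48    # assumed modules per rack

loops = [
    Loop("module submersion liquid", cp=1100.0, delta_t=10.0, heat_w=MODULE_W),
    Loop("rack coolant loop", cp=4186.0, delta_t=8.0, heat_w=MODULE_W * MODULES_PER_RACK),
    Loop("building water system", cp=4186.0, delta_t=5.0, heat_w=MODULE_W * MODULES_PER_RACK),
]

for loop in loops:
    print(f"{loop.name}: {loop.flow_kg_s() * 1000:.0f} g/s to carry {loop.heat_w:.0f} W")
```

The point of the sketch is only that each loop can be sized independently once you fix how much temperature rise you will tolerate across it; the heat exchangers between loops keep the fluids isolated from each other, exactly as described above.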
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135490</id>
	<title>PCs are not quite so fragile</title>
	<author>Chemisor</author>
	<datestamp>1258454460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I had my own water cooling experiment about ten years ago. I had a two processor Athlon board and made two aluminium waterblocks for it. Since my metalworking skill was pretty low (and I was limited to hand tools), the blocks leaked, necessitating several patches. First with duct tape (:-), then with plumber's caulk, and finally by covering the whole thing with fiberglass epoxy, which plugged it up. Up to that time I had a nice little waterfall going from the waterblocks down onto the graphics card (a Radeon), onto the network card below that, and finally pooling at the bottom of the case. Surprisingly, the computer kept on working just fine for years, in spite of being constantly drenched. Then I got sick of messing with the plumbing and installed a fan, but then the motherboard failed after only a few months. Go figure.</p></htmltext>
<tokenext>I had my own water cooling experiment about ten years ago .
I had a two processor Athlon board and made two aluminium waterblocks for it .
Since my metalworking skill was pretty low ( and I was limited to hand tools ) , the blocks leaked , necessitating several patches .
First with duct tape ( : - ) , then with plumber 's caulk , and finally by covering the whole thing with fiberglass epoxy , which plugged it up .
Up to that time I had a nice little waterfall going from the waterblocks down onto the graphics card ( a Radeon ) , onto the network card below that , and finally pooling at the bottom of the case .
Surprisingly , the computer kept on working just fine for years , in spite of being constantly drenched .
Then I got sick of messing with the plumbing and installed a fan , but then the motherboard failed after only a few months .
Go figure .</tokentext>
<sentencetext>I had my own water cooling experiment about ten years ago.
I had a two processor Athlon board and made two aluminium waterblocks for it.
Since my metalworking skill was pretty low (and I was limited to hand tools), the blocks leaked, necessitating several patches.
First with duct tape (:-), then with plumber's caulk, and finally by covering the whole thing with fiberglass epoxy, which plugged it up.
Up to that time I had a nice little waterfall going from the waterblocks down onto the graphics card (a Radeon), onto the network card below that, and finally pooling at the bottom of the case.
Surprisingly, the computer kept on working just fine for years, in spite of being constantly drenched.
Then I got sick of messing with the plumbing and installed a fan, but then the motherboard failed after only a few months.
Go figure.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130642</id>
	<title>Doesn't look practical</title>
	<author>YesIAmAScript</author>
	<datestamp>1258481040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Look at the cross section photo. This dispenses completely with convection (air flow) and instead designs the system for direct physical contact from the heat sink to the components. Then the water flows behind the heat sink to take the heat away from that.</p><p>The problem is that means that you have to make a heat sink with varying height "fingers" on it to meet every component that produces heat (which is all of them), which means every time you change a component you have to redo the heat sink. And of course if you change the motherboard you also have to. With components available from multiple sources (second sourcing) and changing spec mid-model for cost-reduction, you can expect the profile of the heat sink to change frequently during the life of a model. And of course, you probably need to put heat sink goop on a lot of components, that might make enough surface tension that you'd have trouble getting it apart to service it.</p><p>Although this is workable, it seems unlikely it would ever be cost-effective. It'd probably be smarter to have certain major (heat-producing) components cooled by direct contact and a plenum for the rest that uses convection to get heat to a radiator-like assembly on the heat sink (except it isn't radiating here, it's absorbing heat).</p><p>I think water cooling is likely for servers in the future. Even end-to-end water heat exchange to the atmosphere like this proposes, instead of transferring the heat to the room air and then taking it out with air handlers might be the future. But I'm not sure these guys have the right strategy at the bottom level.</p></htmltext>
<tokenext>Look at the cross section photo .
This dispenses completely with convection ( air flow ) and instead designs the system for direct physical contact from the heat sink to the components .
Then the water flows behind the heat sink to take the heat away from that .
The problem is that means that you have to make a heat sink with varying height " fingers " on it to meet every component that produces heat ( which is all of them ) , which means every time you change a component you have to redo the heat sink .
And of course if you change the motherboard you also have to .
With components available from multiple sources ( second sourcing ) and changing spec mid-model for cost-reduction , you can expect the profile of the heat sink to change frequently during the life of a model .
And of course , you probably need to put heat sink goop on a lot of components , that might make enough surface tension that you 'd have trouble getting it apart to service it .
Although this is workable , it seems unlikely it would ever be cost-effective .
It 'd probably be smarter to have certain major ( heat-producing ) components cooled by direct contact and a plenum for the rest that uses convection to get heat to a radiator-like assembly on the heat sink ( except it is n't radiating here , it 's absorbing heat ) .
I think water cooling is likely for servers in the future .
Even end-to-end water heat exchange to the atmosphere like this proposes , instead of transferring the heat to the room air and then taking it out with air handlers might be the future .
But I 'm not sure these guys have the right strategy at the bottom level .</tokentext>
<sentencetext>Look at the cross section photo.
This dispenses completely with convection (air flow) and instead designs the system for direct physical contact from the heat sink to the components.
Then the water flows behind the heat sink to take the heat away from that.
The problem is that means that you have to make a heat sink with varying height "fingers" on it to meet every component that produces heat (which is all of them), which means every time you change a component you have to redo the heat sink.
And of course if you change the motherboard you also have to.
With components available from multiple sources (second sourcing) and changing spec mid-model for cost-reduction, you can expect the profile of the heat sink to change frequently during the life of a model.
And of course, you probably need to put heat sink goop on a lot of components, that might make enough surface tension that you'd have trouble getting it apart to service it.
Although this is workable, it seems unlikely it would ever be cost-effective.
It'd probably be smarter to have certain major (heat-producing) components cooled by direct contact and a plenum for the rest that uses convection to get heat to a radiator-like assembly on the heat sink (except it isn't radiating here, it's absorbing heat).
I think water cooling is likely for servers in the future.
Even end-to-end water heat exchange to the atmosphere like this proposes, instead of transferring the heat to the room air and then taking it out with air handlers might be the future.
But I'm not sure these guys have the right strategy at the bottom level.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132114</id>
	<title>Yay! Water and electricity!</title>
	<author>arctic19</author>
	<datestamp>1258486500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What could possibly go wrong?</htmltext>
<tokenext>What could possibly go wrong ?</tokentext>
<sentencetext>What could possibly go wrong?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30136588</id>
	<title>Re:A few questions</title>
	<author>JohnPombrio</author>
	<datestamp>1258458780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Agree. Look at Google's bare bones servers without even a UPS needed. Got to be a hell of a lot cheaper than this monstrosity. And of course this dual liquid piping will NEVER need maintenance or foul up or have a pump break, right? And there are no places for hard drives. 5 layers of boards and covers, two complete liquid cooling loops, single pump for a row of servers, what could possibly go wrong?</htmltext>
<tokenext>Agree .
Look at Google 's bare bones servers without even a UPS needed .
Got to be a hell of a lot cheaper than this monstrosity .
And of course this dual liquid piping will NEVER need maintenance or foul up or have a pump break , right ?
And there are no places for hard drives .
5 layers of boards and covers , two complete liquid cooling loops , single pump for a row of servers , what could possibly go wrong ?</tokentext>
<sentencetext>Agree.
Look at Google's bare bones servers without even a UPS needed.
Got to be a hell of a lot cheaper than this monstrosity.
And of course this dual liquid piping will NEVER need maintenance or foul up or have a pump break, right?
And there are no places for hard drives.
5 layers of boards and covers, two complete liquid cooling loops, single pump for a row of servers, what could possibly go wrong?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130094</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135648</id>
	<title>Wi-fi cooling?</title>
	<author>Anonymous</author>
	<datestamp>1258454940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why not cool data centers remotely, via wi-fi? The heat could be transferred wirelessly to another location far far away.</p></htmltext>
<tokenext>Why not cool data centers remotely , via wi-fi ?
The heat could be transferred wirelessly to another location far far away .</tokentext>
<sentencetext>Why not cool data centers remotely, via wi-fi?
The heat could be transferred wirelessly to another location far far away.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30133948</id>
	<title>Re:Quick Release</title>
	<author>TheGreatDonkey</author>
	<datestamp>1258449480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I agree.  Water cooling of a data center has a history.  The only thing I see here is they are attempting to bring the water in a small, scalable, "standardized" manner to each blade.  <br> <br>
I worked for a large investment company some time ago, and we had an "older" data center that was originally designed to house mainframes and used a pool to hold water for cooling.  A side benefit of the pool was that employees could use it for swimming, and the water was at quite an agreeable temperature.  The benefit here (besides the kosher swimming) is that component failure impact can be minimized, and the cross contamination much more controlled.  It was converted over the years to support servers of today, and last I knew of about 7-8 years ago, they were replacing some of the main pumps and were extending the life.  The nice thing in the updated design was that standard commodity x86 HP servers were being used in the room, requiring no fancy server hardware re-designs.</htmltext>
<tokenext>I agree .
Water cooling of a data center has a history .
The only thing I see here is they are attempting to bring the water in a small , scalable , " standardized " manner to each blade .
I worked for a large investment company some time ago , and we had an " older " data center that was originally designed to house mainframes and used a pool to hold water for cooling .
A side benefit of the pool was that employees could use it for swimming , and the water was at quite an agreeable temperature .
The benefit here ( besides the kosher swimming ) is that component failure impact can be minimized , and the cross contamination much more controlled .
It was converted over the years to support servers of today , and last I knew of about 7-8 years ago , they were replacing some of the main pumps and were extending the life .
The nice thing in the updated design was that standard commodity x86 HP servers were being used in the room , requiring no fancy server hardware re-designs .</tokentext>
<sentencetext>I agree.
Water cooling of a data center has a history.
The only thing I see here is they are attempting to bring the water in a small, scalable, "standardized" manner to each blade.
I worked for a large investment company some time ago, and we had an "older" data center that was originally designed to house mainframes and used a pool to hold water for cooling.
A side benefit of the pool was that employees could use it for swimming, and the water was at quite an agreeable temperature.
The benefit here (besides the kosher swimming) is that component failure impact can be minimized, and the cross contamination much more controlled.
It was converted over the years to support servers of today, and last I knew of about 7-8 years ago, they were replacing some of the main pumps and were extending the life.
The nice thing in the updated design was that standard commodity x86 HP servers were being used in the room, requiring no fancy server hardware re-designs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135146</id>
	<title>I spent 4 years doing something similar</title>
	<author>John Sokol</author>
	<datestamp>1258453320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> I had a start up, Nisvara Inc. 2002 - 2006. We had water cooling and could run whole server rooms with no air conditioning at all!  Even had a partnership with NASA Ames.</p><p>
&nbsp; Our system used sealed copper tube, and something I called a thermal ground, basically a copper or aluminum plate with the tube bonded to it. Then shims that connect the heat sources, the CPU, Northbridge, Southbridge and CPU power supply, and possibly RAM. The power supply and hard drives were also connected to the plate to remove the heat.</p><p>
&nbsp; We had many meetings with all the big players, Intel, Siemens, Sun, Maxtor, Pac Bell to name a few. None would allow water cooling in data centers. The liability for damaged equipment is too high.</p><p>We did come up with a lower-cost Fluorinert-like solution that we could use, but getting them to eliminate air conditioning was still a very hard sell at the time. So was including the extra plumbing and whatnot.</p><p>Maybe today they might start to change their attitude, but I am not so sure about it.</p><p><a href="http://web.archive.org/web/20040901070743/http://www.nisvara.com/" title="archive.org">http://web.archive.org/web/20040901070743/http://www.nisvara.com/</a> [archive.org]</p></htmltext>
<tokenext>I had a start up , Nisvara Inc. 2002 - 2006 .
We had water cooling and could run whole server rooms with no air conditioning at all !
Even had a partnership with NASA Ames .
  Our system used sealed copper tube , and something I called a thermal ground , basically a copper or aluminum plate with the tube bonded to it .
Then shims that connect the heat sources , the CPU , Northbridge , Southbridge and CPU power supply , and possibly RAM .
The power supply and hard drives were also connected to the plate to remove the heat .
  We had many meetings with all the big players , Intel , Siemens , Sun , Maxtor , Pac Bell to name a few .
None would allow water cooling in data centers .
The liability for damaged equipment is too high .
We did come up with a lower-cost Fluorinert-like solution that we could use , but getting them to eliminate air conditioning was still a very hard sell at the time .
So was including the extra plumbing and whatnot .
Maybe today they might start to change their attitude , but I am not so sure about it .
http : //web.archive.org/web/20040901070743/http : //www.nisvara.com/ [ archive.org ]</tokentext>
<sentencetext> I had a start up, Nisvara Inc. 2002 - 2006  We had water cooled and could run whole server rooms with no air conditioning at all!
Even had a partnership with NASA Ames.
  Our system used sealed copper tube, and something I called a thermal ground, basically a copper or aluminum plate with the tube bonded to it.
Then shims that connect the heat sources, the CPU, Northbridge, Southbridge and CPU power supply, and possibly RAM.
The power supply and hard drives were also connected to the plate to remove the heat.
  We had many meetings with all the big players, Intel, Siemens, Sun, Maxtor, Pac Bell to name a few.
None would allow water cooling in data centers.
The liability for damaged equipment is too high.
We did come up with a lower-cost Fluorinert-like solution that we could use, but getting them to eliminate air conditioning was still a very hard sell at the time.
So was including the extra plumbing and whatnot.
Maybe today they might start to change their attitude, but I am not so sure about it.
http://web.archive.org/web/20040901070743/http://www.nisvara.com/ [archive.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664</id>
	<title>Re:Yes, but how much does it cost?</title>
	<author>Smidge204</author>
	<datestamp>1258481100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>The idea that the mainboard components are sealed inside a liquid-filled compartment seems like a major point against the system. Extra proprietary vendor lock-in components mean extra costs of owning and operating, which probably offset any savings from cooling... if any.</p><p>I'm skeptical that it will significantly reduce cooling costs (compared to, say, a chilled cabinet system) because the total cooling load stays the same. If you're generating a billion BTUs of heat you still need to remove a billion BTUs of heat. Any savings will only be from the higher energy densities water allows versus air and maybe initial installation.</p><p>Plus, based on their exploded view, there are no fewer than three heat exchanges before it even gets out of the cabinet: Chip to liquid (via heat sink), submersion liquid to module liquid, module liquid to system liquid. Each time you go through an exchange, your temperature gradient goes up.</p><p>What they need is a system that is compatible with commodity components to leverage low cost hardware against lower cost cooling. Why not fit water blocks directly to existing mainboard layouts and circulate chilled water from the main loop directly through them via manifolds and pump at each rack? You can still enclose the mainboard and cooling block in a sealed, insulated compartment to eliminate condensation problems, but not being submerged means you can actually repair/upgrade the modules.<br>=Smidge=</p></htmltext>
<tokenext>The idea that the mainboard components are sealed inside a liquid-filled compartment seems like a major point against the system .
Extra proprietary vendor lock-in components mean extra costs of owning and operating , which probably offset any savings from cooling... if any .
I 'm skeptical that it will significantly reduce cooling costs ( compared to , say , a chilled cabinet system ) because the total cooling load stays the same .
If you 're generating a billion BTUs of heat you still need to remove a billion BTUs of heat .
Any savings will only be from the higher energy densities water allows versus air and maybe initial installation .
Plus , based on their exploded view , there are no fewer than three heat exchanges before it even gets out of the cabinet : Chip to liquid ( via heat sink ) , submersion liquid to module liquid , module liquid to system liquid .
Each time you go through an exchange , your temperature gradient goes up .
What they need is a system that is compatible with commodity components to leverage low cost hardware against lower cost cooling .
Why not fit water blocks directly to existing mainboard layouts and circulate chilled water from the main loop directly through them via manifolds and pump at each rack ?
You can still enclose the mainboard and cooling block in a sealed , insulated compartment to eliminate condensation problems , but not being submerged means you can actually repair/upgrade the modules .
= Smidge =</tokentext>
<sentencetext>The idea that the mainboard components are sealed inside a liquid-filled compartment seems like a major point against the system.
Extra proprietary vendor lock-in components mean extra costs of owning and operating, which probably offset any savings from cooling... if any.
I'm skeptical that it will significantly reduce cooling costs (compared to, say, a chilled cabinet system) because the total cooling load stays the same.
If you're generating a billion BTUs of heat you still need to remove a billion BTUs of heat.
Any savings will only be from the higher energy densities water allows versus air and maybe initial installation.
Plus, based on their exploded view, there are no fewer than three heat exchanges before it even gets out of the cabinet: Chip to liquid (via heat sink), submersion liquid to module liquid, module liquid to system liquid.
Each time you go through an exchange, your temperature gradient goes up.
What they need is a system that is compatible with commodity components to leverage low cost hardware against lower cost cooling.
Why not fit water blocks directly to existing mainboard layouts and circulate chilled water from the main loop directly through them via manifolds and pump at each rack?
You can still enclose the mainboard and cooling block in a sealed, insulated compartment to eliminate condensation problems, but not being submerged means you can actually repair/upgrade the modules.
=Smidge=</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996</parent>
</comment>
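The temperature-gradient point in the comment above can be put in rough numbers: every heat-transfer step needs some approach temperature to move heat, so the chip ends up above the facility water temperature by the sum of those steps. A minimal Python sketch with assumed values (none of these temperatures or approach figures come from Iceotope):

```python
# Each heat-transfer step needs a temperature difference (an "approach") to move heat,
# so the chip sits above the facility water temperature by the sum of those approaches.
# All values below are illustrative assumptions.

facility_water_c = 25.0  # assumed facility water supply temperature

approaches_c = {
    "chip -> heat sink -> submersion liquid": 15.0,
    "submersion liquid -> module water loop": 5.0,
    "module water loop -> building water": 5.0,
}

chip_temp_c = facility_water_c + sum(approaches_c.values())
print(f"Chip temperature with three exchanges: {chip_temp_c:.0f} C "
      f"(facility water at {facility_water_c:.0f} C)")

# Removing one liquid-to-liquid exchange (e.g. direct water blocks fed from the rack
# manifold, as suggested above) removes that exchange's approach from the stack.
one_less = chip_temp_c - approaches_c["submersion liquid -> module water loop"]
print(f"With one fewer exchange: {one_less:.0f} C")
```

Dropping an exchange, as the comment suggests with direct water blocks, shrinks the stack by whatever that exchange's approach temperature was; the trade-off is losing the sealed, serviceable module.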
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132768
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131962
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130642
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30133948
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134186
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130498
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132516
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130634
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30142480
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30139342
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30137068
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131852
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131524
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130006
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131130
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30137172
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131020
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130592
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130006
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130626
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130724
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130460
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131004
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130592
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30136588
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130094
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131818
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131050
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130642
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135490
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132232
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130094
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30138188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130634
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30142868
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130694
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130746
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132814
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130112
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_17_1559206_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131370
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130060
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130724
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130356
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130626
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132152
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130796
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130888
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130536
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131852
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135490
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131130
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131370
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131818
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130094
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132232
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30136588
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130408
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30135628
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134186
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30133948
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130676
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30129996
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130664
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30139342
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30142480
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132768
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30137068
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130498
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30137172
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130616
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130128
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130086
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130460
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130746
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131678
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130592
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131020
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131004
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130634
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30138188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132516
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130694
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30142868
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130810
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130354
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130112
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30132814
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130642
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131050
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131962
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130512
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30134868
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_17_1559206.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130006
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30130606
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_17_1559206.30131524
</commentlist>
</conversation>
