Discussion:
rack power question
Patrick Giagnocavo
2008-03-23 02:02:49 UTC
Permalink
Hopefully this classifies as on-topic...

I am discussing with some investors the possible setup of new datacenter
space.

Obviously we want to be able to fill a rack, and in order to do so, we
need to provide enough power to each rack.

Right now we are in spreadsheet mode evaluating different scenarios.

Are there cases where more than 6000W per rack would be needed?

(We are not worried about cooling due to the special circumstances of
the space.)

Would someone pay extra for > 7KW in a rack? What would be the maximum
you could ever see yourself needing in order to power all 42U?

Cordially,

Patrick Giagnocavo
***@zill.net
Alex Rubenstein
2008-03-23 02:32:05 UTC
Permalink
> Obviously we want to be able to fill a rack, and in order to do so, we
> need to provide enough power to each rack.

Which is the hardest part of designing and running a datacenter.

> Are there cases where more than 6000W per rack would be needed?

Yes. We are seeing 10kW racks, and requests for 15 to 20kW are starting
to come in. Think blades.

> (We are not worried about cooling due to the special circumstances of
> the space.)

You've already lost.


> Would someone pay extra for > 7KW in a rack? What would be the maximum
> you could ever see yourself needing in order to power all 42U?

These days, there is not (or should not be) a connection between rack
pricing and what you charge for power.

As for how much, ask HP or IBM or whoever how many blades they can shove
in 42U.
Alex Rubenstein
2008-03-24 00:53:05 UTC
Permalink
> A very interesting thread... I believe that the cost of power will
> continue to climb well into the future and that it would be foolish to
> build any new infrastructure without incorporating the ability to pay
> for what you use. So this means being able to measure the power
> consumption for each server or to aggregate it to the individual
> customer. Then factor in your cost of providing that power and the
> associated cooling loads, add your margin and bill the user
> accordingly. Paying for what you use is inherently fair - and I think
> of the colo provider also as a technically competent provider of
> "clean", highly-reliable power.

Agreed. About four years ago, we saw the writing on the wall. The days
of $750 racks which include 20 amps of 120V are long gone. It was fine
when the customer consumed 2 amps. No one really thought about the
cross-subsidization issues.

Today, we sell racks for $500, but include no energy; every outlet to
every rack in our datacenter is metered, and we sell energy at $n/amp.
It's that simple. In effect, you can have as many outlets as you like
(one-time charge for them), but you pay for the consumption.

No question the cost of energy is going up and up; we've seen a 75% to 100%
increase in 5 years in Northern NJ. This is why we plan on investing
heavily in solar in the next year or so. With returns coming as fast as
8 years or less, you are crazy not to look into this.

> If you don't have a pay-as-you-go billing model, then what incentives
> are there for the users to consolidate apps onto fewer boxes or to
> enable the power saving features of the box (or operating system),
> which are becoming more widely available over time? Answer: none,
> and human nature will simply be lazy about power saving.

Even so, this doesn't seem to factor in. Customers have computing needs.
Rarely do you see a customer say, "wow, this amp is costing me $n, so I
am not going to run this SQL server."
Derrick
2008-03-24 04:39:24 UTC
Permalink
About 8 months ago we were faced with an expansion issue where our
datacenter upgrade was delayed due to permits. At the time Sun had just
announced their blackbox, now called the S20. During the road trip I got
to walk through one of these. I found the cooling aspect to be very
interesting. It works on front-to-back cooling, but has what I would call
radiators sandwiched between each set of racks. From my
non-technical point of view on HVAC this seemed to reduce a large amount
of wasted cooling seen in large room datacenters. Is this something we
might see in future fixed datacenters, or is this limited to the portable
data center due to technical limitations? Due to cost we didn't get to
use one, but I found the idea very interesting.

Derrick
Alex Rubenstein
2008-03-24 04:43:38 UTC
Permalink
> About 8 months ago we were faced with an expansion issue where our
> datacenter upgrade was delayed due to permits. At the time Sun had just
> announced their blackbox, now called the S20. During the road trip I got
> to walk through one of these. I found the cooling aspect to be very
> interesting. It works on front-to-back cooling, but has what I would call
> radiators sandwiched between each set of racks. From my
> non-technical point of view on HVAC this seemed to reduce a large amount
> of wasted cooling seen in large room datacenters. Is this something we
> might see in future fixed datacenters, or is this limited to the portable
> data center due to technical limitations? Due to cost we didn't get to
> use one, but I found the idea very interesting.

I take exception to "wasted cooling" ... Are you saying the laws of
thermodynamics don't apply to heat generated by servers?

I can understand and appreciate the thought process of bad airflow
design, i.e., no hot or cold rows, or things of that nature. But you can't
really waste AC.

We're doing a small test deployment of a dozen or so Chatsworth chimney
style cabinets, with ducted returns. When the system is up and running I
will comment on it more, but it seems to make a lot more sense than what
we all have been doing for the last 10 years.

As for the Sun S20, is it still painted black? :)
Michael Brown
2008-03-25 22:12:47 UTC
Permalink
Alex Rubenstein wrote:
> As for how much, ask HP or IBM or whoever how many blades they can shove
> in 42U.
>
Coming from the "implementing the server gear" side of things...

If we're talking IBM gear (as that's what I know) the magic numbers are:
- 4 x BladeCenter H chassis in a 42U rack
- 14 x blades per chassis
- 56 blades per rack

Each chassis has 4 x 2900W power supplies (plus two 1KW fans), so that's
13600W per chassis. 54400W per rack total power. (There's something to
think about - are you talking about providing 6KW of capacity, or 6KW of
a continual load to each rack?)

Naturally, that's redundant, so theoretical maximum usage per rack is
half that, 23200W. Plus, the blades available today don't draw enough to
fully load those power supplies. In the config I'm looking at now, a
single blade (2x Quad-core 2GHz Intel, 4GB memory, no hard drives) draws
232W max, 160W lightly loaded. Let's pull a number of 195W out of the
air to use.

The chassis itself draws 420W (assuming 4 I/O modules) plus a
hand-waving 400W for the fans, so a magic number of
(195*14+400+420=3550W) times 4 gives 14.2kW for a loaded rack. But you
need to make 54.4kW of power available, which is relatively immense.
You'll find this requirement in most blade scenarios, so be prepared for
it. The plus side is that if you are the hardware provider in a co-lo
scenario and you own the chassis, you can meter and bill your customers
for the individual blade power (and a magic coefficient for cooling
cost) they use if you so decide.
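
For those following the arithmetic, here is a quick back-of-the-envelope
sketch of the figures above in Python (note that the 23,200W "redundant"
number is half of the combined PSU capacity, i.e. 2 of the 4 supplies per
chassis, with the fan allowance left out):

    # Back-of-the-envelope rack power for 4 x BladeCenter H chassis,
    # using the figures quoted in the message above.
    CHASSIS_PER_RACK = 4
    BLADES_PER_CHASSIS = 14
    PSU_W = 2900                     # each chassis has 4 of these
    FAN_W = 1000                     # each chassis has 2 of these
    BLADE_W = 195                    # per-blade average "pulled out of the air"
    CHASSIS_OVERHEAD_W = 420 + 400   # chassis with 4 I/O modules + hand-waved fan draw

    nameplate_per_chassis = 4 * PSU_W + 2 * FAN_W                 # 13,600 W
    nameplate_per_rack = CHASSIS_PER_RACK * nameplate_per_chassis # 54,400 W
    redundant_psu_capacity = CHASSIS_PER_RACK * 2 * PSU_W         # 23,200 W (2 of 4 supplies)
    expected_load = CHASSIS_PER_RACK * (BLADES_PER_CHASSIS * BLADE_W + CHASSIS_OVERHEAD_W)

    print(nameplate_per_rack)        # 54400
    print(redundant_psu_capacity)    # 23200
    print(expected_load)             # 14200 -> ~14.2 kW for a loaded rack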

So, as many others have already said, over 8kW in a rack is a
no-brainer. Getting those BTUs out of the rack into the datacenter is
easy to do (at least on the BladeCenter H). It's getting those BTUs out
of the datacenter that's usually a problem, except in your special
situation. Which I also am curious about.

M
V***@vt.edu
2008-03-23 02:34:11 UTC
Permalink
Ben Butler
2008-03-23 17:10:55 UTC
Permalink
m***@bt.com
2008-03-23 21:15:51 UTC
Permalink
> Surely we should be asking exactly what is driving the demand for
> high density computing and in which market sectors, and is
> this actually the best technical solution to solve the
> problem. I don't care if IBM, HP etc etc want to keep
> selling new shiny boxes each year because they are telling us
> we need them - do we really? ...

Perhaps not. But until projects like <http://www.lesswatts.org/>
show some major success stories, people will keep demanding
big blade servers.

Given that power and HVAC are such key issues in building
big datacenters, and that fiber to the office is now a reality
virtually everywhere, one wonders why someone doesn't start
building out distributed data centers. Essentially, you put
mini data centers in every office building, possibly by
outsourcing the enterprise data centers. Then, you have a
more tractable power and HVAC problem. You still need to
scale things, but since each data center is roughly comparable
in size it is a lot easier than trying to build out one
big data center.

If you move all the enterprise services onto virtual servers
then you can free up space for colo/hosting services.

You can even still sell to bulk customers because few will
complain that they have to deliver equipment to three
data centers, one two blocks west, and another three blocks
north. X racks spread over 3 locations will work for everyone
except people who need the physical proximity for clustering
type applications.

--Michael Dillon
Alex Rubenstein
2008-03-24 00:47:54 UTC
Permalink
> > Surely we should be asking exactly what is driving the demand for
> > high density computing and in which market sectors, and is
> > this actually the best technical solution to solve the
> > problem. I don't care if IBM, HP etc etc want to keep
> > selling new shiny boxes each year because they are telling us
> > we need them - do we really? ...
>
> Perhaps not. But until projects like <http://www.lesswatts.org/>
> show some major success stories, people will keep demanding
> big blade servers.

Disagreed. Customers who don't run datacenters generally don't understand
the issues around high density computing, and most enterprises I deal
with don't care about the cost. More and Faster is their vocabulary.

> If you move all the enterprise services onto virtual servers
> then you can free up space for colo/hosting services.

We do quite a bit of VMware and Xen, both our own and our customers'. We
have found power consumption still goes up, simply because there is
always a backlog of the need for resources. In other words, it's almost
as if "if you build it they will come" relates to CPU cycles as well. I have
never seen a decrease in customer power consumption when they have
virtualized. They still have more iron, with a lot more VMs.

> You can even still sell to bulk customers because few will
> complain that they have to deliver equipment to three
> data centers, one two blocks west, and another three blocks
> north. X racks spread over 3 locations will work for everyone
> except people who need the physical proximity for clustering
> type applications.

Send me those customers, because I haven't seen them. Especially the
ones with lots of Fibre Channel and InfiniBand.
Joel Jaeggli
2008-03-23 21:23:51 UTC
Permalink
Ben Butler wrote:
> There comes a point where you cant physically transfer the energy using air
> any more - not less you wana break the laws a physics captin (couldn't
> resist sorry) - to your DX system, gas, then water, then in rack (expensive)
> cooling, water and CO2. Sooner or later we will sink the whole room in oil
> much like they used to do with Cray's.

The problem there is actually the thermal gradient involved. The fact of
the matter is you're using ~15C air to keep equipment cooled to ~30C.
Your car is probably in the low 20% range as far as thermal efficiency
goes, is generating on the order of 200kW, and has an engine compartment
enclosing a volume of roughly half a rack... All that waste heat is
removed by air, the difference being that it runs at around 250C with
some hot spots approaching 900C.

Increase the width of the thermal gradient and you can pull much more
heat out of the rack without moving more air.

15 years ago I would have told you that gallium arsenide would be a lot
more common in general purpose semiconductors for precisely this reason,
but silicon has proved superior along a number of other dimensions.

> Alternatively we might need to fit the engineers with crampons, climbing
> ropes and ice axes to stop them being blown over by the 70 mph winds in your
> datacenter as we try to shift the volumes of air necessary to transfer the
> energy back to the HVAC for heat pump exchange to remote chillers on the
> roof.
>
> In my humble experience, the problems are 1> Heat, 2> Backup UPS, 3> Backup
> Generators, 4> LV/HV Supply to building.
>
> While you will be very constrained by 4 in terms of upgrades unless spending
> a lot of money to upgrade - the practicalities of 1, 2 & 3 mean that you will
> have spent a significant amount of money getting to the point where you need
> to worry about 4.
>
> Given you are not worried about 1, I wonder about the scale of the
> application or your comprehension of the problem.
>
> The bigger trick is planning for upgrades of a live site where you need to
> increase air con, UPS and generators.
>
> Economically, that 10,000KW of electricity has to be paid for in addition to
> any charge for the rack space. Plus margined, credit risked and cash
> flowed. The relative charge for the electricity consumption has
> less to do with our ability to deliver and cool it in a single rack versus the
> cost of having four racks in a 2,500KW datacenter and paying for the same
> amount of electric. Is the racking charge really the significant expense
> any more?
>
> For the sake of argument, 4 racks at £2500 pa in a 2500KW datacenter or 1
> rack at £10,000 pa in a 10000KW datacenter - which would you rather have?
> Is the cost of delivering (and cooling) 10000KW to a rack more or less than
> 400% of the cost of delivering 2500KW per rack? I submit that it is more
> than 400%. What about the hardware - per mip / cpu horsepower am I paying
> more or less in a conventional 1U pizza box format or a high density blade
> format - I submit the blades cost more in Capex and there is no opex saving.
> What is the point of having a high density server solution if I can only half
> fill the rack?
>
> I think the problem is people (customers) on the whole don't understand the
> problem; they can grasp the concept of paying for physical space, but
> can't wrap their heads around the more abstract concept of electricity
> consumed by what you put in the space and paying for that to come up with a
> TCO for comparisons. So they simply see the entire hosting bill and
> conclude they have to stuff as many processors as possible into the rack
> space, and if that is a problem it is one for the colo facility to deliver at
> the same price.
>
> I do find myself increasingly feeling that the current market direction is
> simply stupid and has had far too much input from sales and marketing people.
>
> Let alone the question of whether the customer's business is efficient in
> terms of the amount of CPU compute power required for their business to
> generate $1 of customer sales/revenue.
>
> Just because some colo customers have cr*ppy business models delivering
> marginal benefit for very high computer overheads and an inability to pay
> for things in a manner that reflects their worth because they are incapable
> of extracting the value from them. Do we really have to drag the entire
> industry down to the lowest common denominator of f*ckwit?
>
> Surely we should be asking exactly what is driving the demand for high density
> computing and in which market sectors, and is this actually the best
> technical solution to solve the problem. I don't care if IBM, HP etc etc
> want to keep selling new shiny boxes each year because they are telling us
> we need them - do we really? ...?
>
> Kind Regards
>
> Ben
>
>
> -----Original Message-----
> From: owner-***@merit.edu [mailto:owner-***@merit.edu] On Behalf Of
> ***@vt.edu
> Sent: 23 March 2008 02:34
> To: Patrick Giagnocavo
> Cc: ***@nanog.org
> Subject: Re: rack power question
Marshall Eubanks
2008-03-24 05:12:40 UTC
Permalink
The interesting thing is how in a way we seem to have come full
circle. I am sure lots of people can remember large rooms full of racks
of vacuum tube equipment, which required serious power and cooling.
On one NASA project I worked on, when the vacuum tube stuff was replaced
by solid state in the late 1980's, there was lots of empty floor space
and we marveled at how much power we were saving. In fact, after the
switch there was almost 2 orders of magnitude too much cooling for the
new equipment (200 tons to 5 IIRC), and we had to spend good money to
replace the old cooling system with a smaller one. Now, we seem to have
expanded to more than fill the previous tube-based power and space
requirements, and I suspect some people wish they could get their old
cooling plants back.

Regards,
Marshall

Sean Donelan
2008-03-23 02:54:15 UTC
Permalink
On Sat, 22 Mar 2008, Patrick Giagnocavo wrote:
> Would someone pay extra for > 7KW in a rack? What would be the maximum you
> could ever see yourself needing in order to power all 42U?

As you recognize, it's not an engineering question; it's an economic
question. Notice how Google's space/power philosophy changed between
leveraging other people's space/power, and now that they own their own
space/power.

Existing equipment could exceed 20kW in a rack, and some folks are
planning for equipment exceeding 30kW in a rack.

But things get more interesting when you look at the total economics
of a data center. 8kW/rack is the new "average," but that includes
a lot of assumptions. If someone else is paying, I want it and more.
If I'm paying for it, I discover I can get by with less.
Joe Greco
2008-03-23 03:53:12 UTC
Permalink
> On Sat, 22 Mar 2008, Patrick Giagnocavo wrote:
> > Would someone pay extra for > 7KW in a rack? What would be the maximum you
> > could ever see yourself needing in order to power all 42U?
>
> As you recognize, it's not an engineering question; it's an economic
> question. Notice how Google's space/power philosophy changed between
> leveraging other people's space/power, and now that they own their own
> space/power.
>
> Existing equipment could exceed 20kW in a rack, and some folks are
> planning for equipment exceeding 30kW in a rack.
>
> But things get more interesting when you look at the total economics
> of a data center. 8kW/rack is the new "average," but that includes
> a lot of assumptions. If someone else is paying, I want it and more.
> If I'm paying for it, I discover I can get by with less.

That may not be the correct way to look at it.

There's a very reasonable argument to be made that the artificial economic
models used by colocation providers have created this monster to begin with.

The primary motivation for many customers to put more stuff in a single
rack is that the cost for a rack subsidizes at least a portion of the power
and cooling costs. A single rack with two 20A circuits typically costs less
than two racks with a 20A circuit each. To some extent, this makes sense.
However, it often costs *much* less for the single rack with two 20A
circuits.

Charging substantially less for rack space, even offset by higher costs for
power, would encourage a lot of colo customers to "spread the load" around
and not feel as obligated to maximize the use of space. That would in turn
reduce the tendency for there to be excessive numbers of hot spots.

The economic question of how to build your pricing model ultimately becomes
an engineering question, because it becomes progressively more difficult to
provide power and cooling as density increases.

Or, to quote you, in an entirely different context:

> If I'm paying for it, I discover I can get by with less.

The problem is that this is currently true for values of "it" where "it"
equals "racks."

... J
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
david raistrick
2008-03-23 04:26:37 UTC
Permalink
On Sat, 22 Mar 2008, Joe Greco wrote:

> Charging substantially less for rack space, even offset by higher costs for
> power, would encourage a lot of colo customers to "spread the load" around
> and not feel as obligated to maximize the use of space. That would in turn
> reduce the tendency for there to be excessive numbers of hot spots.

I wonder if we're to the point yet where we should just charge for power
and give the space away "free"...

When I'm shopping for colo that's pretty much the way I look at it. Power
determines space. I need 80,000W of power at the breaker, so I need
800sqft x $15 in facility A, and 320sqft x $40 in facility B.

I can fit my 8 racks into either the 320sqft or into the 800. If I'm
doing the 800, I'll probably spend a bit more up front and use 12 or 14
racks, to keep my density down. A bit more cost up front, but in the
grand scheme of things 4 or 6 extra racks ($6,000 to $10,000) don't directly
hurt too much. (80kW worth of power usually means you've got well north of
$2M worth of hardware and software being stuffed into the space, in my
experience... but maybe that's because we're an Oracle shop. ;)
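
A quick sketch of that comparison, reading the per-sqft figures as monthly
rates (an assumption on my part):

    # Two ways to house the same 80 kW load: cheap sprawling space vs.
    # expensive dense space. Figures are from the message above.
    power_w = 80_000

    facility_a_sqft, facility_a_rate = 800, 15   # $/sqft
    facility_b_sqft, facility_b_rate = 320, 40   # $/sqft

    cost_a = facility_a_sqft * facility_a_rate   # $12,000
    cost_b = facility_b_sqft * facility_b_rate   # $12,800

    print(cost_a, cost_b)                        # 12000 12800 -- nearly identical bills
    print(power_w / facility_a_sqft)             # 100 W/sqft in the roomier facility
    print(power_w / facility_b_sqft)             # 250 W/sqft in the denser one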


Of course, for those customers still doing super-low-density
boxes (webhosting with lots and lots of desktops), I suppose that model
wouldn't work as well.

ramble

.

--
david raistrick        http://www.netmeister.org/news/learn2quote.html
***@icantclick.org      http://www.expita.com/nomime.html
Patrick Clochesy
2008-03-23 05:45:30 UTC
Permalink
Tony Finch
2008-03-26 18:25:56 UTC
Permalink
On Sun, 23 Mar 2008, david raistrick wrote:

> I wonder if we're to the point yet where we should just charge for power
> and give the space away "free"...

There's at least one small example of someone doing that already:
http://www.jump.net.uk/colo.html

Tony.
--
f.anthony.n.finch <***@dotat.at> http://dotat.at/
BAILEY: CYCLONIC BECOMING SOUTHEASTERLY 5 TO 7, OCCASIONALLY GALE 8, PERHAPS
SEVERE GALE 9 LATER. MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH. RAIN OR
SHOWERS. MODERATE OR GOOD.
Edward B. DREGER
2008-03-23 03:19:14 UTC
Permalink
PG> Date: Sat, 22 Mar 2008 22:02:49 -0400
PG> From: Patrick Giagnocavo

PG> Hopefully this classifies as on-topic...
PG>
PG> I am discussing with some investors the possible setup of new
PG> datacenter space

You might also try the isp-colo.com list.

PG> Are there cases where more than 6000W per rack would be needed?

It depends how one differentiates between "want" and "need".

PG> (We are not worried about cooling due to the special circumstances
PG> of the space.)

ixp.aq? ;-)

PG> Would someone pay extra for > 7KW in a rack?

They should. If they need more than 6kW, their alternative is to pay
for a second rack, which hardly would be free.

PG> What would be the maximum you could ever see yourself needing in
PG> order to power all 42U?

1. For colo, think 1U dual-core servers with 3-4 HDDs.
2. For routers, Google: juniper t640 kw

HTH,
Eddy
--
Everquick Internet - http://www.everquick.net
A division of Brotsman & Dreger, Inc. - http://www.brotsman.com
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 785 865 5885 Lawrence and [inter]national
Phone: +1 316 794 8922 Wichita
________________________________________________________________________
DO NOT send mail to the following addresses:
***@brics.com -*- ***@intc.net -*- ***@everquick.net
Sending mail to spambait addresses is a great way to get blocked.
Ditto for broken OOO autoresponders and foolish AV software backscatter.
John Curran
2008-03-23 03:50:07 UTC
Permalink
At 10:02 PM -0400 3/22/08, Patrick Giagnocavo wrote:
>Hopefully this classifies as on-topic...

>I am discussing with some investors the possible setup of new datacenter space.

>Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.

>Right now we are in spreadsheet mode evaluating different scenarios.

>Are there cases where more than 6000W per rack would be needed?

10K per rack | ~ 400 watts/sqft is a common design point being used
by the large scale colocation/REIT players. It's quite possible to exceed
that with blade servers or high-density storage (Hitachi, EMC, etc) but it'd
take unusual business models today to exceed that on every rack.

>(We are not worried about cooling due to the special circumstances of the space.)

So, even presuming an abundance of cold air right outside the facility,
you are still going to move the equipment-generated heat to chillers
or cooling towers. It is quite likely that your HVAC plant could be
your effective limit in the ability to add power drops.

>Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U?

Again, you can find single rack, 30" deep storage arrays/controllers
that will exceed 20KW, but the hope is that you've got a cabinet or
two of less dense equipment surrounding them. Best thing to do is
find someone in the particular market segment you're aiming for and
ask them for some averages and trends, since it's going to vary widely
depending on webhosting/enterprise data center/content behemoth.

/John
Justin Shore
2008-03-23 04:17:26 UTC
Permalink
This greatly depends on what you want to do with the space. If you're
putting in co-lo space by the square footage footprint then your
requirements will be much less. If you expect a large percentage of it
to be leased out to an enterprise then you should expect the customers
to use every last U in a cabinet before leasing the next cabinet in the
row, i.e. your power usage will be immense.

I did something similar about 2 years ago. We were moving a customer
from one facility to another. We mapped out each cabinet including
server models. I looked up maximum power consumption for each model,
including startup consumption. The heaviest loaded cabinet specced out
at 12,000W. The cabinet was full of old 1U servers. New 1U servers are
the worst-case scenario by far. 12kW is rather low IMHO. Some industry
analysts estimate that the power requirements for high-density
applications scale as high as 40kW.

http://www.servertechblog.com/pages/2007/01/cabinet_level_p.html

There are a few things to remember. Code only permits you to load a
circuit to 80% of its maximum-rated capacity. The remaining 20% is the
safety margin required by the NEC. Knowing this, the 12kW specified
above requires 7x 20A 120V circuits or 5x 30A 120V circuits.
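
The circuit counts above fall straight out of the 80% rule; a minimal sketch:

    import math

    load_w = 12_000          # worst-case cabinet from the example above
    volts = 120
    derate = 0.8             # NEC continuous-load limit

    def circuits_needed(breaker_amps):
        usable_w = breaker_amps * volts * derate
        return math.ceil(load_w / usable_w)

    print(circuits_needed(20))   # 7 -> 7x 20A/120V circuits
    print(circuits_needed(30))   # 5 -> 5x 30A/120V circuits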

You can get 20A and 30A horizontal PDUs for both 120V and 240V. There
are also 208V options. You can also get up to 40A vertical PDUs. One
word of caution about the vertical PDUs: if your cabinets aren't deep
enough in the rear (think J Lo) the power cabling will get in the way of
the rails and other server cabling. There are others but they are less
common.

Also remember that many of the larger servers (such as the Dell 6850s or
6950s) are 240V and will require a pair of dedicated circuits (20A or 30A).

I would also recommend that you look into in-row power distribution
cabinets like the Liebert FDC. This means shorter home-runs for the
large number of circuits you'll be putting in (saving you a bundle in
copper too). It also means less under-floor wiring to work around,
making future changes much easier. Changes in distribution cabinets are
also much easier, safer and less prone to accidents/mistakes than they
are in distribution panels.

Grounding is a topic that is worthy of its own book. Consult an
electrician used to working with data centers. Don't overlook this
critical thing. Standby power sources fall into this topic as well.
How many 3-phase generators are you going to need to keep your UPSs hot?

I'm curious what your cooling plans are. I would encourage you to
consider geothermal cooling though. The efficiencies that geothermal
brings to the table are worth your time to investigate.

Best of luck,
Justin

Lamar Owen
2008-03-24 17:40:52 UTC
Permalink
On Sunday 23 March 2008, Justin Shore wrote:
> There are a few things to remember. Code only permits you to load a
> circuit to 80% of its maximum-rated capacity. The remaining 20% is the
> safety margin required by the NEC. Knowing this, the 12kW specified
> above requires 7x 20A 120V circuits or 5x 30A 120V circuits.

Cord-connected loads can be 50A easily enough; something like a NEMA L21-60P
can give you 18KW in one plug (after 80% derating); if you could use 277V the
L22-60P is available to get you almost 40KW on one plug (again, after the 80%
is factored in; it's almost 50KW at 100% rating). Hubbell makes 60 and 100A
plugs and receptacles if 40KW isn't enough. PDUs for these are more scarce,
but I'm sure Marway would build to suit.
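
A quick sanity check of those connector numbers, taking the L21-60 as a
120/208V three-phase wye circuit and the L22-60 as 277/480V wye
(kW = A x V(line-to-line) x sqrt(3)):

    import math

    def three_phase_kw(amps, line_to_line_volts, derate=0.8):
        # Real power available on a balanced three-phase circuit, unity PF assumed.
        return amps * derate * line_to_line_volts * math.sqrt(3) / 1000

    # Both connectors are 60A.
    print(round(three_phase_kw(60, 208), 1))              # ~17.3 kW -> the "18KW in one plug"
    print(round(three_phase_kw(60, 480), 1))              # ~39.9 kW -> "almost 40KW"
    print(round(three_phase_kw(60, 480, derate=1.0), 1))  # ~49.9 kW -> "almost 50KW at 100% rating"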

We have a few of the older Hubbell 50A twistloks here that were used for some
sort of signal processing equipment back in the day.

> Also remember that many of the larger servers (such as the Dell 6850s or
> 6950s) are 240V and will require a pair of dedicated circuits (20A or 30A).

The 6950 can run on 120VAC. That is one of the primary reasons we bought
6950's with Opterons instead of 6850's with Xeons; I only had 120VAC-capable
UPS's at the time.

With router densities going way up, and heating going along with them, this
facilities issue can even impact the network operator.

> I would also recommend that you look into in-row power distribution
> cabinets like the Liebert FDC.

We have Liebert PPA's here. Two 125's and a 50.

> Grounding is a topic that is worthy of its own book. Consult an
> electrician used to working with data centers. Don't overlook this
> critical thing.

Ground reference grid. See Cisco's 'Building the Best Data Center for your
Business' book and/or Sun's Blueprint series datacenter book for more good
information. Also be thoroughly familiar with NEC Article 645.

While this discussion might seem out of the ordinary for a network operator's
group, it is a very good discussion.

Another good resource for datacenter/commcenter information is
www.datacenterknowledge.com; at least I've found it to be.
--
Lamar Owen
Chief Information Officer
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC 28772
(828)862-555
www.pari.edu
Lamar Owen
2008-03-24 20:07:51 UTC
Permalink
On Monday 24 March 2008, Robert E. Seastrom wrote:
> Lamar Owen <***@pari.edu> writes:
> > While this discussion might seem out of the ordinary for a network
> > operator's group, it is a very good discussion.
>
> The sad part is the extraordinariness of discussions that are so
> solidly on-topic as this one. Thanks for your contribution; I (at
> least) really appreciate it.

You're quite welcome.

There is a whole 'nother side to this, too, especially for those who run DC
power. The ampacity of conductors is quite a bit higher for DC; I have a
copy of the best book on that subject, written by a telco engineer. Let's
see:

"DC Power System Design for Telecommunications" by Whitham D. Reeve, Wiley is
the publisher.

This is an expensive book; about $100 from Wiley; the low price on Amazon right
now is $75.75. This one goes over EVERYTHING when it comes to DC power
distribution design and implementation. It was worth the price I paid,
that's for sure.

We have two 200A Lorains here, with A battery being 450Ah of C&D flooded
cells, and B battery being a bank of 4 135Ah 12V sealed AGM batteries. Our
core switches and all but one of our core routers have DC power supplies.

Incidentally, all of this is fresh to me primarily because I'm in the process
of building a new datacenter and moving our existing equipment into it,
primarily for RFI mitigation reasons.
--
Lamar Owen
Chief Information Officer
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC 28772
(828)862-555
www.pari.edu
John Lee
2008-03-23 05:34:41 UTC
Permalink
Justin M. Streiner
2008-03-23 06:56:57 UTC
Permalink
On Sat, 22 Mar 2008, Patrick Giagnocavo wrote:

> I am discussing with some investors the possible setup of new datacenter
> space.

> Obviously we want to be able to fill a rack, and in order to do so, we need
> to provide enough power to each rack.

> Right now we are in spreadsheet mode evaluating different scenarios.

> Are there cases where more than 6000W per rack would be needed?

Is this just for servers, or could there be network gear in the racks as
well? We normally deploy our 6509s with 6000W AC power supplies these
days, and I do have some that can draw close to or over 3000W on a
continuous basis. A fully populated 6513 with power-hungry blades
could eat 6000W.

It's been a while since I've tumbled the numbers, but I think a 42U rack
full of 1U servers or blade servers could chew through 6000W and still be
hungry. Are you also taking into account a worst-case situation, i.e.
everything in the rack powering on at the same time, such as after a
power outage?
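
To put rough numbers on that, a sketch of how quickly a rack of 1U boxes
runs through a 6,000W budget; the per-server draws are illustrative
assumptions, not figures from the post:

    rack_units = 42
    budget_w = 6000

    for per_server_w in (150, 250, 350):          # assumed steady-state draw per 1U server
        servers_supported = budget_w // per_server_w
        full_rack_w = rack_units * per_server_w
        print(per_server_w, servers_supported, full_rack_w)
    # 150 W -> 40 servers fit in 6 kW; a full rack would pull ~6.3 kW
    # 250 W -> 24 servers fit;          a full rack would pull ~10.5 kW
    # 350 W -> 17 servers fit;          a full rack would pull ~14.7 kW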

> (We are not worried about cooling due to the special circumstances of the
> space.)

> Would someone pay extra for > 7KW in a rack? What would be the maximum you
> could ever see yourself needing in order to power all 42U?

I don't know what you mean by 'extra', but I'd imagine that if someone
needs 7KW or more in a rack, then they'd be prepared to pay for the amount
of juice they use. This also means deploying a metering/monitoring
solution so you can track how much juice your colo customers use and bill
them accordingly.

Power consumption, both direct (by the equipment itself) and indirect
(cooling required to dissipate the heat generated by said equipment) is a
big issue in data center environments these days. Cooling might not be an
issue in your setup, but it is a big headache for most large
enterprise/data center operators.

jms
Leo Bicknell
2008-03-23 15:31:06 UTC
Permalink
Petri Helenius
2008-03-23 16:25:53 UTC
Permalink
Leo Bicknell wrote:

> Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W
> * 42 = 25,200W.
>
Supermicro has the "1U Twin" which is 980W for two dual-slot machines in a
1U form factor:
http://www.supermicro.com/products/system/1U/6015/SYS-6015TW-TB.cfm

If you can accommodate that, it should be pretty safe for anything else.
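
The same worst-case arithmetic as Leo's, extended to the 1U Twin; these are
nameplate numbers, so an upper bound rather than measured draw:

    RACK_UNITS = 42

    dual_xeon_1u_w = 600      # Leo's example: one 600W supply per 1U box
    twin_1u_w = 980           # Supermicro "1U Twin": two nodes behind one 980W supply

    print(RACK_UNITS * dual_xeon_1u_w)   # 25,200 W per rack
    print(RACK_UNITS * twin_1u_w)        # 41,160 W per rack, and 84 nodes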

Pet
Ray Burkholder
2008-03-23 17:14:06 UTC
Permalink
> Leo Bicknell wrote:
>
> > Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W
> > * 42 = 25,200W.
> >
> Supermicro has the "1U Twin" which is 980W for two dual-slot
> machines in a 1U form factor:
> http://www.supermicro.com/products/system/1U/6015/SYS-6015TW-TB.cfm
>
> If you can accommodate that, it should be pretty safe for
> anything else.
>

My desktop has a 680 Watt power supply, but according to a meter I once
connected, it is only running at 350 to 400 Watts. So if a server has a
980W power supply, does the rack power need to be designed to handle
multiples of such a beast, even though the server may not come close
(because it may not be fully loaded with drives or whatever)? Wouldn't it
be better to do actual measurements to see what the real draw might be?

--
Scanned for viruses and dangerous content at
http://www.oneunified.net and is believed to be clean.
Mike Tancsa
2008-03-23 17:36:16 UTC
Permalink
At 01:14 PM 3/23/2008, Ray Burkholder wrote:

>My desktop has a 680 Watt power supply, but according to a meter I once
>connected, it is only running at 350 to 400 Watts. So if a server has a
>980W power supply, does the rack power need to be designed to handle
>multiples of such a beast, even though the server may not come close
>(because it may not be fully loaded with drives or whatever)? Wouldn't it
>be better to do actual measurements to see what the real draw might be?

The startup draw can be quite a bit more. I think before all those
fancy power saving features kick in, some of the servers we have can
draw quite a bit on initial bootup as they spin the fans 100% and
spin up disks etc.
I also find the efficiencies of boards really vary. In our spam
scanning cluster we used some "low end" RS480 boards by ECS (AMD
Socket 939). Cool to run, to the point where on the bench you would
touch the various heat sinks and wonder if it was powered up. This
compared to some of our Tyan 939 "server boards" which could blister
your finger if you touched the heat sink too long.

---Mike
Paul Vixie
2008-03-23 18:29:28 UTC
Permalink
***@mail.com (John Curran) writes:

> Also, you're still going to want to size the power drop so that
> the measured load won't exceed 80% capacity due to code.

that's true of output breakers, panel busbars, and wire. on the other
hand, transformers (e.g., 480->208 or 12K->480) are rated at 100%, as
are input breakers and of course generators.
--
Paul Vixie
Scott Weeks
2008-03-24 20:24:18 UTC
Permalink
----- eddy+public+***@noc.everquick.net wrote: -------

PG> (We are not worried about cooling due to the special circumstances
PG> of the space.)

ixp.aq? ;-)
----------------------------------------------------


Back in 2000-2002 I exchanged email several times with a guy responsible for a (the?) datacenter in Antarctica and, funny enough, they had heat problems there.

scott
Barry Shein
2008-03-24 21:20:24 UTC
Permalink
Here's another project which has dubbed themselves "teraflops from
milliwatts" which I believe is shipping iron. I have no first-hand
experience with their products.

http://www.sicortex.com/

--
-Barry Shein

The World | ***@TheWorld.com | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD | Login: Nationwide
Software Tool & Die | Public Access Internet | SINCE 1989 *oo*
Duane Waddle
2008-03-25 02:31:53 UTC
Permalink
Deepak Jain
2008-03-25 03:27:14 UTC
Permalink
While I enjoy hand waving as much as the next guy... reading over this
thread, there are several definitions of sq ft (ft^2) here and folks are
interchanging their uses whether aware of it or not.

1) sq ft = the amount of sq ft your cabinet/cage sits on

2) sq ft = the amount of sq ft attributed to your cabinet/cage on the
data center floor including aisles and access-ways

3) sq ft = the amount of sq ft attributed to your cabinet/cage on the
data center floor including aisles and access-ways and on-the-floor
cooling equipment

4) sq ft = the amount of sq ft attributed to your cabinet/cage on the
data center floor including aisles and access-ways and on-the-floor
cooling equipment AND the amount attributed to your cabinet/cage from
the equipment room (UPS, batteries, transformers, etc).

The first definition only applies to those renting cabinets.
The first/second definitions apply to those renting cabinets and cages
with aisles or access-ways in them.
The first/second/third definitions apply to operators of datacenters
within non-datacenter buildings (where the datacenter is NOT the entire load
in the facility) and renters.
All the definitions apply to anyone with a dedicated datacenter space
(and equipment room) within a building or a stand-alone datacenter.

By rough figuring:

With a 30KW cabinet, while one sounds lovely, a huge amount of space is going
to be turned over to most or all of a dedicated PCU and 1/15th of the
infrastructure of a 500KVA UPS (@0.9PF), including batteries, transformers,
etc.
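
The 1/15th figure checks out if the UPS is sized against its real-power
rating (a quick sketch):

    ups_kva = 500
    power_factor = 0.9
    cabinet_kw = 30

    usable_kw = ups_kva * power_factor      # 450 kW of real power
    print(usable_kw / cabinet_kw)           # 15.0 -> each 30 kW cabinet claims 1/15th of the UPS plant
    print(usable_kw / 3)                    # 150.0 -> or 150 cabinets at a 3 kW average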

Assuming power costs and associated maintenance are assigned
appropriately to this one cabinet, the amount of square footage
associated (definition #4) with that one cabinet changes by less than 30%
whether you are going 30KW in one cabinet or 3KW in each of 10 cabinets.

As an owner/operator of very large dedicated data centers for very large
customers of all sorts, I can promise you no one is doing datacenters
full (500+ cabinets) of 10KW+ (production, not theoretical) each in a
dedicated facility with no other uses to lower the average heat demand.
Probably not even at smaller numbers, either.

Easy caveat:

A "datacenter" that is a fraction of a large building (e.g. a 20,000 sq
ft data center within a 250,000 sq ft building) can appear to bend these
rules because the overall load (by definition #4) is averaged against it.

There is simply no economic reason to do so (at scale) -- short of water
cooling -- there is a fixed amount of space taken up per unit-ton of air
cooling (medium-<air>-medium) for heat rejection. Factor in the premiums
associated with the highest density equipment (e.g. blades, in-cabinet
PDUs, etc) and the economics become even clearer.

Even ignoring heat rejection, the battery + UPS gear for 500KVA (even
with minimal battery times) is approximately the same size (physically)
as the 12 cabinets or so it takes to reach that capacity. [The same applies
for flywheel/kinetic systems.]

Our friends who do calculus in their heads can already figure out the
engineering or business min-max equation to optimize this based
on a certain level of redundancy, run-time, etc, and there aren't
multiple answers. (Hint: certain variables drop out as rounding errors.)

TANSTAAFL: if you are a 1-4 cabinet (or similarly small) use in a larger
datacenter (definitions 1-2), by all means shove as much gear as you can
in as long as there is no additional power premium. If they are giving
you space for power or the premium is too high, take as much space as
you can for the amount of power you need -- your equipment and your
budgets will thank you. If you are operating a data center without a
bigger use in the building to average against, you really don't have
many ways to cheat the math here. (E.g. geothermal only provides a delta
between definitions #3 and #4 and a lower energy premium.)

Deepak Jain
AiNET
Paul Vixie
2008-03-25 06:17:15 UTC
Permalink
this has been, to me, one of the most fascinating nanog threads in years.

at the moment my own datacenter problem is filtration. isc lives in a place
where outside air is quite cool enough for server inlet seven or more months
out of the year. we've also got quite high ceilings. a 2HP roof fan will
move 10000 cubic feet per minute. we've got enough make-up air for that.
but, the filters on the make-up air have to be cleaned several times a week,
and at the moment that's a manual operation.

mechanical systems, by comparison, only push 20% make-up air, and the filters
seem to last a month or more between maintenance events. i'm stuck with the
same question that vexes the U S Army when they send the M1A1 into sandstorms,
or that caused a lot of shutdowns in NYC in the days after 9/11: what kind of
automation can i deploy that will precipitate the particulates so that air
can move (for cooling) and so that air won't bring grit (which is conductive)?
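
For a rough sense of how much heat 10,000 CFM of outside air can carry away,
a sketch using the standard sensible-heat relation; the 10-15 degree
(Celsius) rise across the servers is an assumed value, not something from
the post:

    # Sensible heat carried by an air stream: P = rho * cp * flow * delta_T
    rho = 1.2        # kg/m^3, air near sea level
    cp = 1005        # J/(kg*K)
    cfm = 10_000
    flow_m3s = cfm * 0.000471947                  # 1 CFM ~ 4.72e-4 m^3/s

    for delta_t in (10, 15):                      # assumed inlet-to-exhaust rise, in K
        watts = rho * cp * flow_m3s * delta_t
        print(delta_t, round(watts / 1000, 1))    # ~56.9 kW at 10 K, ~85.4 kW at 15 K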
--
Paul Vixie
m***@bt.com
2008-03-25 11:23:51 UTC
Permalink
> what kind of automation can i deploy that will
> precipitate the particulates so that air can move (for
> cooling) and so that air won't bring grit (which is conductive)?

Have you considered a two-step process using water in the first
step to remove particulates (water spray perhaps?) and then an
industrial air-drier in the second step?

Alternatively, have you considered air liquifiers like those
used in mining (Draegerman suits) which produce very cold liquid
air? The idea would be to spray the liquid air inside the data
center rather than blowing in the gaseous form.

Of course, I don't know if the economics of this work out, although
there are people working on increasing the efficiency of air
liquification, so there is quite a bit of price variation between
older methods and newer ones.

--Michael Dillon
Adrian Chadd
2008-03-25 11:52:55 UTC
Permalink
This thread begs a question - how much do you think it'd be worth to do
things more efficiently?


Adrian
Leigh Porter
2008-03-25 12:23:15 UTC
Permalink
$

Adrian Chadd wrote:
> This thread begs a question - how much do you think it'd be worth to do
> things more efficiently?




> Adrian
>
Paul Vixie
2008-03-25 15:57:09 UTC
Permalink
***@creative.net.au (Adrian Chadd) writes:

> This thread begs a question - how much do you think it'd be worth to do
> things more efficiently?

this is a strict business decision involving sustainability and TCO. if it
takes one watt of mechanical to transfer heat away from every watt delivered,
whereas ambient air with good-enough filtration will let one watt of roof fan
transfer the heat away from five delivered watts, then it's a no-brainer. but
as i said at the outset, i am vexed at the moment by the filtration costs.
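
Expressed as a cooling-overhead ratio, the two cases in that paragraph look
like this (a sketch, with an arbitrary IT load):

    def cooling_overhead(it_watts, cooling_watts_per_it_watt):
        # Total facility watts needed per delivered IT watt.
        cooling = it_watts * cooling_watts_per_it_watt
        return (it_watts + cooling) / it_watts

    print(cooling_overhead(100_000, 1.0))    # 2.0 -> mechanical: one watt of cooling per IT watt
    print(cooling_overhead(100_000, 0.2))    # 1.2 -> roof fan: one watt per five IT watts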
--
Paul Vixie
Paul Vixie
2008-03-25 21:00:08 UTC
Permalink
> Have you made any calculations if geo-cooling makes sense in your region to
> fill in the hottest summer months or is drilling just too expensive for the
> return?

i'm too close to san francisco bay.
Petri Helenius
2008-03-25 20:36:11 UTC
Permalink
Paul Vixie wrote:

> this is a strict business decision involving sustainability and TCO. if it
> takes one watt of mechanical to transfer heat away from every watt delivered,
> whereas ambient air with good-enough filtration will let one watt of roof fan
> transfer the heat away from five delivered watts, then it's a no-brainer. but
> as i said at the outset, i am vexed at the moment by the filtration costs.
>
Have you made any calculations if geo-cooling makes sense in your region
to fill in the hottest summer months or is drilling just too expensive
for the return?

Pet
William Herrin
2008-03-25 23:01:03 UTC
Permalink
On Tue, Mar 25, 2008 at 5:00 PM, Paul Vixie <***@vix.com> wrote:

> > Have you made any calculations if geo-cooling makes sense in your region to
> > fill in the hottest summer months or is drilling just too expensive for the
> > return?

> i'm too close to san francisco bay.

Paul,

Why is that bad? I thought ground-source HVAC systems worked better if
the ground was saturated with water. Better thermal conductivity than
dry soil.

My problem finding someone to install a ground-source system was that
everyone for miles is on city water. You have to be able to drill a
hole in the ground and the folks familiar with well-drilling equipment
are three hours away.

Regards,
Bill Herrin


--
William D. Herrin ................ ***@dirtside.com ***@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
Paul Vixie
2008-03-25 23:14:36 UTC
Permalink
> > i'm too close to san francisco bay.
>
> Why is that bad? I thought ground-source HVAC systems worked better if
> the ground was saturated with water. Better thermal conductivity than
> dry soil.

aside from the corrosive nature of the salt and other minerals, there is an
unbelievable maze of permits from various layers of government since there's
a protected marshland as well as habitat restoration within a few miles. i
think it's safe to say that San Quentin could not be built under current
rules.

> My problem finding someone to install a ground-source system was that
> everyone for miles is on city water. You have to be able to drill a
> hole in the ground and the folks familiar with well-drilling equipment
> are three hours away.

i could drill in the warehouse, i suppose, and truck the slurry out by night.
Petri Helenius
2008-03-26 06:23:24 UTC
Permalink
Paul Vixie wrote:

> aside from the corrosive nature of the salt and other minerals, there is an
> unbelievable maze of permits from various layers of government since there's
> a protected marshland as well as habitat restoration within a few miles. i
> think it's safe to say that San Quentin could not be built under current
> rules.
>
The ones I have are MDPE (Medium Density Polyethylene) and I haven't
understood that the plastic would have corrosive features. Obviously it
can come down to regulation depending on what you use as a cooling agent,
but water is very effective if there is no fear of freezing (I use
ethanol for that reason). The whole system is closed circuit; I'm not
pumping water out of the ground but circulating the ethanol in the
vertical ground piping of approximately 360 meters. The amount of slurry
that came out of the hole was on the order of 5-6 cubic meters. Cannot
remember exactly what the individual parts cost but the total investment
was less than $10k (drilling, piping, circulation, air chiller, fluids,
etc.) for a system with somewhat over 4kW of cooling capacity. (I'm
limited by the airflow, not by the ground hole, if the calculations prove
correct.)
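
Working those figures out per unit, a quick sketch using the numbers in the
paragraph above:

    total_cost_usd = 10_000        # "less than $10k" all-in
    cooling_kw = 4                 # "somewhat over 4kW" of cooling capacity
    borehole_m = 360               # vertical ground piping

    print(total_cost_usd / cooling_kw)        # ~$2,500 per kW of cooling, as an upper bound
    print(cooling_kw * 1000 / borehole_m)     # ~11 W of heat rejection per meter of borehole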

Pet
Dorn Hetzel
2008-03-26 11:37:58 UTC
Permalink
Petri Helenius
2008-03-26 13:06:43 UTC
Permalink
Dorn Hetzel wrote:
> I believe some of the calculations for hole/trench sizing per ton used
> for geothermal exchange heating/cooling applications rely on the
> seasonal nature of heating/cooling.

> I have heard that if you either heat or cool on a continuous permanent
> basis, year-round, then you need to allow for more hole or trench
> since the cold/heat doesn't have an off-season to equalize from the
> surrounding earth.

> I don't have hard facts on hand, but it might be a factor worth verifying.

That is definitely a factor. I do know that you can run such systems
24/7 for multiple months, but whether the number is 3, 6 or 8 with the
regular sizing I don't know. Obviously it also depends on what's the
target temperature for incoming air; if you shoot for 12-13'C the
warming of the hole cannot be more than a few degrees, but for 17-20'C
one would have double the margin to play with. It's also (depending on
your kWh cost) economically feasible to combine geothermal pre-cooling
with "traditional" chillers to take the outside air first from 40'C to
25'C and then chill it further more expensively. This also works the
other way around for us in the colder climates where you actually need
to heat up the inbound air. That way you'll also accelerate the cooling
of the hole.

I'm sure somebody on the list has the necessary math to work out how
many joules one can push into a hole for one degree of temperature rise.
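
As a very rough cut at that math: a sketch assuming granite-like rock
properties and a guessed cylinder of ground that actually participates; a
real answer needs the ground's thermal diffusivity and a time horizon:

    import math

    # Assumed granite-like rock: density ~2700 kg/m^3, specific heat ~800 J/(kg*K).
    rho_rock = 2700
    cp_rock = 800

    # Assume only a cylinder of ground ~2 m in radius around a 360 m borehole warms up.
    radius_m, depth_m = 2.0, 360
    volume_m3 = math.pi * radius_m**2 * depth_m

    joules_per_kelvin = rho_rock * cp_rock * volume_m3
    print(f"{joules_per_kelvin:.2e} J per degree of rise")          # ~9.8e9 J/K
    print(round(joules_per_kelvin / 4000 / 3600),
          "hours of continuous 4 kW per degree of rise")            # ~679 hours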

Pet
Alexander Harrowell
2008-03-25 12:31:15 UTC
Permalink
Leigh Porter
2008-03-25 12:35:53 UTC
Permalink
That would be pretty good. But seeing some of the disastrous cabling
situations, it'd have to be made pretty idiot proof.

Nice double sealed idiot proof piping with self-sealing ends.

--
Leigh

Alexander Harrowell wrote:
> I still think the industry needs to standardise water cooling to popularise
> it; if there were two water ports on all the pizzaboxes next to the RJ45s,
> and a standard set of flexible pipes, how many people would start using it?
> There's probably a medical, automotive or aerospace standard out there.

Deepak Jain
2008-03-25 17:56:03 UTC
Permalink
There are vendors working on this, but the point here is that unlike the
medical, automotive or aerospace industries.... Computing (in general)
platforms aren't regulated the same way... you won't see random gear
hanging off the inside of an MRI (in general), or in an airplane, etc

Computer vendors make lots of random sizes and depths of boxes. Want to
get really ambitious? Let's find a set of rails that works with all
rackmountable equipment and cabinets before we get crazy with the water
cooling

The point is that water has lots of issues. Water quality being one of
them. It's fine to "toy" with water cooling a home clocked-up PC. When
you have experience water cooling mainframes or using large chiller
plants (1000+ tons) for years on end, there is a lot of discipline
required to "do it right" -- a discipline that many shops and operators
haven't needed up to this point.

Deepak Jain
AiNET

Alexander Harrowell wrote:
> I still think the industry needs to standardise water cooling to
> popularise it; if there were two water ports on all the pizzaboxes next
> to the RJ45s, and a standard set of flexible pipes, how many people
> would start using it? There's probably a medical, automotive or
> aerospace standard out there.
Alexander Harrowell
2008-03-25 13:08:05 UTC
Permalink
Dorn Hetzel
2008-03-25 13:11:37 UTC
Permalink
Chris Adams
2008-03-25 13:38:25 UTC
Permalink
Once upon a time, Dorn Hetzel <***@gmail.com> said:
> Of course, my chemistry is a little rusty, so I'm not sure about the
> prospects for a non-toxic, non-flammable, non-conductive substance with
> workable fluid flow and heat transfer properties :)

Fluorinert - it worked (more or less) for the Cray Triton.
--
Chris Adams <***@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.
Ryan Otis
2008-03-25 15:47:38 UTC
Permalink
I think the modern equivalent is HFE, manufactured by 3M; HFE-7100 is
commonly used in the ATE industry for liquid cooling of test heads. It
is designed for very low temperatures (-135degC to 61degC) so it might
not be suitable for general datacenter use. HFE-7500 looks like a
better fit (-100degC to 130degC).


-----Original Message-----
From: owner-***@merit.edu [mailto:owner-***@merit.edu] On Behalf Of
Chris Adams
Sent: Tuesday, March 25, 2008 6:38 AM
To: nanog list
Subject: Re: rack power question

Once upon a time, Dorn Hetzel <***@gmail.com> said:
> Of course, my chemistry is a little rusty, so I'm not sure about the
> prospects for a non-toxic, non-flammable, non-conductive substance
> with workable fluid flow and heat transfer properties :)

Fluorinert - it worked (more or less) for the Cray Triton.
--
Chris Adams <***@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services I don't
speak for anybody but myself - that's enough trouble.
Joe Abley
2008-03-25 14:15:12 UTC
Permalink
On 25 Mar 2008, at 09:11 , Dorn Hetzel wrote:

> It would sure be nice if along with choosing to order servers with
> DC or AC power inputs one could choose air or water cooling.

> Or perhaps some non-conductive working fluid instead of water. That
> might not carry quite as much heat as water, but it would surely
> carry more than air and if chosen correctly would have more benign
> results when the inevitable leaks and spills occur.

The conductivity of (ion-carrying) water seems like a sensible thing
to worry about. The other thing is its boiling point.

I presume that the fact that nobody ever brings that up means it's a
non-issue, but it'd be good to understand why.

Seems to me that any large-scale system designed to distribute water
for cooling has the potential for hot spots to appear, and that any
hot spot that approaches 100C is going to cause some interesting
problems.

Wouldn't some light mineral oil be a better option than water?

Joe
Justin Shore
2008-03-25 14:19:40 UTC
Permalink
Dorn Hetzel wrote:
> Of course, my chemistry is a little rusty, so I'm not sure about the
> prospects for a non-toxic, non-flammable, non-conductive substance with
> workable fluid flow and heat transfer properties :)

Mineral oil? I'm not sure about the non-flammable part though. Not all
oils burn but I'm not sure if mineral oil is one of them. It is used
for immersion cooling though.

Justin
Alex Rubenstein
2008-03-25 15:11:39 UTC
Permalink
Well, seeing as that most pad mounted transformers use mineral oil as a
heat transfer agent (in applications up to and exceeding 230kV), I don't
suspect it is of issue.

However, we've all seen nice transformer fires.

> -----Original Message-----
> From: owner-***@merit.edu [mailto:owner-***@merit.edu] On Behalf Of
> Justin Shore
> Sent: Tuesday, March 25, 2008 10:20 AM
> To: Dorn Hetzel
> Cc: nanog list
> Subject: Re: rack power question
>
>
> Dorn Hetzel wrote:
> > Of course, my chemistry is a little rusty, so I'm not sure about the
> > prospects for a non-toxic, non-flammable, non-conductive substance with
> > workable fluid flow and heat transfer properties :)
>
> Mineral oil? I'm not sure about the non-flammable part though. Not all
> oils burn but I'm not sure if mineral oil is one of them. It is used
> for immersion cooling though.
>
> Justin
Brian Raaen
2008-03-25 15:15:36 UTC
Permalink
Russia (or the USSR at that time) used to use liquid graphite to cool their
nuclear reactors, even though it was flammable.... of course that was what
they were using in Chernobyl.

--
Brian Raaen
Network Engineer
***@zcorum.com

On Tuesday 25 March 2008, you wrote:
>
> Dorn Hetzel wrote:
> > Of course, my chemistry is a little rusty, so I'm not sure about the
> > prospects for a non-toxic, non-flammable, non-conductive substance with
> > workable fluid flow and heat transfer properties :)
>
> Mineral oil?  I'm not sure about the non-flammable part though.  Not all
> oils burn but I'm not sure if mineral oil is one of them.  It is used
> for immersion cooling though.
>
> Justin
>
Marshall Eubanks
2008-03-25 15:51:51 UTC
Permalink
On Mar 25, 2008, at 11:15 AM, Brian Raaen wrote:

> Russia (or the USSR at that time) used to use liquid graphite to
> cool their
> nuclear reactors, even though it was flammable.... of course that
> was what
> they were using in Chernobyl.


The RBMK-1000 used graphite for moderation and water for cooling.

Regards
Marshall


> --
> Brian Raaen
> Network Engineer
> ***@zcorum.com

> On Tuesday 25 March 2008, you wrote:
>
>> Dorn Hetzel wrote:
>>> Of course, my chemistry is a little rusty, so I'm not sure about the
>>> prospects for a non-toxic, non-flammable, non-conductive
>>> substance with
>>> workable fluid flow and heat transfer properties :)
>
>> Mineral oil? I'm not sure about the non-flammable part though.
>> Not all
>> oils burn but I'm not sure if mineral oil is one of them. It is used
>> for immersion cooling though.
>
>> Justin
>
>
Joel Jaeggli
2008-03-25 16:14:06 UTC
Permalink
Brian Raaen wrote:
> Russia (or the USSR at that time) used to use liquid graphite to cool their
> nuclear reactors, even though it was flammable.... of course that was what
> they were using in Chernobyl.

This has diverged far enough that it's now off the topic of cooling. The
melting point of carbon, however, is around 3800 K...

you can get it to ignite in graphite form at roughly half that.

>
Leigh Porter
2008-03-25 16:56:59 UTC
Permalink
Joel Jaeggli wrote:

> Brian Raaen wrote:
>> Russia (or the USSR at that time) used to use liquid graphite to cool
>> their nuclear reactors, even though it was flammable.... of course
>> that was what they were using in Chernobyl.

> This has diverged far enough that it's now off the topic of cooling.
> The melting point of carbon, however, is around 3800 K...

> you can get it to ignite in graphite form at roughly half that.

>
The graphite was used as a moderator, not as a coolant.

--
Leigh
Michael Holstein
2008-03-25 19:31:10 UTC
Permalink
> Mineral oil? I'm not sure about the non-flammable part though. Not
> all oils burn but I'm not sure if mineral oil is one of them. It is
> used for immersion cooling though

It burns quite well ..
http://video.aol.com/video-detail/transformer-explosion/159983122

Cheers,

Michael Holstein
Cleveland State University
m***@bt.com
2008-03-25 16:04:58 UTC
Permalink
Barton F Bruce
2008-04-05 10:39:15 UTC
Permalink
>> A close second might be liquid cooled air tight cabinets with the
>> air/water
>> heat exchangers (redundant pair) at the bottom where leaks are less of an
>> issue (drip tray, anyone? :) )..

> Something like what you suggest has been around for a year or two now,
> though using liquid CO2 as the coolant. It doesn't require particularly
> tight cabs.

> http://www.troxaitcs.co.uk/aitcs/products

Is anyone using these over here?

This is a far more significant strategy than simply using an alternative to
water to carry the heat from the cabinets.

The game is PHASE CHANGE, but unlike our traditional fairly complicated
refrigeration systems with oil return issues and artificially high
head pressures simply to have a 100PSI MOPD to keep full flow through the
TXV (even with low ambients outside), this is in its simplest form viewed as
a PUMPED LIQUID heat pipe system, where there is no need for large pressure
drops as the fluid goes around the loop. Your pump only has to cover piping
losses and any elevation differences between the COLO space and the central
machinery.

There is NO insulation at all. The liquid being pumped out to finned coils
on the back of each cabinet is at room temperature, and as it grabs heat from
the cabinet exhaust air (which is very efficient because you have it HOT and
not needlessly diluted with other room air) some of the liquid flashes to
gas and you have a slurry that can easily be engineered to handle any size
load you care to put in the rack. The more heat you add, the more gas and
the less liquid you get back, but as long as there is still some liquid, the
fluid stream is still at the room temperature it was at before entering the
coil. It is perfectly happy trying to cool an empty cabinet and does not
over cool that area, and can carry as much overload as you are prepared to
pay to have built in.
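To put rough numbers on that liquid/gas split, here is a minimal sizing sketch in Python. The latent heat, liquid density and 30% vapor fraction are assumed ballpark figures for CO2 near room temperature, and the 20 kW rack load is just an example; none of these numbers come from the post itself.

# Rough sizing sketch for a pumped liquid-CO2 loop like the one described above.
# Assumed numbers (not from the post): latent heat of CO2 near 20 C is roughly
# 150 kJ/kg, liquid density roughly 770 kg/m^3, and only ~30% of the pumped
# liquid is allowed to flash so the return stream stays a liquid/gas slurry.
rack_load_w = 20_000.0        # example 20 kW cabinet
latent_heat_j_per_kg = 150e3  # CO2 heat of vaporization near room temperature
liquid_density = 770.0        # kg/m^3 for liquid CO2 around 20 C
max_vapor_fraction = 0.30     # keep most of the returning stream liquid

mass_flow = rack_load_w / (latent_heat_j_per_kg * max_vapor_fraction)  # kg/s
volume_flow_l_per_s = mass_flow * 1000.0 / liquid_density

print(f"mass flow   ~{mass_flow:.2f} kg/s")
print(f"liquid flow ~{volume_flow_l_per_s:.2f} L/s for a {rack_load_w/1000:.0f} kW rack")

Under these assumptions a 20 kW cabinet needs only about half a liter per second of pumped liquid, which is why the pump work stays so small compared with a conventional compressor.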

At the central equipment, the liquid goes to the bottom of the receiver
ready for immediate pumping again, and the gas is condensed back to liquid
on cold coils in this receiver (think of a large traditional shell and tube
heat exchanger that also acts as a receiver and also a slight subcooler for
the liquid). The coils can be DX fed with any conventional refrigerant, or
could be tied to the building's chilled water supply. Parallel tube bundles
can provide redundant and isolated systems, and duplicating this whole
system with alternate rows or even alternate cabinets fed from different
systems lets you function even with a major failure. Read about their
scenarios when a cooling door is open or even removed. The adjacent cabinets
just get warmer entering air and can easily carry the load. Enough 55 degree
ground water in some places might even let you work with a very big shell
and tube condenser and NO conventional refrigeration system at all.

If you have every single cabinet packed full, having just two systems each
needing full double+ capacity would not be as good as having 3 or 4
interleaved systems, but that is simply a design decision, and one that can
be partially deferred. Pipe for 4 interleaved isolated systems, and then run
the ODD ones into one condensing/pumping system, and the EVEN ones into
another. As cabinets fill, and as dollars become available for paranoia, add
the other central units and flick a few normally padlocked preprovisioned
valves and you are done. The valves stay for various backup strategies. You
can accidentally leak some CO2 from one system to another and then sneak it
back. There are NO parallel compressor oil return issues, just a large range
between min and max acceptable charges of CO2.

The big problem is that CO2 at room temperature is about 1000 PSI, so all
this is welded stainless steel and flexible metal hoses. There need not be
enough CO2 in any one system to present any suffocation hazard, but you DO
want to be totally aware of that in the design.

Unlike regular refrigerants, liquid CO2 is just dirt cheap, and you just
vent it when changing a finned rear door - each has its own valves at the
cabinet top main pipes. You just go slowly so you don't cover everything or
anyone with dry ice chips.

Here is another site hawking those same Trox systems:

http://www.modbs.co.uk/news/fullstory.php/aid/1735/The_next_generation_of_cooling__for_computer_rooms.html

Over in Europe they are talking of a demo being easily done if you already
have chilled water the demo could use.

A recent trade mag had a small pumped heat pipe like R134a system for INSIDE
electronic systems - a miniature version of these big CO2 systems. Heat
producing devices could be directly mounted to the evaporator rather than
use air cooling fins or a water based system, and the condenser could be
above, or below, or wherever you need to put it and could function in
arbitrary positions in the field. And no heat pipe wicks needed. The fully
hermetic pump has a 50K hour MTBF and is in this case pumping R134a. The
pump looked like one of those spun copper totally sealed inline dryers. I
suspect it was for high end computing and military gear, and not home PCs,
but clearly could move a lot of heat from very dense spaces. Parker is so
sprawling, I can't now seem to readily find the division that makes that
pump and that wants to design and build the whole subsystem for you, but I
bet we will be seeing a lot of these as power density goes up.


And on a totally separate rack power topic (code issues aside - that can be
changed given need and time), I would LOVE to see some of these 100 - 250V
universal switching supplies instead made to run 100 - 300V so they could be
run off 277V.
It is silly to take 277/480 through wastefully heat producing Delta-Wyes
just to get 120/208 to feed monster power supplies that really should be fed
with what most buildings already have plenty of. The codes could easily
recognize high density controlled access data centers as an environment
where 277V would be OK to use for situations where it isn't OK now. Small
devices sharing the same cabinets should all be allowed to also use 277V.

We used to have cabinets of 3 phase 208Y fed Nortel 200Amp -48VDC rectifiers
for our CO batteries with as many 125KVA delta-wye transformers in front of
them as needed for any particular site. These rectifiers are up about 98+%
efficient (better than the transformers...) but as soon as these rectifiers
became available in 480V, we switched and retrofitted everywhere except very
small sites.

It is just silly to not be using 480 single or better 3 phase for very large
devices, and 277V for smaller but still large devices, where the single pole
breaker gives more circuits per cabinet and 277V being near previous upper
voltage limits may mean simple supply changes.
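To illustrate the circuits-per-cabinet point, a small worked example follows. The 2 kW per-circuit load is a hypothetical figure, not from the post; the breaker pole counts reflect the usual single-pole 120V/277V and two-pole 208V arrangements.

# Illustrative arithmetic for the 277V argument above: current per circuit and
# circuits per 42-pole panelboard for the same per-circuit load at different
# supply voltages. The 2 kW load is an assumed example.
load_w = 2000.0                      # hypothetical per-circuit load
for volts, poles in [(120, 1), (208, 2), (277, 1)]:
    amps = load_w / volts            # ignores power factor for simplicity
    circuits = 42 // poles           # breaker positions in a 42-pole panel
    print(f"{volts:3d} V: {amps:5.1f} A per circuit, {circuits} circuits per panel")

The same load draws noticeably less current at 277V than at 120V or 208V, and every 277V circuit needs only a single breaker pole, so a given panelboard serves twice as many circuits as it would with line-to-line 208V feeds.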

And yes, I know the Delta-Wye gives a lot of transient isolation, and a
handy "newly derived neutral" to make a really good single point grounding
system feasible and very local, but 277/480 deserves a better chance.
John Lee
2008-03-25 13:46:11 UTC
Permalink
Paul Vixie
2008-03-25 14:03:29 UTC
Permalink
Matthew Crocker <***@crocker.com> wrote:

> Seal off the room so you can control your replacement air source. Put a
> series of cyclone dust collectors (think huge Dyson Vacuum) on your inbound
> air.
>
> http://www.proventilation.com/products/ProductsView.asp?page=1&gclid=CKyD04SRqJICFQUilgod-isIR

neat stuff. isc's neighbor has got one of these (for an industrial process).
they are noisy, and not 100% duty cycle rated, but it's an interesting idea.

> Then distribute your air through some electrostatic dust collectors.
>
> http://www.dustcollectorexperts.com/electrostatic

two of the ESP Disadvantages listed on that page are fatal in my application:

o High initial cost
o Materials with very high or low resistivity
are difficult to collect.

however, this page also mentions "baghouse" filters, which i'd also heard in
a private reply, and am now investigating.

> Then run it through HEPA filters.

:-). my servers don't have asthma. HEPA is hellishly expensive, annually,
due to the number of filter replacements you need when the duty cycle is 100%.

> How do you manage your humidity when you are pulling in 1% humidity 30
> degree air? It is more expensive to add water to the air than it is to cool
> it sometimes.

redwood city, california has signs over several streets leading to its oldtown
that say (and i'm not making this up) "climate best by government test". what
this appears to mean is, we have about three 30F weeks per year, and we have
three or four 100F weeks per year, and the rest of the time, it's between 50F and
70F, during which time the humidity is perfect for servers.
Alexander Harrowell
2008-03-25 15:16:27 UTC
Permalink
Ray Burkholder
2008-03-25 15:53:07 UTC
Permalink
Alexander Harrowell
2008-03-25 17:29:25 UTC
Permalink
Lamar Owen
2008-03-26 14:15:18 UTC
Permalink
On Monday 24 March 2008, Deepak Jain wrote:
> While I enjoy hand waving as much as the next guy... reading over this
> thread, there are several definitions of sq ft (ft^2) here and folks are
> interchanging their uses whether aware of it or not.
[snip]
> A 30KW cabinet while one sounds lovely, a huge amount of space is going
> to turned over to most or all of a dedicated PCU and 1/15th of the
> infrastructure of 500KVA UPS (@0.9PF) including batteries, transformers,
> etc.
[snip]
> Even ignoring heat rejection, the battery + UPS gear for 500KVA (even
> with minimal battery times) is approximately the same size (physically)
> as the 12 cabinets or so it takes to reach that capacity. [same applies
> for flywheel/kinetic systems]

This is certainly a fascinating thread.

One thing I haven't seen discussed, though, is the other big issue with
high-density equipment, and that is weight.

Those raised floors have a weight limit. In our case, our floors, built out
in the early 90's, have a 1500 lb per square inch point load rating, and a
7,000 pound per pedestal max weight. The static load rating of 300 pounds
per square foot on top of the point load rating doesn't sound too great, but
it's ok; we just have to be careful. Our floors are concrete-in-steel, on 24
inch pedestals, with stringers.

In contrast, a 42U rack loaded with 75 pound 1U servers is going to weigh
upwards of 3,150 pounds (if you figure 300 pounds for the rack and the PDU's
in the rack, make that 3,450 pounds). When we get to heavier than 75 pound
1U servers things are going to get dicey. Also in contrast, a fully loaded
EMC CX700 is about 2,000 pounds.
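A quick sketch of the arithmetic behind that concern, using the figures from the paragraph above; the 24 inch by 24 inch tile footprint is an assumption, not from the post.

# Loaded-rack weight vs. a 300 lb/sqft static floor rating.
servers = 42                 # 1U servers in a full rack (from the post)
server_lb = 75.0             # per-server weight (from the post)
rack_and_pdu_lb = 300.0      # rack plus PDUs (from the post)
total_lb = servers * server_lb + rack_and_pdu_lb

tile_area_sqft = 2.0 * 2.0   # one standard 24-inch raised-floor tile (assumption)
print(f"loaded rack: {total_lb:.0f} lb")
print(f"~{total_lb / tile_area_sqft:.0f} lb/sqft if the rack sits on one tile, "
      f"vs. a 300 lb/sqft static rating")

Spread over a single tile, the loaded rack works out to roughly 860 lb/sqft, which is why the static rating alone is not enough and the point load and pedestal limits have to be watched.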

It sounds more and more like simply charging for rack-occupied square footage
is an unsustainable business model. The four actual billables are power,
cooling (could be considered power), bandwidth, and weight. When we see
systems as dense as a Cray 2, but with modern IC's, we'll be treated to
Fluorinert waterfalls again. :-)
--
Lamar Owen
Chief Information Officer
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC 28772
(828)862-555
www.pari.edu
Patrick Shoemaker
2008-03-25 15:09:20 UTC
Permalink
Joe Abley wrote:
>
>
> On 25 Mar 2008, at 09:11 , Dorn Hetzel wrote:
>
>> It would sure be nice if along with choosing to order servers with DC
>> or AC power inputs one could choose air or water cooling.
>
>> Or perhaps some non-conductive working fluid instead of water. That
>> might not carry quite as much heat as water, but it would surely carry
>> more than air and if chosen correctly would have more benign results
>> when the inevitable leaks and spills occur.
>
> The conductivity of (ion-carrying) water seems like a sensible thing to
> worry about. The other thing is its boiling point.
>
> I presume that the fact that nobody ever brings that up means it's a
> non-issue, but it'd be good to understand why.
>
> Seems to me that any large-scale system designed to distribute water for
> cooling has the potential for hot spots to appear, and that any hot spot
> that approaches 100C is going to cause some interesting problems.
>
> Wouldn't some light mineral oil be a better option than water?
>
>
> Joe
>

With IT systems, the equipment being cooled would likely reach thermal
overload and trip offline before the cooling water could flash to steam.
Of course a properly designed system would have relief valves anyway.

One problem with mineral oil is the specific heat. Water has a specific
heat of 4.19 kJ/kg-degC. Light mineral oil is 1.67 kJ/kg-degC. That
means much higher mass flow rates (bigger pumps, tubing, more
dynamic head loss, etc) for oil than water to transfer the same amount of
heat. Oh, and if you want to see whether mineral oil burns, check out
this video: http://www.youtube.com/watch?v=YZipeaAkuC0 (that transformer
is filled with mineral oil).
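A minimal comparison of those mass flow rates, using the specific heats quoted above and an assumed 10 degC coolant temperature rise across an example 20 kW rack (both of those figures are illustrative, not from the post):

# Q = m_dot * cp * dT, solved for the mass flow m_dot.
heat_w = 20_000.0                               # example 20 kW rack
delta_t = 10.0                                  # assumed coolant temperature rise, degC
cp = {"water": 4190.0, "mineral oil": 1670.0}   # J/(kg*degC), from the post

for fluid, c in cp.items():
    m_dot = heat_w / (c * delta_t)              # kg/s
    print(f"{fluid:12s}: {m_dot:.2f} kg/s for {heat_w/1000:.0f} kW at dT = {delta_t:.0f} C")
# Oil needs roughly 2.5x the mass flow of water to move the same heat.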

Sun has some good concepts going with its green datacenter initiative.
Their approach of using extremely scalable power and cooling
distribution systems that are customizable at the rack level allows for
a wide variety of densities and configurations throughout the room.
Check out the tour at this link:

http://www.sun.com/aboutsun/environment/green/datacenter.jsp

--
Patrick Shoemaker
President, Vector Data Systems LLC
***@vectordatasystems.com
office: (301) 358-1690 x3
mobile: (410) 991-579
http://www.vectordatasystems.com
Robert Boyle
2008-03-26 15:03:06 UTC
Permalink
At 10:15 AM 3/26/2008, Lamar Owen wrote:
>One thing I haven't seen discussed, though, is the other big issue with
>high-density equipment, and that is weight.

>Those raised floors have a weight limit. In our case, our floors, built out
>in the early 90's, have a 1500 lb per square inch point load rating, and a
>7,000 pound per pedestal max weight. The static load rating of 300 pounds
>per square foot on top of the point load rating doesn't sound too great, but
>it's ok; we just have to be careful. Our floors are concrete-in-steel, on 24
>inch pedestals, with stringers.

I don't know about others, but we don't use raised floors. If you
look at the airflow required and how high your raised floor actually
has to be (5-6 ft) in our case, it simply doesn't make sense. We use
doors at the ends of aisles, blanking panels, and a lexan cover over
all aisles. We sequester all air and force the air to flow through
the equipment. This typically cuts energy used for cooling roughly by
30-45%. We have seen dual 20 ton Lieberts used for a double row
(typically 20-22 racks per row) actually cycle on and off once air is
no longer allowed to mix. We typically will also use two Challenger
3000 5 ton units in the middle of the row for a total of 50 tons of
cooling and about 150KW of electrical use for 35-40 cabinets. That is
a mix of some cabinets with fewer servers and some with high density
10 slot dual quad core blade chassis units. We also like to build our
datacenters on 8-12" slabs at or slightly above ground level so we
don't really need to worry about weight loads either. Not possible if
you are on the 20th floor of headquarters, but something to consider
when talking about greenfield datacenter development.
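As a sanity check on those numbers, converting the quoted 150 kW of electrical load into tons of refrigeration (1 ton = 3.517 kW) shows how much headroom the 50 installed tons leave; the kW and tonnage figures are from the post, the conversion factor is standard.

# 150 kW of IT load expressed in tons of refrigeration vs. installed cooling.
electrical_kw = 150.0    # electrical load quoted in the post
cooling_tons = 50.0      # installed cooling quoted in the post
kw_per_ton = 3.517       # 1 ton of refrigeration in kW

heat_tons = electrical_kw / kw_per_ton
print(f"{electrical_kw:.0f} kW of IT load ~ {heat_tons:.1f} tons of heat")
print(f"installed cooling: {cooling_tons:.0f} tons "
      f"(~{cooling_tons / heat_tons:.2f}x the steady-state load)")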

-Robert


Tellurian Networks - Global Hosting Solutions Since 199
http://www.tellurian.com | 888-TELLURIAN | 973-300-921
"Well done is better than well said." - Benjamin Frankli
Martin Hannigan
2008-03-29 03:58:43 UTC
Permalink
On Sat, Mar 22, 2008 at 11:19 PM, Edward B. DREGER
<eddy+public+***@noc.everquick.net> wrote:


[ clip ]


> PG> (We are not worried about cooling due to the special circumstances
> PG> of the space.

> ixp.aq? ;-)


I'm not worried about cooling either.

http://www.businessweek.com/magazine/content/08_13/b4077060400752.htm?campaig

-M
vijay gill
2008-03-31 05:27:22 UTC
Permalink
MARLON BORBA
2008-03-31 13:11:04 UTC
Permalink
Do not forget physical security (including, but not limited to, access
control & surveillance -- different logs, videos, and people to control),
local/municipal/state laws and regulations (e.g. fire control standards),
personnel to manage all those sites (even third-party)... IMHO too much
administrative burden. :-)


Regards,

Marlon Borba, CISSP, APC DataCenter Associate
Judiciary Technician - Information Security
TRF 3 Região
(11) 3012-168
--
Practically no IT system is risk-free.
(NIST Special Publication 800-30)
--
>>> "vijay gill" <***@vijaygill.com> 31/03/08 2:27 >>>
[...]
On Sun, Mar 23, 2008 at 2:15 PM, <***@bt.com> wrote:


> Given that power and HVAC are such key issues in building
> big datacenters, and that fiber to the office is now a reality
> virtually everywhere, one wonders why someone doesn't start
> building out distributed data centers. Essentially, you put
> mini data centers in every office building, possibly by
> outsourcing the enterprise data centers. Then, you have a
> more tractable power and HVAC problem. You still need to
> scale things but since each data center is roughly comparable
> in size it is a lot easier than trying to build out one
> big data center.


Latency matters. Also, multiple small data centers will be more expensive
than a few big ones, especially if you are planning on average load vs.
peak load heat rejection models.

[...]
Robert Boyle
2008-04-04 00:17:39 UTC
Permalink
At 03:50 PM 4/3/2008, Derek J. Balling wrote:
>So your theoretical maximum draw is NOT "1/2 the total"... in a nicely
>populated chassis it will draw more than 1/2 the total and complain
>the whole time about it.

That should probably have read "in a well designed and fully populated
chassis"... I personally know for a fact that the Dell blade chassis
can be fully loaded and operate with only two of four power supplies
on the old 10 slot chassis, and 3 of 6 in the new 16
slot chassis when fully loaded. HP also claims the C7000 chassis is
fully redundant with only 3 of 6 power supplies. This is true for all
configurations I have ever seen.
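A small sketch of the arithmetic behind those redundancy claims; the 6-installed/3-required supply counts are from the posts above, while the per-supply wattage is an assumed example.

# N+N redundancy check for a blade chassis power shelf.
supply_w = 2250.0   # hypothetical rating of one chassis power supply
installed = 6       # supplies installed in the chassis (from the post)
required = 3        # supplies needed to carry a fully loaded chassis (from the post)

max_chassis_load_w = required * supply_w             # load the chassis may draw
spare_capacity_w = (installed - required) * supply_w # capacity held in reserve
print(f"max supported chassis load: {max_chassis_load_w/1000:.2f} kW")
print(f"capacity held in reserve:   {spare_capacity_w/1000:.2f} kW (N+N)")

The point is that the chassis load is capped at what half the supplies can deliver, so losing any three of the six still leaves full capacity; sizing the rack feed for the sum of all six nameplates would overstate the real draw.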

-Robert


Tellurian Networks - Global Hosting Solutions Since 199
http://www.tellurian.com | 888-TELLURIAN | 973-300-921
"Well done is better than well said." - Benjamin Frankli