Introduction

The first decade of the twenty-first century was one of incredible growth and change for data centers. The demand for computing and storage capacity exploded, and many IT organizations struggled to deploy servers fast enough to meet the needs of their businesses. At the same time, the trend to consolidate data centers and centralize computing resources resulted in fewer opportunities for planned downtime while also increasing the cost of unplanned outages.

Data center operators were able to meet the demand for increased compute capacity by deploying more powerful servers, often in the same physical space as the servers being displaced, creating a dramatic rise in data center power consumption and density. Between 2004 and 2009, power and heat density became top concerns among data center managers as they struggled to adapt to a 400 to 1,000 percent increase in rack density.

The dramatic increase in data center energy consumption created both financial and environmental challenges. Energy costs, which once had been relatively inconsequential to overall IT management, became more significant as the rise in consumption was exacerbated by a steady, and in some years significant, increase in the cost of electricity. In addition, increased awareness of the role that power generation plays in atmospheric carbon dioxide levels prompted the U.S. EPA to investigate large energy consumers such as data centers. In 2007 the EPA presented a report to the U.S. Congress that included recommendations for reducing data center energy consumption.
The industry responded with a new focus on energy efficiency and began implementing server virtualization, higher-efficiency server power supplies, and new approaches to cooling. Yet, while significant progress has been made in some areas, the critical power system has yet to be fully optimized. While individual components have been improved, the overall system remains complex, which can create inefficiency and add operational risk. Faced with a choice between increasing system efficiency and adding risk, many operators continue to choose proven approaches that deliver high availability but do not deliver the highest efficiency.
However, a close examination of the available options reveals that, in many cases, efficiency can be improved without sacrificing overall availability.
Established Data Center Power Distribution Options
Traditional AC power distribution systems in North America bring 480V AC power into a UPS, where it is converted to DC to charge batteries and then inverted back to AC. The power is then stepped down to 208V within the power distribution unit (PDU) for delivery to the IT equipment. The power supplies in the IT equipment convert the power back to DC and step it down to the lower voltages consumed by processors, memory, and storage [Figure 1].
Figure 1. Typical 480V AC to 208V AC data center power system configuration.

Figure 2. In eco-mode, incoming power bypasses the inverter to increase UPS system efficiency.

Figure 3. 480V AC to 277V AC data center power system configuration.

Figure 4. -48V DC power as typically implemented in telecommunications central offices.

Figure 5. A row-based DC UPS minimizes the amount of copper and floor space required for installation in the data center.