Specifications

At the core of cloud computing is the
ability of the underlying compute,
network, and storage infrastructure to act
as an efficient, shared resource pool that
is dynamically scalable within one data
center or across multiple data centers.
With this foundation, critical higher-level
capabilities such as energy management,
guaranteed quality of service, federation,
and data center automation are made
possible. Intel, along with leaders in
software, works to address these new
core innovations in Infrastructure as a
Service (IaaS). Intel has initiated a program
to help enterprises and service providers
rapidly establish best practices around
design (including reference architectures),
deployment, and management. For
enterprise IT and cloud service providers
who need to utilize their existing data
center infrastructure to supply cloud
services to their customers, this guide, as
part of the Intel® Cloud Builders initiative,
provides a comprehensive solution
overview that covers technical planning
and deployment considerations.
While server performance-per-watt
continues to increase, the energy
consumed per server also continues
to rise. These advancements enable an
increasing number of servers and greater
density in modern data centers, making
planning and managing power and
cooling resources critically important to
ensure efficient utilization of provisioned
capacity. To realize the vision of
cloud computing, new technologies are
needed to address power efficiency and
energy management. These will become
fundamental to architectures from the
microprocessor up through the
application stack. The focus of this paper
is energy management and the related
usage models.
According to the Environmental Protection
Agency's report to Congress, in
2006 data centers in the US consumed
about 1.5 percent of the nation's energy
and were poised to double this by 2011.
If storage, network, and computing
resources continue to grow at their
predicted rate, new power-efficient usage
models will be required. Higher server
utilization, better throughput for network
and storage traffic, as well as storage
optimized by data type and needs, are
a few ways to maximize the existing
resources to achieve efficiency.
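The savings from higher server utilization can be illustrated with a back-of-the-envelope sketch. The power model and all figures below are illustrative assumptions, not measurements from this paper: it assumes each server draws a fixed idle baseline plus dynamic power that scales linearly with utilization.

```python
# Illustrative sketch (assumed figures): the same aggregate workload on
# fewer, busier servers draws less total power, because idle servers
# still consume a large fixed baseline.

IDLE_W, PEAK_W = 150, 400  # assumed per-server idle and peak draw, in watts

def fleet_power(servers, utilization):
    """Linear power model: idle baseline plus utilization-scaled dynamic power."""
    per_server = IDLE_W + utilization * (PEAK_W - IDLE_W)
    return servers * per_server

# 100 servers at 20% utilization vs. the same work consolidated
# onto 25 servers at 80% utilization.
print(fleet_power(100, 0.20))  # 20000.0 W
print(fleet_power(25, 0.80))   # 8750.0 W
```

Under these assumptions, consolidation cuts fleet power by more than half even though total useful work is unchanged, which is why utilization appears first in the list of efficiency levers above.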
Companies continue to explore
approaches that focus on using existing
data center power more efciently to
increase computing capacity, cut power
costs, and reduce carbon footprint.
Traditionally, organizations have lacked
detailed information about actual server
power consumption in everyday use.
Typically, data center computing capacity
has been based on nameplate power, peak
server power consumption, or derated
power loads. In practice, however, actual
power consumption with real data center
workloads is much lower than these ratings.
This situation results in over-provisioned
data center cooling and power capacity,
and increased TCO. Better understanding
of, and control over, server power
consumption allows for more efficient
use of existing data center facilities. All of
this, applied across tens of thousands of
servers, can result in considerable savings.
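The over-provisioning effect is easy to quantify. The following sketch uses assumed example wattages (not vendor ratings) to contrast how many servers fit in a fixed rack power budget when capacity is sized by nameplate power versus measured average draw:

```python
# Hypothetical illustration: provisioning on nameplate power overstates
# demand compared with measured consumption. All figures are assumed
# example values, not vendor specifications.

NAMEPLATE_W = 750        # label rating per server (assumed)
MEASURED_AVG_W = 320     # observed average draw under real workloads (assumed)
RACK_BUDGET_W = 12_000   # power allocated to one rack (assumed)

servers_by_nameplate = RACK_BUDGET_W // NAMEPLATE_W
servers_by_measurement = RACK_BUDGET_W // MEASURED_AVG_W

print(servers_by_nameplate)    # 16 servers if sized by the label rating
print(servers_by_measurement)  # 37 servers if sized by measured draw
```

With these assumed numbers, measurement-based provisioning more than doubles the usable capacity of the same rack, which is the kind of gain that becomes considerable across tens of thousands of servers.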
This paper begins with an overview of
server power management and the solutions
offered by Dell and EDCM. We then
describe various usage models in detail,
covering the test cases executed and
their results, with screenshots of the
configuration and test process. Finally, we
describe the architectural considerations to
be taken into account.
Server Power Management
Power consumption was long an
afterthought in server deployment
in data centers, and unfortunately this view
persists. For example, in many facilities
the utility bill is bundled with the overall
building charge, which reduces the
visibility of the data center's energy cost.
Even though servers have become much
more efficient, packaging densities and
power draw have increased much faster. As a
result, power and its associated thermal
characteristics have become dominant
components of operational costs. Power
and thermal challenges in data centers
include:
• Increased total operational costs due to
increased power and cooling demands
• Physical limitations of cooling and
power within individual servers, racks,
and data center facilities
• Lack of visibility into the actual real-time
power consumption of servers and
racks
• Complexity of management components
and sub-systems from multiple vendors
with incompatible interfaces and
management applications
These data center management challenges
translate into the following
requirements:
• Power monitoring and capping
capabilities at all levels of the data
center (system, rack, and
data center). What can be done at an
individual server level becomes much
more compelling once physical or virtual
servers are scaled up significantly.
• Aggregation of the power consumed
at the rack level, and management of
power within a rack group to ensure
that the total power does not exceed
the power allocated to the rack.
• Higher-level aggregation and control at
the row or data center level to manage
the power budget within the available
power and cooling resources.
• Optimization of productivity per watt
through management of power at the
server, rack, row, and data center levels
to optimize TCO.
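The rack-level aggregation and capping requirement above can be sketched in a few lines. This is a minimal illustration of the idea, not code from the Dell, Intel, or ZZNode solution; the function name, server names, and wattages are all assumptions. It sums per-server power readings and, when the rack exceeds its allocation, scales every server's cap by the same factor so the rack total lands on budget.

```python
# Minimal sketch (assumed names and figures) of rack-level power
# aggregation and proportional capping; not the vendors' implementation.

def rebalance_caps(readings_w, rack_budget_w):
    """Return per-server power caps that keep the rack within its budget."""
    total = sum(readings_w.values())
    if total <= rack_budget_w:
        # Under budget: no capping needed; allow current draw.
        return dict(readings_w)
    # Over budget: scale every server's cap by the same factor so the
    # rack total lands on the allocation.
    scale = rack_budget_w / total
    return {srv: round(watts * scale, 1) for srv, watts in readings_w.items()}

readings = {"srv-01": 410.0, "srv-02": 385.0, "srv-03": 430.0}  # assumed telemetry
caps = rebalance_caps(readings, rack_budget_w=1100.0)
print(caps)
print(sum(caps.values()))  # total held to the 1100 W rack allocation
```

The same pattern composes upward: rack totals can be aggregated and rebalanced at the row level, and row totals at the data center level, which is how the hierarchy of requirements above fits together.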
Intel® Cloud Builders Guide: Data Center Energy Management with Dell, Intel, and ZZNode