Datasheet

Reliable, tested configurations
Standard configurations of the Cluster 1350 include compute nodes, at least one management node (a redundant management node for failover is optional), and up to 64 storage nodes for a maximum of 1,024 managed nodes. Clients who require configurations larger than a total of 1,026 nodes (1,024 managed nodes and up to two management nodes), or components not included in standard Cluster 1350 configurations, can use a special bid process to request support for these custom configurations. All of these larger configurations use standard 42U racks.

Smaller cluster environments may use the Cluster 1350 25U racks, which allow clients to optimize the size and affordability of their cluster to meet specific application needs. For example, clients with database, business intelligence (BI) and general SMB (small and medium business) applications will find that these smaller racks enable extremely cost-effective solutions for the smaller cluster configurations normally required in these environments.

Cluster 1350 clients may choose from a broad variety of cluster interconnect technologies from several of the industry’s leading network switch and adapter vendors to meet the specific performance needs of their cluster application environment. These choices span the full range of high-performance networking technologies, including Gigabit Ethernet, InfiniBand and Myrinet switches and adapters. In addition, each Cluster 1350 includes a management Ethernet VLAN for highly secure internode communication.

Expanding possibilities
The Cluster 1350 also offers clients the opportunity to take advantage of the IBM General Parallel File System (GPFS) for Linux to expand and enhance their high-performance cluster data storage. GPFS is a high-performance, scalable, shared-disk file system that provides fast data access from all nodes in a Linux cluster and NFS export capabilities outside the cluster. Parallel applications running across multiple nodes of the cluster, as well as serial applications running on a single node, can readily access shared files using standard UNIX file system interfaces. Furthermore, GPFS can be configured for failover from both HDD and server malfunctions.

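Because GPFS presents a standard UNIX file system interface, applications need no special I/O library to use it. The following minimal sketch in C reads a shared file with ordinary POSIX calls; the /gpfs/project/input.dat path is only an illustrative assumption, not part of any Cluster 1350 configuration, and the same code runs unchanged on any node that mounts the file system.

    /* Minimal sketch: read a file on an assumed GPFS mount point
     * using only standard POSIX calls. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        long long total = 0;

        /* Every node that mounts the file system sees the same
         * namespace, so this (illustrative) path is valid cluster-wide. */
        int fd = open("/gpfs/project/input.dat", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Ordinary read loop; no GPFS-specific API is needed. */
        while ((n = read(fd, buf, sizeof buf)) > 0)
            total += n;
        if (n < 0)
            perror("read");

        printf("read %lld bytes from the shared file\n", total);
        close(fd);
        return 0;
    }
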
In addition, the Cluster 1350 incorporates IBM TotalStorage® DS4000 technology to provide highly reliable data storage for business-critical applications that require high-speed transfer of large amounts of data. Optional cluster components include the IBM TotalStorage DS4100, DS4300, DS4300 Turbo and DS4500 Storage Servers, along with the DS4000 EXP100 and the EXP400 Ultra320 SCSI Storage Expansion Units. The TotalStorage SAN 16B-2 switch is also now available to provide a robust SAN solution for cluster applications.

Summary
Creating a computing infrastructure is an exercise in balancing price and performance to deliver the appropriate solution for each client’s specific business needs.

For many high-performance workloads, the most advantageous solution is clustering. Harnessing the power of multiple servers in parallel allows for the management and resolution of computationally intense problems with an excellent price/performance ratio.

Clustering is also an excellent approach for consolidating multiple workloads, thereby enhancing manageability and availability. In addition, the advent of Linux as a platform for building powerful clustered systems offers clients access to the growing knowledge base and expert contributions of the open source community.

The IBM Cluster 1350 is a comprehensive solution that can help simplify and expedite deployment of a Linux cluster. IBM combines all hardware, software and services into a single product offering, giving clients the benefit of a single point of contact for the entire cluster rather than dealing with multiple vendors for individual components.

The Cluster 1350 is the solution of choice for any organization that recognizes the economic advantages of deploying a Linux cluster, but has concerns about the time and technical resources required for the end-to-end implementation.