CORPORATE STYLE GUIDE
Mellanox Technologies | July 2011

Documents and Marketing Collateral
SOLUTION BRIEFS

Layout specifications (see also the Product Brief and Case Study specs):
– Header: .125" top corner radius; inset .625 pt
– 01 Key Features/Apps heading: Bell Gothic Std Bold, 10 pt, CAPS, white, aligned left, on a 10% black box; inset .08 pt
– 01 Key Features/Apps body text: Univers LT 47 CondensedLT, 9 pt/11, aligned left
– Column measurements: .875", 1.125", 2.125"
– Sidebar space: can be used for logos or product images (e.g., a Solution Partner logo)
SOLUTION BRIEF
Manufacturing: Bringing New Levels of Performance to CAE Applications

KEY ADVANTAGES
– The world's fastest interconnect, supporting up to 40Gb/s per adapter
– Latency as low as 1 microsecond
– Full CPU offload, with the flexibility of RDMA capabilities to remove traditional network protocol processing from the CPU and increase processor efficiency
– I/O CapEx reduction – one 40Gb/s Mellanox adapter carries more traffic with higher reliability than four 10 Gigabit Ethernet adapters

©2011 Mellanox Technologies. All rights reserved.
Computer Aided Engineering (CAE) is used to help manufacturers bring products to market faster while maintaining a high level of quality. The faster companies can conduct tests and perform product analysis, the bigger the benefits of using CAE. Advances in software and server hardware have set the stage for faster results, but manufacturers should not overlook a major performance-robbing bottleneck: the server interconnect. To gain dramatic improvements in CAE performance, manufacturing firms are turning to Mellanox's InfiniBand-based solutions to speed the movement of data between clustered servers.
The Manufacturer's Challenge
Bringing products to market fast while meeting quality requirements and adhering to safety standards has become a daunting challenge to manufacturers. To remain competitive, manufacturers must deliver products as fast as possible. But if quality suffers, customers won't return. If safety levels decline, significant recalls, lawsuits or harmful publicity could ensue.
This is why manufacturing companies rely so heavily on Computer Aided Engineering (CAE), which helps simulate production and product performance ahead of time. CAE allows problems to be corrected before products reach the production stage and end up in the hands of customers.

The challenge that has emerged today is how to run commercially available CAE software faster and with more accuracy. Many software vendors offer capable products, but bottlenecks commonly occur in the hardware that runs the simulations and analyzes the production processes.
Without sufficient computing power, these tests can take days or even weeks to run. And because the tests take longer to run, product-development teams are often forced to run fewer tests in order to meet tight timeframes and remain competitive. This can lead to inaccurate testing that often goes undetected until the products hit the assembly line, when the cost of making design changes grows exponentially.
Solving this application run-time challenge will allow manufacturers to analyze their processes in the fastest time possible and conduct more granular testing. This in turn will allow for quicker and more efficient production. A key element of running tests more rapidly is choosing the right hardware to run the CAE applications.
Today's Solution
Large symmetric multi-processing machines (SMPs) used to be the answer for generating compute power in the data center. However, these proprietary, expensive systems gave way to cluster and grid architectures consisting of low-cost commodity elements that offer comparable performance.
Some companies try to scale CAE clusters by adding more servers or moving to servers with multiple cores. This approach can work for smaller, simpler simulations, but the more complex the analysis, the more likely the need to run simulations across multiple servers, where latency is a major factor in determining performance.

The answer to speeding analysis and maximizing return on CAE investments is not simply buying more or bigger servers, but rather eliminating performance bottlenecks by employing a high-performance interconnect.
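To make the latency point concrete, here is a minimal back-of-the-envelope model in Python. All of the numbers (compute time, message counts, per-message latencies) are illustrative assumptions, not measurements from this brief; the point is only that per-message latency compounds as a simulation is spread across more nodes.

    # Toy model: fixed work divided across nodes, plus per-message
    # interconnect latency that grows with node count. All numbers are
    # illustrative assumptions, not measurements from this brief.

    def estimated_speedup(nodes, compute_time_s=100.0,
                          msgs_per_node=1000, latency_us=10.0):
        comm_time_s = msgs_per_node * nodes * latency_us * 1e-6
        return compute_time_s / (compute_time_s / nodes + comm_time_s)

    for latency_us, fabric in [(10.0, "GbE-class (~10 us)"),
                               (2.0, "InfiniBand-class (<2 us)")]:
        speedups = [round(estimated_speedup(n, latency_us=latency_us), 1)
                    for n in (8, 16, 32, 64)]
        print(fabric, speedups)

Under these assumptions the two fabrics scale almost identically at 8 nodes but diverge sharply by 64, which is the pattern Figure 1 illustrates.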
Because of the ready availability of Ethernet, many of today's clusters are built with Ethernet as the interconnect. While Gigabit Ethernet-based clustering is cheaper than SMP-based architectures, it can be very inefficient. For applications that rely on bandwidth or memory sharing, efficiency is a concern: a significant percentage of each server's CPU can be consumed by communications overhead.

Today, many manufacturers run CAE systems that take advantage of InfiniBand-based high-speed interconnect support incorporated into CAE software. Mellanox InfiniBand interconnects eliminate I/O bottlenecks, allowing applications to run faster and more efficiently.
[Figure 1: parallel speedup vs. number of cores (0–70), comparing GbE and InfiniBand clusters against linear scaling. At the right-most data points, InfiniBand reaches a speedup of 46.59 (75% efficiency) versus 33.00 (51% efficiency) for GbE.]
Figure 1. Mellanox InfiniBand-based solution improves performance by 50%
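For reference, the speedup and efficiency values in Figure 1 are related by the standard definitions below. The short sketch also backs out the approximate core counts of the right-most data points; those counts are inferred, not stated in the brief.

    # Standard definitions used in Figure 1:
    #   speedup(N)    = T(1 core) / T(N cores)
    #   efficiency(N) = speedup(N) / N
    # Back out the approximate core counts implied by the figure's
    # reported (speedup, efficiency) pairs; these are inferences,
    # not numbers stated in the brief.

    def implied_cores(speedup, efficiency):
        return speedup / efficiency

    print(implied_cores(46.59, 0.75))  # ~62 cores (InfiniBand point)
    print(implied_cores(33.00, 0.51))  # ~65 cores (GbE point)
    print(46.59 / 33.00)               # ~1.41x InfiniBand speedup advantage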
A Better Way
Mellanox InfiniBand solutions help CAE applications run faster. Mellanox offers high-performance (10, 20 and 40 Gb/s), low-latency (<2 microseconds) interconnect solutions for CAE applications. Benchmark testing has found that Mellanox interconnect solutions reduce CAE run times by as much as 50 percent.
In addition to offering InfiniBand switch technology, Mellanox works directly with CAE software vendors to create the most efficient, fastest, and lowest-latency solutions in the industry. By combining the leading CAE software with Mellanox InfiniBand-based solutions, manufacturing organizations can now analyze products faster and more efficiently to gain a clear competitive advantage.
As today's price and performance leader in the industry, Mellanox builds its solutions using standards-based InfiniBand technology. InfiniBand is an industry-standard interconnect for high-performance computing (HPC) and enterprise applications. The combination of high bandwidth, low latency and scalability makes InfiniBand the interconnect of choice to power many of the world's largest and fastest computer systems and commercial data centers. Mellanox solutions support most major server vendors, operating systems, storage solutions and chip manufacturers.
                      1 Gb Ethernet   10 Gb Ethernet   Myrinet       InfiniBand
Bandwidth             1 Gb/s          10 Gb/s          2.5 Gb/s      10, 20 & 40 Gb/s
Latency               ~10 us          (no entry)       2.5–5.5 us    <2 us
Average efficiency    53%             (no entry)       68%           74%
Price per Gb/port     ~$350.00        >$700.00         ~$225.00      <$100.00

Table 1. Price/performance advantages for InfiniBand
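A quick way to read Table 1 is cost per node at a given bandwidth target. The sketch below multiplies each fabric's price per Gb per port by an assumed 20 Gb/s per-node requirement; the target is an assumption for illustration, and the prices are the table's approximate 2011 figures, with InfiniBand's "<$100" treated as an upper bound.

    # Cost-per-node comparison derived from Table 1's price-per-Gb/port
    # column. The 20 Gb/s per-node target is an assumption; prices are
    # approximate 2011 figures from the table.

    price_per_gb_port = {
        "1 Gb Ethernet": 350.00,
        "10 Gb Ethernet": 700.00,
        "Myrinet": 225.00,
        "InfiniBand": 100.00,   # table lists "<$100"; upper bound used here
    }

    target_gbps = 20  # assumed per-node bandwidth requirement

    for fabric, price in price_per_gb_port.items():
        print(f"{fabric}: ~${price * target_gbps:,.0f} per node at {target_gbps} Gb/s")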
Building CAE Clusters
Mellanox offers complete end-to-end server interconnect solutions for speeding CAE applications. The two major elements of the solution are:
• High-speed, low-latency InfiniBand switches
• Fast storage connectivity
High-performance InfiniBand Switches
Mellanox's InfiniBand-based solutions deliver high performance and scalability to compute clusters. Mellanox offers a complete portfolio of products including a scalable line of InfiniBand switches, high-performance I/O gateways (for seamless connectivity to Ethernet and Fibre Channel networks) and fabric management software. Mellanox solutions use the OpenFabrics Alliance's OFED drivers and Message Passing Interface (MPI) libraries such as Open MPI to optimize application performance for both MPI-based and socket-based applications.
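As a concrete illustration of the MPI-based applications mentioned above, the sketch below is a classic ping-pong latency microbenchmark written with mpi4py. It is a generic example, not the benchmark behind this brief's figures, and the choice of mpi4py is an assumption; production CAE codes typically call an MPI library from C or Fortran directly.

    # Minimal MPI ping-pong between ranks 0 and 1: the standard pattern
    # for measuring one-way interconnect latency with small messages.
    # Run with, e.g.: mpirun -np 2 python pingpong.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    iters = 10000
    buf = bytearray(8)  # tiny message: latency-bound, not bandwidth-bound

    comm.Barrier()
    start = time.perf_counter()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # Each iteration is a round trip, so one-way latency is half.
        print(f"approx. one-way latency: {elapsed / iters / 2 * 1e6:.2f} us")

On a latency-bound exchange like this, the gap between ~10 us (GbE-class) and <2 us (InfiniBand) fabrics shows up directly in the measured time per message.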
For small-to-medium sized clusters, Mellanox offers the Mellanox Grid Director™ 9024. It is a 1U device with twenty-four 10 Gb/s (SDR) or 20 Gb/s (DDR) InfiniBand ports. The switch is a high-performance, low-latency, fully non-blocking edge or leaf switch with a throughput of 480 Gb/s (24 ports × 20 Gb/s).
Figure 2. Mellanox Grid Director 9024 for small-to-medium sized clusters ranging from 16 to 24 nodes.
It is well suited for small InfiniBand fabrics with up to 24 nodes because it includes all of the necessary management capabilities to function as a stand-alone switch. The Grid Director 9024 is internally managed and offers comprehensive device and fabric management capabilities. Designed for high availability (high MTBF) and easy maintenance, the switch is simple to install and features straightforward initialization. The solution is scalable, as additional switches can be added to support additional nodes.
For larger clusters ranging from 25 to 96 compute nodes, Mellanox offers the Grid Director™ 2004 multi-service switch, the industry's highest-performing multi-service switch for medium-to-large clusters and grids. The switch enables high-performance non-blocking configurations and features an enterprise-level, high-availability design. The Grid Director 2004 supports up to 96 InfiniBand 4X ports (20 Gb/s) and is scalable through the use of additional, hot-swappable modules. The Grid Director 2004 also features 10 GbE and Fibre Channel ports, so the solution can provide high-performance, integrated SAN and LAN connectivity.
Some Solution Briefs contain a Key Advantages box. This element can be found in the InDesign library file.