Datasheet

CORPORATE STYLE GUIDE
Mellanox Technologies | July 2011
Documents and Marketing Collateral
PRODUCT BROCHURES
Mellanox Product Brochures are 10” x 6” horizontal format. They can be set up in a 2- or 3-panel configuration
and should follow the color designations described on page 12.
2-PANEL CONFIGURATION: 4, 8, 12, OR 16 PAGES
Saddle Stitched
3-PANEL CONFIGURATION: 6 PAGES
Folded
Performance Accelerated
Mellanox InfiniBand Adapters Provide
Advanced Levels of Data Center IT Performance,
Efficiency and Scalability
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. FabricIT, MLNX-OS and SwitchX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400
Fax: 408-970-3403
www.mellanox.com
FEATURE SUMMARY
INFINIBAND
IBTA Specification 1.2.1 compliant
10, 20, 40, or 56Gb/s per port
RDMA, Send/Receive semantics
Hardware-based congestion control
16 million I/O channels
9 virtual lanes: 8 data + 1 management
ENHANCED INFINIBAND
Hardware-based reliable transport
Collective operations ofoads
GPU communication acceleration
Hardware-based reliable multicast
Extended Reliable Connected transport
Enhanced Atomic operations
HARDWARE-BASED I/O VIRTUALIZATION
Single Root IOV
Address translation and protection
Multiple queues per virtual machine
VMware NetQueue support
ADDITIONAL CPU OFFLOADS
RDMA over Converged Ethernet
TCP/UDP/IP stateless ofoad
Intelligent interrupt coalescence
STORAGE SUPPORT
Fibre Channel over InniBand or Ethernet
FLEXBOOT™ TECHNOLOGY
Remote boot over InniBand
Remote boot over Ethernet
Remote boot over iSCSI
COMPLIANCE
SAFETY
USA/Canada: cTUVus UL
EU: IEC60950
Germany: TUV/GS
International: CB Scheme
EMC (EMISSIONS)
USA: FCC, Class A
Canada: ICES, Class A
EU: EN55022, Class A
EU: EN55024, Class A
EU: EN61000-3-2, Class A
EU: EN61000-3-3, Class A
Japan: VCCI, Class A
Australia: C-Tick
Korea: KCC
ENVIRONMENTAL
EU: IEC 60068-2-64: Random Vibration
EU: IEC 60068-2-29: Shocks, Type I / II
EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
Operating temperature: 0°C to 55°C
Airflow: 100LFM @ 55°C
Requires 3.3V, 12V supplies
COMPATIBILITY
CONNECTIVITY
Interoperable with InfiniBand or 10GigE switches
microGiGaCN or QSFP connectors
20m+ (10Gb/s), 10m+ (20Gb/s), 7m+ (40Gb/s), or 5m+ (56Gb/s) of passive copper cable
External optical media adapter and active
cable support
Quad to Serial Adapter (QSA) module,
connectivity from QSFP to SFP+
OPERATING SYSTEMS/DISTRIBUTIONS
Novell SLES, Red Hat Enterprise Linux (RHEL),
Fedora, and other Linux distributions
Microsoft Windows Server 2008/CCS 2003,
HPC Server 2008
OpenFabrics Enterprise Distribution (OFED)
OpenFabrics Windows Distribution (WinOF)
VMware ESX Server 3.5, vSphere 4.0/4.1
PROTOCOL SUPPORT
Open MPI, OSU MVAPICH, Intel MPI, MS MPI,
Platform MPI
TCP/UDP, EoIB, IPoIB, SDP, RDS
SRP, iSER, NFS RDMA, FCoIB, FCoE
uDAPL
Mellanox continues its leadership in providing InfiniBand Host Channel Adapters (HCA), the highest performance interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments.
VALUE PROPOSITIONS
High-Performance Computing needs high bandwidth, low latency, and CPU offloads to get the highest server efficiency and application productivity.
Mellanox HCAs deliver the highest bandwidth and lowest latency of any standard interconnect, enabling CPU efficiencies of greater than 95%.
Data centers and cloud computing require I/O services such as bandwidth, consolidation and unification, and flexibility. Mellanox’s HCAs support LAN and SAN traffic consolidation and provide hardware acceleration for server virtualization.
Virtual Protocol Interconnect™ (VPI) flexibility offers InfiniBand, Ethernet, Data Center Bridging, EoIB, FCoIB and FCoE connectivity.
World-Class Performance
Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement overhead, such as RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention. Application acceleration and GPU communication acceleration bring further levels of performance improvement. Mellanox InfiniBand adapters’ advanced acceleration technology enables higher cluster efficiency and large scalability to tens of thousands of nodes.
I/O Virtualization
Mellanox adapters, utilizing Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV, provide dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.
Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support SCSI, iSCSI, NFS and FCoIB protocols.
Software Support
All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. The adapters support OpenFabrics-based RDMA protocols and software, and the stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are compatible with configuration and management tools from OEMs and operating system vendors.
Virtual Protocol Interconnect
VPI® flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over InfiniBand (EoIB) and Fibre Channel over InfiniBand (FCoIB) as well as Fibre Channel over Ethernet (FCoE) and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.
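As an example of per-port protocol selection (a sketch based on the Linux mlx4 driver's per-port sysfs interface, with a placeholder PCI address; not an excerpt from this datasheet), a port can be switched between its InfiniBand and Ethernet personalities at runtime:

/* Sketch: set port 1 of a ConnectX adapter to Ethernet and port 2 to
 * InfiniBand using the mlx4 driver's per-port sysfs files.
 * PCI address is a placeholder; requires root. */
#include <stdio.h>

static int set_port(const char *attr, const char *mode)
{
    char path[128];
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/0000:03:00.0/%s", attr);
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", mode);
    return fclose(f);
}

int main(void)
{
    /* Accepted values for mlx4 ports are "ib", "eth", or "auto". */
    if (set_port("mlx4_port1", "eth") || set_port("mlx4_port2", "ib"))
        return 1;
    return 0;
}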
ConnectX-3
Mellanox’s industry-leading ConnectX-3 InfiniBand adapters provide the highest performing and most flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s throughput across the PCI Express 3.0 host bus, enables the fastest transaction latency, less than 1usec, and can deliver more than 90M MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for HPC or converged data centers operating a wide range of applications.
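Latency figures of this kind are typically measured with an MPI ping-pong microbenchmark run over one of the MPI implementations listed under Compatibility. The sketch below is a generic example, not the benchmark behind the numbers above; it reports half the average round-trip time for small messages between two ranks.

/* Minimal MPI ping-pong latency sketch; run with two ranks, e.g.
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[8] = {0};          /* small message, latency-dominated */
    const int iters = 10000;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)              /* one-way latency = round trip / 2 */
        printf("avg one-way latency: %.2f usec\n",
               (t1 - t0) / iters / 2 * 1e6);

    MPI_Finalize();
    return 0;
}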
Complete End-to-End 56Gb/s InfiniBand Networking
ConnectX-3 adapters are part of Mellanox’s full FDR 56Gb/s InfiniBand end-to-end portfolio for data centers and high-performance computing systems, which includes switches, application acceleration packages, and cables. Mellanox’s SwitchX family of FDR InfiniBand switches and Unified Fabric Management software incorporate advanced tools that simplify network management and installation, and provide the capabilities needed for the highest scalability and future growth. Mellanox’s collectives, messaging, and storage acceleration packages deliver additional capabilities for the ultimate server performance, and the line of FDR copper and fiber cables ensures the highest interconnect performance. With Mellanox end to end, IT managers can be assured of the highest performance, most efficient network fabric.
BENEFITS
World-class cluster performance
High-performance networking and storage access
Efficient use of compute resources
Guaranteed bandwidth and low-latency services
Reliable transport
I/O unification
Virtualization acceleration
Scales to tens of thousands of nodes
TARGET APPLICATIONS
High-performance parallelized computing
Data center virtualization
Clustered database applications, parallel RDBMS
queries, high-throughput data warehousing
Latency-sensitive applications such as financial analysis and trading
Web 2.0, cloud and grid computing data centers
Performance storage applications such as backup,
restore, mirroring, etc.
Mellanox InniBand Host
Channel Adapters (HCA) provide
the highest performing interconnect
solution for Enterprise Data
Centers, Web 2.0, Cloud Computing,
High-Performance Computing,
and embedded environments.
Clustered databases, parallelized
applications, transactional services
and high-performance embedded I/O
applications will achieve signicant
performance improvements resulting
in reduced completion time and lower
cost per operation.
Ports: 1x20Gb/s | 2x20Gb/s | 1x20Gb/s, 1x40Gb/s | 2x20Gb/s, 2x40Gb/s | 1x40Gb/s, 1x10Gb/s
Connector: microGiGaCN | microGiGaCN | QSFP | QSFP | QSFP, SFP+
Host Bus: PCI Express 2.0
Features: VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload
OS Support: RHEL, SLES, Windows, ESX
Ordering Number: MHGH19B-XTR | MHGH29B-XTR | MHRH19B-XTR, MHQH19B-XTR | MHRH29B-XTR, MHQH29C-XTR | MHZH29B-XTR
Ports: 1x40Gb/s, 1x56Gb/s | 2x40Gb/s, 2x56Gb/s | 1x56Gb/s, 1x40Gb/s
Connector: QSFP | QSFP | QSFP, SFP+
Host Bus: PCI Express 3.0
Features: VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload; Precision Time Protocol
OS Support: RHEL, SLES, Windows, ESX
Ordering Number: MHGH19B-XTR | MHGH29B-XTR | MHRH19B-XTR, MHQH19B-XTR
Brochure pages are set up on a 6-column grid starting at 0.5” from all page borders.