Datasheet
CORPORATE STYLE GUIDE
Mellanox Technologies | July 2011 | 21
Documents and Marketing Collateral
PRODUCT BROCHURES
Mellanox Product Brochures are 10” x 6” horizontal format. They can be set up in a 2- or 3-panel configuration
and should follow the color designations described on page 12.
2-PANEL CONFIGURATION: 4, 8, 12, OR 16 PAGES
Saddle Stitched
3-PANEL CONFIGURATION: 6 PAGES
Folded
Performance Accelerated
Mellanox InfiniBand Adapters Provide
Advanced Levels of Data Center IT Performance,
Efficiency and Scalability
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, PhyX, Virtual Protocol Interconnect and Voltaire are registered trademarks
of Mellanox Technologies, Ltd. FabricIT, MLNX-OS and SwitchX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property
of their respective owners.
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400
Fax: 408-970-3403
www.mellanox.com
INFINIBAND
– IBTA Specification 1.2.1 compliant
– 10, 20, 40, or 56Gb/s per port
– RDMA, Send/Receive semantics
– Hardware-based congestion control
– 16 million I/O channels
– 9 virtual lanes: 8 data + 1 management
ENHANCED INFINIBAND
– Hardware-based reliable transport
– Collective operations offloads
– GPU communication acceleration
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations
HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Multiple queues per virtual machine
– VMware NetQueue support
ADDITIONAL CPU OFFLOADS
– RDMA over Converged Ethernet
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence
STORAGE SUPPORT
– Fibre Channel over InfiniBand or Ethernet
FLEXBOOT™ TECHNOLOGY
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI
SAFETY
– USA/Canada: cTUVus UL
– EU: IEC60950
– Germany: TUV/GS
– International: CB Scheme
EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
– Australia: C-Tick
– Korea: KCC
ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
– Operating temperature: 0°C to 55°C
– Airflow: 100LFM @ 55°C
– Requires 3.3V, 12V supplies
CONNECTIVITY
– Interoperable with InfiniBand or 10GigE
switches
– microGiGaCN or QSFP connectors
– 20m+ (10Gb/s), 10m+ (20Gb/s), 7m+ (40Gb/s)
or 5m+ (56Gb/s) of passive copper cable
– External optical media adapter and active
cable support
– Quad to Serial Adapter (QSA) module,
connectivity from QSFP to SFP+
OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SLES, Red Hat Enterprise Linux (RHEL),
Fedora, and other Linux distributions
– Microsoft Windows Server 2008/CCS 2003,
HPC Server 2008
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESX Server 3.5, vSphere 4.0/4.1
PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, Intel MPI, MS MPI,
Platform MPI
– TCP/UDP, EoIB, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA, FCoIB, FCoE
– uDAPL
Feature Summary
COMPLIANCE
COMPATIBILITY
Mellanox continues its leadership by providing InfiniBand Host Channel Adapters (HCA) —
the highest performance interconnect solution for Enterprise Data Centers,
Web 2.0, Cloud Computing, High-Performance Computing,
and embedded environments.
VALUE PROPOSITIONS
■ High-Performance Computing needs high bandwidth, low latency, and CPU offloads to get the highest server efficiency and application productivity.
Mellanox HCAs deliver the highest bandwidth and lowest latency of any standard interconnect, enabling CPU efficiencies of greater than 95%.
■ Data centers and cloud computing require I/O services such as bandwidth, consolidation and unification, and flexibility. Mellanox’s HCAs support LAN
and SAN traffic consolidation and provide hardware acceleration for server virtualization.
■ Virtual Protocol Interconnect™ (VPI) flexibility offers InfiniBand, Ethernet, Data Center Bridging, EoIB, FCoIB and FCoE connectivity.
World-Class Performance
Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient
computing for performance-driven server and storage clustering applications. Network protocol processing
and data movement, including RDMA and Send/Receive semantics, are handled in the adapter
without CPU intervention. Application acceleration and GPU communication acceleration bring further
levels of performance improvement. Mellanox InfiniBand adapters’ advanced acceleration technology
enables higher cluster efficiency and scalability to tens of thousands of nodes.
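As an illustration of the offload model described above, the short C sketch below posts a single send through the OpenFabrics verbs interface (libibverbs): the buffer is registered once, and the work request is handed to the adapter, which moves the data without per-byte CPU involvement. This is a minimal sketch under our own assumptions, not code from the product: the helper name send_buffer is illustrative, the reliable-connected (RC) queue pair and completion queue are assumed to be already created and connected, and connection management and robust error handling are omitted.

    /* Minimal sketch: post one zero-copy send via libibverbs.
     * Assumes qp is an RC queue pair already transitioned to RTS
     * and cq is its send completion queue. */
    #include <stdint.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int send_buffer(struct ibv_pd *pd, struct ibv_qp *qp, struct ibv_cq *cq,
                    void *buf, uint32_t len)
    {
        /* Register the buffer so the HCA can DMA it directly. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
        if (!mr)
            return -1;

        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id      = 1;
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.opcode     = IBV_WR_SEND;
        wr.send_flags = IBV_SEND_SIGNALED;

        /* Hand the work request to the adapter; the HCA moves the data. */
        if (ibv_post_send(qp, &wr, &bad_wr)) {
            ibv_dereg_mr(mr);
            return -1;
        }

        /* Reap the completion; busy-polling is used here for brevity. */
        struct ibv_wc wc;
        while (ibv_poll_cq(cq, 1, &wc) == 0)
            ;
        ibv_dereg_mr(mr);
        return wc.status == IBV_WC_SUCCESS ? 0 : -1;
    }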
I/O Virtualization
Mellanox adapters utilizing Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV provide
dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the
server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and
SAN unification while reducing cost, power, and cable complexity.
Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over
multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-
performance storage access. Mellanox adapters support SCSI, iSCSI, NFS and FCoIB protocols.
Software Support
All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions,
VMware, and Citrix XenServer. The adapters support OpenFabrics-based RDMA protocols and software,
and the stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are
compatible with configuration and management tools from OEMs and operating system vendors.
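As a concrete, if minimal, sketch of how applications see these adapters through the OpenFabrics stack, the C program below lists the RDMA-capable devices that the verbs library (libibverbs) exposes. It assumes a Linux host with an OFED-style installation; the file name is illustrative and it can be built with, for example, gcc list_devices.c -libverbs.

    /* Minimal sketch: enumerate the RDMA-capable adapters exposed by
     * the OpenFabrics verbs library (libibverbs). */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devices = ibv_get_device_list(&num);
        if (!devices) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++)
            printf("adapter %d: %s\n", i, ibv_get_device_name(devices[i]));

        ibv_free_device_list(devices);
        return 0;
    }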
Virtual Protocol Interconnect
VPI® flexibility enables any standard networking, clustering, storage, and management protocol to
seamlessly operate over any converged network leveraging a consolidated software stack. Each port
can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over
InfiniBand (EoIB) and Fibre Channel over InfiniBand (FCoIB) as well as Fibre Channel over Ethernet (FCoE)
and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT
managers to deploy infrastructure that meets the challenges of a dynamic data center.
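To make the per-port flexibility concrete, the hedged C sketch below asks libibverbs which link layer each port of the first adapter found is currently running; this is how software distinguishes an InfiniBand port from an Ethernet (DCB/RoCE) port on a VPI device. It assumes a verbs library recent enough to report the port link layer, and the program structure is our own illustration rather than anything prescribed by this document.

    /* Minimal sketch: report the current link layer (InfiniBand or
     * Ethernet) of each port on the first RDMA device found. */
    #include <stdint.h>
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devices = ibv_get_device_list(NULL);
        if (!devices || !devices[0]) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open device\n");
            return 1;
        }

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Ports are numbered from 1 in the verbs API. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr pa;
                if (ibv_query_port(ctx, port, &pa))
                    continue;
                printf("%s port %u: %s\n",
                       ibv_get_device_name(devices[0]), port,
                       pa.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet"
                                                                 : "InfiniBand");
            }
        }

        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }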
ConnectX-3
Mellanox’s industry-leading ConnectX-3 InfiniBand adapters provide the highest performing and most
flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s throughput across the PCI Express 3.0
host bus, enables the fastest transaction latency, less than 1usec, and can deliver more than 90M MPI
messages per second, making it the most scalable and suitable solution for current and future transaction-
demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for HPC or
converged data centers operating a wide range of applications.
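Latency and message-rate figures of this kind are normally measured with MPI micro-benchmarks. The C sketch below is a stripped-down ping-pong in that spirit, using only standard MPI calls; it is our own illustration under assumed parameters (two ranks, 1-byte messages, 10,000 iterations), not the benchmark behind the quoted numbers, and would be launched across two nodes with something like mpirun -np 2 ./pingpong.

    /* Minimal MPI ping-pong sketch: rank 0 and rank 1 bounce a 1-byte
     * message and report the average half round-trip latency. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        const int iters = 10000;
        char byte = 0;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("1-byte half round-trip latency: %.2f usec\n",
                   (t1 - t0) * 1e6 / (2.0 * iters));

        MPI_Finalize();
        return 0;
    }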
Complete End-to-End 56Gb/s InfiniBand Networking
ConnectX-3 adapters are part of Mellanox’s full FDR 56Gb/s InfiniBand end-to-end portfolio for data centers
and high-performance computing systems, which includes switches, application acceleration packages,
and cables. Mellanox’s SwitchX family of FDR InfiniBand switches and Unified Fabric Management
software incorporate advanced tools that simplify network management and installation, and provide
the capabilities needed for the highest scalability and future growth. Mellanox’s collectives, messaging,
and storage acceleration packages deliver additional capabilities for the ultimate server performance, and
the line of FDR copper and fiber cables ensures the highest interconnect performance. With Mellanox end to
end, IT managers can be assured of the highest performance, most efficient network fabric.
BENEFITS
■ World-class cluster performance
■ High-performance networking and storage access
■ Efficient use of compute resources
■ Guaranteed bandwidth and low-latency services
■ Reliable transport
■ I/O unification
■ Virtualization acceleration
■ Scales to tens-of-thousands of nodes
TARGET APPLICATIONS
■ High-performance parallelized computing
■ Data center virtualization
■ Clustered database applications, parallel RDBMS
queries, high-throughput data warehousing
■ Latency-sensitive applications such as financial
analysis and trading
■ Web 2.0, cloud and grid computing data centers
■ Performance storage applications such as backup,
restore, mirroring, etc.
Mellanox InfiniBand Host
Channel Adapters (HCA) provide
the highest performing interconnect
solution for Enterprise Data
Centers, Web 2.0, Cloud Computing,
High-Performance Computing,
and embedded environments.
Clustered databases, parallelized
applications, transactional services
and high-performance embedded I/O
applications will achieve significant
performance improvements resulting
in reduced completion time and lower
cost per operation.
Ports           | 1x20Gb/s    | 2x20Gb/s    | 1x20Gb/s, 1x40Gb/s       | 2x20Gb/s, 2x40Gb/s       | 1x40Gb/s, 1x10Gb/s
Connector       | microGiGaCN | microGiGaCN | QSFP                     | QSFP                     | QSFP, SFP+
Host Bus        | PCI Express 2.0
Features        | VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload
OS Support      | RHEL, SLES, Windows, ESX
Ordering Number | MHGH19B-XTR | MHGH29B-XTR | MHRH19B-XTR, MHQH19B-XTR | MHRH29B-XTR, MHQH29C-XTR | MHZH29B-XTR

Ports           | 1x40Gb/s, 1x56Gb/s | 2x40Gb/s, 2x56Gb/s | 1x56Gb/s, 1x40Gb/s
Connector       | QSFP               | QSFP               | QSFP, SFP+
Host Bus        | PCI Express 3.0
Features        | VPI, Hardware-based Transport and Application Offloads, RDMA, GPU Communication Acceleration, I/O Virtualization, QoS and Congestion Control; IP Stateless Offload; Precision Time Protocol
OS Support      | RHEL, SLES, Windows, ESX
Ordering Number | MHGH19B-XTR | MHGH29B-XTR | MHRH19B-XTR, MHQH19B-XTR
1 Gb/s, 10Gb/s and 40Gb/s Ethernet
Switch System Family
Highest Levels of Scalability,
Simplified Network Manageability,
Maximum System Productivity
© Copyright 2011. Mellanox Technologies. All rights reserved.
Mellanox, BridgeX, ConnectX, CORE-Direct, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, InfiniPCI, PhyX, Virtual Protocol Interconnect and Voltaire are registered
trademarks of Mellanox Technologies, Ltd. FabricIT is a trademark of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085
Tel: 408-970-3400
Fax: 408-970-3403
www.mellanox.com
HARDWARE
– 1/10Gb/s or 40Gb/s per port
– Full bisectional bandwidth to all ports
– 1.21 compliant
– All port connectors supporting passive and
active cables
– Redundant auto-sensing 110/220VAC
power supplies
– Per-port status LEDs: Link, Activity
– System, fans and PS status LEDs
– Hot-swappable fan trays
MANAGEMENT
– Comprehensive fabric management
– Secure, remote configuration and
management
– Performance/provisioning manager
– Quality of Service based on traffic type and
service levels
– Cluster diagnostics tools for single node,
peer-to-peer and network verification
– Switch chassis management
– Error, event and status notifications
SAFETY
– USA/Canada: cTUVus
– EU: IEC60950
– International: CB Scheme
EMC (EMISSIONS)
– USA: FCC, Class A
– Canada: ICES, Class A
– EU: EN55022, Class A
– EU: EN55024, Class A
– EU: EN61000-3-2, Class A
– EU: EN61000-3-3, Class A
– Japan: VCCI, Class A
ENVIRONMENTAL
– EU: IEC 60068-2-64: Random Vibration
– EU: IEC 60068-2-29: Shocks, Type I / II
– EU: IEC 60068-2-32: Fall Test
OPERATING CONDITIONS
– Temperature: Operating 0°C to 45°C,
Non-operating -40°C to 70°C
– Humidity: Operating 5% to 95%
– Altitude: Operating -60 to 2000m
Feature Summary
COMPLIANCE
The Ethernet Switch Family delivers the highest performance
and port density with a complete chassis and fabric management
solution, enabling converged data centers to operate at any scale
while reducing operational costs and infrastructure complexity.
This family includes a broad portfolio of fixed and modular switches
that range from 24 to 288 ports and support 1/10 or 40Gb/s per
port. These switches allow IT managers to build cost-effective and
scalable switch fabrics for small to large clusters of up to tens of
thousands of nodes.
Mellanox makes fabric management as easy as possible while
providing the lowest latency and highest bandwidth. This lets IT
managers focus on serving the company’s business needs rather
than on typical networking issues such as congestion, or on the
inefficiencies created by adding unnecessary rules and limitations
when network resources are sufficient.
Model             | 6024         | 6048         | SX1035       | SX1036
40GigE Ports      | 0            | 0            | 36           | 36
10GigE Ports      | 24           | 48           | 64           | 64
Height            | 1U           | 1U           | 1U           | 1U
Switch Capacity   | 480Gb/s      | 960Gb/s      | 2.88Tb/s     | 2.88Tb/s
Performance       | Non-blocking | Non-blocking | Non-blocking | Non-blocking
Device Management | Y            | Y            | Y            | Y
Fabric Management | Y            | Y            | N            | Y
Installation Kit  | Y            | Y            | Y            | Y
FRUs              | Fans         | PS and Fans  | PS and Fans  | PS and Fans
PSU Redundancy    | Y            | Y            | Y            | Y
Fan Redundancy    | Y            | Y            | Y            | Y
8500
40GigE Ports 0
10GigE Ports 288
Height 15U
Switching Capacity 5.76Tb/s
Spine Modules 4
Leaf Modules 12
Mellanox continues its leadership by providing
40Gb/s Ethernet Switch Systems – the highest performing
fabric solution for Web 2.0, Enterprise Data Centers,
Cloud Computing and High Performance Computing.
BENEFITS
■ Efficiency
– Simple configuration, no need for QoS (40GigE vs. 10GigE)
■ Easy Scale
– UFM can maintain from 1 to 1000s of nodes and switches
– Configure and manage the data center from a single location
■ Elasticity
– Low latency on any node
■ Arranged and Organized Data Center
– 40GigE high-density switches mean 4x fewer cables
– Easy deployment
– Easy maintenance
■ Unprecedented Performance
– Storage and server applications run faster
Brochure pages are set up on a 6-column grid
starting at 0.5” from all page borders