Switches are used to route Ethernet packets between nodes. Ethernet, like InfiniBand, borrows its electrical signaling from Fibre Channel. Gigabit Ethernet uses a differential pair for full duplex transmission; four such full duplex pairs (8 wires in total) are bundled together, with each pair operating at a quarter of the base (1 GHz) frequency, for a total bandwidth of 1 Gbit/s (4 × 250 Mbit/s).
Ethernet implements the two lower layers of the OSI Reference Model (i.e., physical and data link). The upper layers (network and transport) are typically implemented by a TCP/IP or UDP software stack. Ethernet is a message passing architecture: all work is performed by passing messages between communicating nodes. The protocols and software stacks are used pervasively, so the software is mature and very well understood.
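
To make the message passing model concrete, the sketch below sends a single datagram between two nodes through the UDP/IP stack that sits above Ethernet. It is a minimal illustration rather than code from any particular product; the peer address and port are hypothetical placeholders.

/* Minimal sketch of Ethernet message passing through the UDP/IP stack:
 * the sender addresses a datagram to the receiving node, and the IP and
 * Ethernet layers below deliver it.  Address and port are illustrative. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);            /* UDP endpoint */
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(7000);                     /* hypothetical service port */
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);  /* hypothetical peer node    */

    const char msg[] = "work request for peer node";
    /* All work is performed by passing messages between nodes: */
    if (sendto(s, msg, sizeof(msg), 0,
               (struct sockaddr *)&peer, sizeof(peer)) < 0)
        perror("sendto");

    close(s);
    return 0;
}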
Ethernet switches attach to their respective control node processor through PCI. Thus, an Ethernet fabric can be viewed as a switched PCI: the PCI bus is not eliminated from the solution; rather, a switch fabric is added between the PCI buses at opposite ends of the connection in the host. Ethernet does not provide any control over Quality of Service (QoS).
InfiniBand
InfiniBand is a new serial interconnect technology being developed for interconnecting systems (servers, storage devices, networking devices, etc.) in the server data center. Until now, this level of interconnect within servers has been provided by the PCI bus. The interconnection of the data center elements is referred to as a System Area Network (SAN). Historically, InfiniBand
resulted from the union of the Next Generation I/O (NGIO) and Future I/O efforts. The InfiniBand
specifications are developed by an industry consortium known as the InfiniBand Trade
Association. This makes InfiniBand an open, publicly available standard.
InfiniBand uses a fabric topology based on point-to-point connections between nodes and
switching elements. Interconnected devices, or nodes, send data that is routed through switching
elements, much like a communications network. InfiniBand describes a three-layer architecture comprising a physical layer, a data link layer, and a transport layer. Data is transported across the physical and data link layers as packets. Messages flow across the transport layer and represent end-to-end transfer of data and control. Transactions (end-to-end operations) are initiated and controlled by software via InfiniBand's transport layer services (called verbs in the specifications).
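
The specifications define the verbs abstractly rather than as a concrete API. As one illustration of how software initiates and completes a transaction through them, the sketch below uses the Linux libibverbs library, a later concrete realization of the verbs; the connection setup steps (exchanging LIDs and queue pair numbers with the peer, then stepping the queue pair through its state machine) are elided with a comment.

/* Sketch of an InfiniBand transaction via the Linux libibverbs API, one
 * concrete realization of the verbs the specification defines abstractly.
 * Peer exchange and the INIT/RTR/RTS queue pair transitions are omitted. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]); /* open the HCA      */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */

    static char buf[4096];
    strcpy(buf, "message to peer");
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf),  /* register memory */
                                   IBV_ACCESS_LOCAL_WRITE);

    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr = {
        .send_cq = cq, .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,                        /* reliable connection */
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    /* ... QP state transitions and peer address exchange occur here ... */

    struct ibv_sge sge = { .addr = (uintptr_t)buf,
                           .length = sizeof(buf), .lkey = mr->lkey };
    struct ibv_send_wr wr = { .sg_list = &sge, .num_sge = 1,
                              .opcode = IBV_WR_SEND,
                              .send_flags = IBV_SEND_SIGNALED };
    struct ibv_send_wr *bad;
    ibv_post_send(qp, &wr, &bad);                 /* post the message        */

    struct ibv_wc wc;                             /* a completion marks the  */
    while (ibv_poll_cq(cq, 1, &wc) == 0)          /* end of the transaction  */
        ;
    printf("send completed with status %d\n", wc.status);

    ibv_destroy_qp(qp); ibv_destroy_cq(cq);
    ibv_dereg_mr(mr); ibv_dealloc_pd(pd);
    ibv_close_device(ctx); ibv_free_device_list(devs);
    return 0;
}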
InfiniBand employs a message passing architecture to perform transactions. The fabric nature of the topology makes it easy to add nodes, and the point-to-point connections ensure that the full bandwidth of each link remains available as new nodes are added, allowing the fabric to scale effectively.
The signaling is low-voltage differential, with a dual differential pair (one per direction) providing full duplex on a single-width (1x) link. InfiniBand, like gigabit Ethernet, borrows its electrical signaling from Fibre Channel. Wider links (4x and 12x) are also supported. InfiniBand provides very high bandwidth (2.5 Gb/s per 1x connection). Host computer nodes attach to InfiniBand via host channel adapters (HCAs); other nodes attach via target channel adapters (TCAs).
Switches are used to route packets between nodes within a subnet. Subnets are topologies that can be addressed within the scope of a 16-bit local ID (LID). Routers are used to route packets between different subnets. InfiniBand offers a very rich set of transport, addressing, and protection features.
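
A toy sketch of this addressing split, using illustrative types and field names that are not taken from the specification: a packet whose destination shares the source's subnet prefix is switched on its 16-bit destination LID, while a packet destined for another subnet must be handed to a router, which forwards on the global ID (GID).

/* Toy illustration of InfiniBand's two-level addressing: a 16-bit LID
 * identifies an endpoint within one subnet; crossing subnets requires a
 * router and the GID (64-bit subnet prefix plus 64-bit port GUID).
 * Types and field names here are illustrative only. */
#include <stdint.h>
#include <stdio.h>

typedef uint16_t lid_t;                 /* local ID: 16-bit, subnet scope */

struct ib_addr {
    uint64_t subnet_prefix;             /* upper half of the 128-bit GID  */
    uint64_t guid;                      /* lower half of the 128-bit GID  */
    lid_t    lid;                       /* valid only within the subnet   */
};

/* Decide how a packet travels from src to dst. */
static const char *route(const struct ib_addr *src, const struct ib_addr *dst)
{
    if (src->subnet_prefix == dst->subnet_prefix)
        return "switched within the subnet on the destination LID";
    return "forwarded to a router using the destination GID";
}

int main(void)
{
    struct ib_addr a = { 0xFE80000000000001ULL, 0x0002C90200001234ULL, 0x0011 };
    struct ib_addr b = { 0xFE80000000000001ULL, 0x0002C90200005678ULL, 0x0042 };
    struct ib_addr c = { 0xFE80000000000002ULL, 0x0002C9020000ABCDULL, 0x0042 };

    printf("a -> b: %s\n", route(&a, &b));   /* same subnet  */
    printf("a -> c: %s\n", route(&a, &c));   /* other subnet */
    return 0;
}

Note that b and c carry the same LID (0x0042) without conflict, since a LID is meaningful only within its own subnet.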
Because much of the work within InfiniBand is performed in software, InfiniBand is not PCI software transparent.