5  Setting Up the Memory Channel Cluster Interconnect
This chapter describes Memory Channel configuration restrictions and
explains how to set up the Memory Channel cluster interconnect, including
setting up a Memory Channel hub and a Memory Channel optical converter
(MC2 only), and connecting link cables.
Two versions of the Memory Channel peripheral component interconnect
(PCI) adapter are available: CCMAA and CCMAB (MC2).
Two variations of the CCMAA PCI adapter are in use: CCMAA-AA (MC1)
and CCMAA-AB (MC1.5). Because the hardware used with these two PCI
adapters is the same, this manual often refers to MC1 when referring to
either of these variations.
See the TruCluster Server Software Product Description (SPD) for a list
of the supported Memory Channel hardware. See the Memory Channel
User’s Guide for illustrations and more detailed information about installing
jumpers, Memory Channel adapters, and hubs.
See Section 2.2 for a discussion of Memory Channel restrictions.
You can use two Memory Channel adapters with TruCluster Server, but
only one rail is active at a time. This configuration is referred to as a
failover pair. If the active rail fails, cluster communication fails over to
the formerly inactive rail.
If you use multiple Memory Channel adapters with the Memory Channel
application programming interface (API) for high-performance data delivery
over Memory Channel, set the rm_rail_style configuration variable to
zero (rm_rail_style = 0) to enable the single-rail style, in which multiple
rails are active simultaneously. The default value is 1, which selects the
failover pair style.
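For example, a minimal sketch of this change, assuming that
rm_rail_style is an attribute of the rm kernel subsystem and is set in
an /etc/sysconfigtab stanza (confirm the subsystem name and attribute
against your system's documentation), might look as follows:

     rm:
         rm_rail_style = 0

After editing /etc/sysconfigtab, reboot the member for the new value to
take effect. You can query the current setting with a command such as
sysconfig -q rm rm_rail_style, again assuming the attribute resides in
the rm subsystem.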
For more information on the Memory Channel failover pair model, see the
Cluster Highly Available Applications manual.
To set up the Memory Channel interconnects, follow these steps, referring to
the appropriate section and the Memory Channel User’s Guide as necessary:
1. Set the Memory Channel jumpers (Section 5.1).