Hyper Messaging Protocol (HMP)
Dynamic Resource Utilization (DRU): Partially Supported
When a new HyperFabric resource (a node, cable, or switch) is added to a cluster
running an HMP application, the HyperFabric subsystem dynamically identifies the
added resource and starts using it. The same process takes place when a resource is
removed from the cluster. The distinction for HMP is that DRU is supported when a
node with adapters installed is added to or removed from a cluster running an HMP
application, but not when an adapter is added to or removed from a node that is
running an HMP application. This is consistent with the fact that OLAR is not
supported while an HMP application is running on HyperFabric.
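The DRU rules above amount to a simple policy: a whole node, together with its
adapters, may join or leave a cluster that is running HMP, but an individual adapter
may not be added to or removed from a node while HMP is running. The following
Python sketch only models that policy for illustration; the class, method, and adapter
names are invented for this example and do not correspond to any HyperFabric API.

# Illustrative model of the DRU policy described above. All names are
# hypothetical; HyperFabric does not expose a Python API like this.

class HMPCluster:
    def __init__(self):
        self.nodes = {}          # node name -> set of adapter identifiers
        self.hmp_running = True  # an HMP application is active on the cluster

    def add_node(self, node, adapters):
        # Supported: DRU picks up a whole node (with its adapters) that joins.
        self.nodes[node] = set(adapters)

    def remove_node(self, node):
        # Supported: DRU stops using a whole node that leaves the cluster.
        self.nodes.pop(node, None)

    def add_adapter(self, node, adapter):
        # Not supported while HMP is running (consistent with OLAR not being
        # supported when an HMP application is running on HyperFabric).
        if self.hmp_running:
            raise RuntimeError("adding an adapter to a node running HMP is not supported")
        self.nodes.setdefault(node, set()).add(adapter)


cluster = HMPCluster()
cluster.add_node("nodeA", ["adapter0"])       # whole-node addition: supported
try:
    cluster.add_adapter("nodeA", "adapter1")  # per-adapter change: not supported
except RuntimeError as err:
    print(err)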
Load Balancing: Supported
When an HP 9000 node that has multiple HyperFabric adapter cards is running
HMP applications, the HyperFabric driver balances the load across the nodes, the
available adapter cards on that node, the links, and multiple links between switches.
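In practice this means traffic from a node's HMP applications is spread over every
usable adapter and link rather than pinned to a single path. The short Python sketch
below shows a generic round-robin spread over a list of adapters; it is a conceptual
illustration only, not the HyperFabric driver's actual algorithm, and the adapter
names are invented for this example.

from itertools import cycle

# Conceptual illustration only: assign outgoing messages to adapters in a
# round-robin fashion. The real HyperFabric driver performs its balancing
# internally and exposes no interface like this.

def balance(messages, adapters):
    """Pair each message with the next adapter in a round-robin cycle."""
    return list(zip(messages, cycle(adapters)))


if __name__ == "__main__":
    adapters = ["adapter0", "adapter1"]       # two adapter cards on one node
    messages = [f"msg{i}" for i in range(5)]
    for message, adapter in balance(messages, adapters):
        print(f"{message} -> {adapter}")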
Switch Management: Not Supported
Switch management is not supported and will not operate properly if it is enabled
on a HyperFabric cluster running HMP applications.
Diagnostics: Supported
Diagnostics can be run to obtain information on many of the HyperFabric
components via the clic_diag, clic_probe and clic_stat commands, as well as
the Support Tools Manager (STM).
For more detailed information on HyperFabric diagnostics, see the “Running
Diagnostics” section.
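One straightforward way to gather this information from a script is to run those
diagnostic commands and capture their output, as in the Python sketch below. It
assumes clic_diag, clic_probe, and clic_stat are on the PATH and invokes them
without options; consult the commands' man pages for the options your installation
actually supports.

import subprocess

# Run each HyperFabric diagnostic command named in this section and collect
# its output. The commands are invoked without options here; see their man
# pages for supported options.

DIAG_COMMANDS = ["clic_stat", "clic_probe", "clic_diag"]

def collect_diagnostics():
    results = {}
    for cmd in DIAG_COMMANDS:
        try:
            completed = subprocess.run([cmd], capture_output=True, text=True, timeout=60)
            results[cmd] = completed.stdout
        except (FileNotFoundError, subprocess.TimeoutExpired) as err:
            results[cmd] = f"could not run {cmd}: {err}"
    return results


if __name__ == "__main__":
    for cmd, output in collect_diagnostics().items():
        print(f"=== {cmd} ===")
        print(output)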
Configuration Parameters
This section describes the general maximum limits for HMP HyperFabric
configurations. Numerous variables can affect the performance of any particular
HyperFabric configuration. See the “HMP Supported Configurations” section for
guidance on specific HyperFabric configurations for HMP applications.
HyperFabric is supported only on HP 9000 series UNIX servers.
The performance advantages HMP offers will not be fully realized unless it is used
with A6386A HF2 (fibre) adapters and related fibre hardware. The local failover
configuration of HMP is supported only on A6386A HF2 adapters.
Maximum Supported Nodes and Adapter Cards:
HyperFabric clusters running HMP applications support a maximum of 64 adapter
cards. In local failover configurations, however, the maximum is 52 adapters.
In point-to-point configurations running HMP applications, the complexity and
performance limitations of having a large number of nodes in a cluster make it
necessary to include switching in the fabric. Typically, point-to-point configurations
consist of only 2 or 3 nodes.
In switched configurations running HMP applications, HyperFabric supports a
maximum of 64 interconnected adapter cards.
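The limits above (a maximum of 64 adapter cards in switched configurations, 52 in
local failover configurations, and typically 2 or 3 nodes in point-to-point
configurations) can be captured in a small planning check such as the Python sketch
below. This is a hypothetical aid, not an HP-supplied tool, and the function and
constant names are invented for this example.

# Hypothetical planning helper that checks a proposed HMP configuration
# against the limits stated in this section. Not an HP-supplied tool.

MAX_ADAPTERS_SWITCHED = 64        # switched HMP configurations
MAX_ADAPTERS_LOCAL_FAILOVER = 52  # local failover HMP configurations
TYPICAL_P2P_NODES = 3             # point-to-point clusters are typically 2-3 nodes

def check_hmp_config(adapter_count, node_count, local_failover=False, switched=True):
    """Return a list of warnings for a proposed HMP HyperFabric configuration."""
    warnings = []
    limit = MAX_ADAPTERS_LOCAL_FAILOVER if local_failover else MAX_ADAPTERS_SWITCHED
    if adapter_count > limit:
        warnings.append(f"{adapter_count} adapter cards exceeds the supported maximum of {limit}")
    if not switched and node_count > TYPICAL_P2P_NODES:
        warnings.append(f"{node_count} nodes point to point: consider adding switches to the fabric")
    return warnings


if __name__ == "__main__":
    print(check_hmp_config(adapter_count=70, node_count=8))
    print(check_hmp_config(adapter_count=4, node_count=4, switched=False))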