HP-MPI User's Guide (11th Edition)
Understanding HP-MPI
Scalability
Interconnect support of MPI-2 functionality
HP-MPI has been tested on InfiniBand clusters with as many as 2048
ranks using the VAPI protocol. Most HP-MPI features function in a
scalable manner; the few exceptions, which still incur significant
resource growth as the job size increases, are listed in Table 3-6.
Table 3-6 Scalability

  Feature         Affected Interconnect/  Scalability Impact
                  Protocol
  --------------  ----------------------  --------------------------------------
  spawn           All                     Forces use of pairwise socket
                                          connections between all mpids
                                          (typically one mpid per machine).

  one-sided       All except VAPI         Only VAPI and IBV provide low-level
  shared          and IBV                 calls to efficiently implement shared
  lock/unlock                             lock/unlock. All other interconnects
                                          require mpids to satisfy this feature.

  one-sided       All except VAPI,        VAPI, IBV, and Elan provide low-level
  exclusive       IBV, and Elan           calls which allow HP-MPI to
  lock/unlock                             efficiently implement exclusive
                                          lock/unlock. All other interconnects
                                          require mpids to satisfy this feature.

  one-sided       TCP/IP                  All interconnects other than TCP/IP
  other                                   allow HP-MPI to efficiently implement
                                          the remainder of the one-sided
                                          functionality. Only when using TCP/IP
                                          are mpids required to satisfy this
                                          feature.