Memory Footprint

Table 4-1. Memory Footprint of the QLogic Adapter on Linux x86_64 Systems (Continued)

Adapter      Required/  Memory Footprint                   Comment
Component    Optional
OpenFabrics  Optional   1~6 MB                             This component has not been
                        + ~500 bytes per QP                fully characterized at the
                        + TBD bytes per MR                 time of publication.
                        + ~500 bytes per EE
                        + OpenFabrics stack from
                          openfabrics.org (size not
                          included in these guidelines)
The following example describes a 1024-processor system:

- 1024 cores over 256 nodes (each node has 2 sockets with dual-core processors)
- One adapter per node
- Each core runs an MPI process, with the four processes per node communicating via shared memory.
- Each core uses OpenFabrics to connect with storage and file system targets, using 50 QPs and 50 EECs per core.
This example breaks down to a memory footprint of 290 MB per node, as shown in Table 4-2.
Table 4-2. Memory Footprint, 290 MB per Node

Component    Footprint (in MB)  Breakdown
Driver       9                  Per node
MPI          273                4 × 68 MB (MPI per process including shared memory)
                                + 4 × 264 × 1020 bytes (for 1020 remote ranks)
OpenFabrics  8                  6 MB + 1024 × 2 KB per node
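The per-node total can be checked with a few lines of arithmetic. The Python sketch below is illustrative only: the variable names are ours, and the per-component constants (68 MB per MPI process, 264 bytes of state per remote rank, 6 MB + 1024 × 2 KB for OpenFabrics) are the estimates from Tables 4-1 and 4-2; actual footprints vary with stack version and configuration.

    # Reproduces the per-node arithmetic behind Table 4-2 using the
    # guide's estimates; constants and names here are illustrative only.
    MB = 1024 * 1024
    KB = 1024

    cores_per_node = 4                            # 2 sockets x dual-core
    total_ranks = 1024
    remote_ranks = total_ranks - cores_per_node   # 1020 remote ranks

    driver = 9 * MB                               # driver footprint, per node

    # MPI: 68 MB per process, plus 264 bytes of state per remote rank
    mpi = cores_per_node * (68 * MB + 264 * remote_ranks)

    # OpenFabrics: 6 MB base plus 1024 x 2 KB per node
    openfabrics = 6 * MB + 1024 * 2 * KB

    total = driver + mpi + openfabrics
    for name, value in [("Driver", driver), ("MPI", mpi),
                        ("OpenFabrics", openfabrics), ("Total", total)]:
        print(f"{name:<12} {value / MB:7.2f} MB")
    # Prints approximately 9, 273, 8, and 290 MB, matching Table 4-2.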