To reduce cache effects from server and client memory, the file size was set to twice the combined memory of the OSSs and the clients, according to the following formula, rounding to whole values where necessary:

File Size = 2 * (2 OSSs * 256 GiB memory per OSS + Number of physical clients * 24 GiB memory per client)
Table 3 shows the number of threads, the amount of data written by each thread, and the total size of the shared file for each set of clients.
Table 3: IOR Shared File Size

Number of Threads   Number of Physical Clients   Data Written per Thread (GB)   Shared File Size (GB)
1                   1                            1072                           1072
2                   2                            560                            1120
4                   4                            304                            1216
8                   8                            176                            1408
12                  12                           133                            1600
16                  16                           112                            1792
24                  24                           91                             2176
32                  32                           80                             2560
48                  48                           69                             3328
64                  64                           64                             4096
72                  64                           57                             4096
96                  64                           43                             4096
120                 64                           34                             4096
128                 64                           32                             4096
144                 64                           28                             4096
168                 64                           24                             4096
192                 64                           21                             4096
216                 64                           19                             4096
240                 64                           17                             4096
256                 64                           16                             4096
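As a cross-check of the sizing arithmetic, the short Python sketch below applies the formula above with the 2 OSSs x 256 GiB and 24 GiB-per-client figures, dividing the shared file evenly among the threads and rounding to whole values; the function names are illustrative and are not part of the benchmark scripts.

```python
# Sizing arithmetic for the IOR shared-file (N-to-1) tests.
# Figures below come from the formula above; (threads, clients) pairs are from Table 3.
OSS_COUNT = 2
OSS_MEMORY_GIB = 256
CLIENT_MEMORY_GIB = 24

def shared_file_size(clients: int) -> int:
    """Shared file size: twice the combined OSS and client memory."""
    return 2 * (OSS_COUNT * OSS_MEMORY_GIB + clients * CLIENT_MEMORY_GIB)

def data_per_thread(clients: int, threads: int) -> int:
    """Each thread's share of the file, rounded to a whole value."""
    return round(shared_file_size(clients) / threads)

# Example: reproduce a few rows of Table 3.
for threads, clients in [(1, 1), (12, 12), (24, 24), (72, 64), (256, 64)]:
    print(threads, clients, data_per_thread(clients, threads), shared_file_size(clients))
# 1   1   1072 1072
# 12  12  133  1600
# 24  24  91   2176
# 72  64  57   4096
# 256 64  16   4096
```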
Figure 11 shows the IOR results. Reads have the advantage, peaking at 10.7 GB/s with 32 threads, with trends very similar to the sequential N-to-N performance. Write performance increases steadily as the thread-to-OST ratio grows and plateaus at 24 threads, the same as the number of OSTs available in the test system, with a peak write rate of 6.2 GB/s. Single-client IOR performance is 806 MB/s for reads and 948 MB/s for writes.