Administrator Guide

Performance characterization
39 Dell EMC Ready Solution for HPC PixStor Storage | Document ID
Figure 21 N to 1 Sequential Performance
From the results we can observe that both read and write performance are high despite the implicit need for locking mechanisms, since all threads access the same file. Performance again rises very quickly with the number of threads and then reaches a plateau that remains relatively stable for both reads and writes up to the maximum number of threads used in this test. Notice that the maximum read performance of 51.6 GB/s was reached at 512 threads, but the plateau begins at about 64 threads. Similarly, the maximum write performance of 34.5 GB/s was achieved at 16 threads, and that plateau held until the maximum number of threads used.
Random small blocks IOzone Performance N clients to N files
Random N clients to N files performance was measured with IOzone version 3.487. Tests varied from a single thread up to 512 threads in increments of powers of two; 1024 threads were not tested because there were not enough client cores. Each thread used a different file, and the threads were assigned round-robin across the client nodes.
This benchmark used 4 KiB blocks to emulate small-block traffic, with a queue depth of 16. Results from the large-size solution and the capacity expansion are compared.
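A run of this kind might be launched as sketched below. The document does not give the exact IOzone command line, so the thread count, client-list path, and the choice of `-k` for the queue depth are assumptions for illustration only:

```shell
#!/bin/sh
# Hypothetical IOzone invocation for the random small-block test described
# above. Flags: -i 2 = random read/write, -r 4k = 4 KiB records,
# -t = number of threads, -+m = file listing the client nodes used for
# round-robin thread placement. The queue depth of 16 is assumed here to
# come from POSIX async I/O (-k 16); the source does not state the flag.
THREADS=64                      # placeholder thread count
SIZE=$((4096 / THREADS))        # per-thread file size in GiB (>= 16 threads)
CMD="iozone -i 2 -c -e -w -r 4k -s ${SIZE}g -t ${THREADS} -k 16 -+m ./clients.txt"
echo "$CMD"
```

The command is only echoed here; on a real cluster it would be executed with `clients.txt` listing the compute nodes.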
Caching effects were again minimized by setting the GPFS page pool tunable to 16 GiB. To avoid any possible data-caching effects on the clients, the total data size of the files was twice the total amount of RAM in the clients used. Since each client has 128 GiB of RAM, for thread counts of 16 or more the file size was 4096 GiB divided by the number of threads (the variable $Size below was used to manage that value). For cases with fewer than 16 threads (which implies each thread was running on a different client), the file size was fixed at twice the memory per client, or 256 GiB.
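The $Size rule above can be sketched as a small shell function. It assumes, as the text implies, 16 clients with 128 GiB of RAM each, so the aggregate data set is 2 × 16 × 128 = 4096 GiB once all clients carry at least one thread:

```shell
#!/bin/sh
# Per-thread file size in GiB for the random N-to-N test, following the
# rule described above (assumes 16 clients with 128 GiB RAM each).
size_gib() {
  threads=$1
  if [ "$threads" -ge 16 ]; then
    # All clients populated: split the 4096 GiB aggregate across threads.
    echo $((4096 / threads))
  else
    # Fewer threads than clients: each thread runs on its own client,
    # so fix the file size at twice the per-client RAM.
    echo 256
  fi
}

size_gib 8      # one thread per client -> 256 GiB each
size_gib 64     # 4096 / 64 -> 64 GiB each
size_gib 512    # 4096 / 512 -> 8 GiB each
```

Note that the two branches agree at the boundary: at exactly 16 threads, 4096 / 16 also yields 256 GiB per thread.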