Figure 7 shows the sequential N-N performance of the solution:
Figure 7 Sequential N-N read and write
As the figure shows, the peak read throughput of 23.70 GB/s was attained at 128 threads, and the peak write
throughput of 22.07 GB/s at 512 threads. The single-thread write performance was 623 MB/s and the
single-thread read performance was 717 MB/s. Read and write performance scale linearly with the number of
threads until the system reaches its peak, after which reads and writes saturate as the thread count increases
further. The overall sustained performance of this configuration is therefore ≈ 23 GB/s for reads and
≈ 22 GB/s for writes, with the peaks noted above. Reads are very close to, or slightly higher than, writes
regardless of the number of threads used.
4.1.2 Random reads and writes
To evaluate random I/O performance, we used IOzone version 3.487 in random mode. Tests were conducted
with thread counts ranging from 16 to 512. IOzone was run with the Direct I/O option (-I) so that all operations
bypassed the buffer cache and went directly to the disks.
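
To make the procedure concrete, the following is a minimal sketch of how such a random-I/O sweep could be
scripted; it is not the exact command set used for this study. The iozone binary path, client list file, and
per-thread file size are assumptions, and the 4 KiB request size anticipates the value noted later in this section.

import subprocess

IOZONE = "/usr/sbin/iozone"            # assumed path to the IOzone 3.487 binary
CLIENT_LIST = "/root/iozone_clients"   # assumed client/file list passed with -+m
FILE_SIZE = "8g"                       # assumed per-thread file size

# Thread counts from 16 to 512, doubling at each step
for threads in [16, 32, 64, 128, 256, 512]:
    cmd = [
        IOZONE,
        "-i", "2",            # test 2: random read / random write
        "-I",                 # Direct I/O, bypassing the buffer cache
        "-O",                 # report results in operations per second (IOPS)
        "-r", "4k",           # 4 KiB request size
        "-s", FILE_SIZE,
        "-t", str(threads),   # number of concurrent threads
        "-+m", CLIENT_LIST,   # spread the threads across the client nodes
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)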
As in the IOzone sequential N-N read and write tests, a stripe count of 1 and a chunk size of 1 MB were used.
The files written were distributed evenly across the storage targets (STs) in a round-robin fashion to prevent
an uneven I/O load on any single SAS connection or ST, in the same way a user would be expected to balance
a workload (see the sketch below).
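
A minimal sketch of how this stripe pattern could be applied to a benchmark directory with beegfs-ctl is shown
below; the mount point and directory name are placeholders, and this is an illustration rather than the exact
setup steps used for this study.

import subprocess

BEEGFS_MOUNT = "/mnt/beegfs"                    # assumed BeeGFS mount point
BENCH_DIR = BEEGFS_MOUNT + "/iozone_random"     # assumed benchmark directory

# Create the directory and set a stripe count of 1 with a 1 MB chunk size,
# matching the pattern described above. New files inherit this pattern and
# BeeGFS places them on storage targets in a round-robin manner.
subprocess.run(["mkdir", "-p", BENCH_DIR], check=True)
subprocess.run(
    ["beegfs-ctl", "--setpattern", "--numtargets=1", "--chunksize=1m", BENCH_DIR],
    check=True,
)

# Verify the pattern that files created in this directory will use.
subprocess.run(["beegfs-ctl", "--getentryinfo", BENCH_DIR], check=True)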
The request size was set to 4 KiB. Performance was measured in I/O operations per second (IOPS). The
operating system caches were dropped on the BeeGFS servers between runs, and the file system was
unmounted and remounted on the clients between iterations of the test.
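
The between-iteration housekeeping could be scripted along the following lines; this is a sketch only, and the
server host names, client mount point, use of ssh for remote execution, and the presence of an /etc/fstab entry
for the mount are assumptions.

import subprocess

BEEGFS_SERVERS = ["storage1", "storage2"]   # hypothetical BeeGFS server host names
CLIENT_MOUNT = "/mnt/beegfs"                # assumed BeeGFS mount point on the client

def drop_server_caches():
    # Drop the Linux page, dentry, and inode caches on each BeeGFS server.
    for host in BEEGFS_SERVERS:
        subprocess.run(
            ["ssh", host, "sync; echo 3 > /proc/sys/vm/drop_caches"],
            check=True,
        )

def remount_client():
    # Unmount and remount the file system on this client (repeat on every
    # client node); assumes the mount is defined in /etc/fstab.
    subprocess.run(["umount", CLIENT_MOUNT], check=True)
    subprocess.run(["mount", CLIENT_MOUNT], check=True)

# Run between iterations so that each test starts from a cold cache state.
drop_server_caches()
remount_client()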