To prevent inflated results due to caching effects, we ran the tests with a cold cache. Before each test started,
the BeeGFS file system under test was remounted. A sync was performed, and the kernel was instructed to
drop caches on all the clients and BeeGFS servers (MDS and SS) with the following commands:
sync && echo 3 > /proc/sys/vm/drop_caches
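How this command was dispatched to every node is not prescribed here; a minimal sketch, assuming passwordless SSH and placeholder host names for the clients and servers, is:
# Placeholder host names; substitute the actual client, MDS, and SS nodes.
for host in client0{1..8} meta01 stor01 stor02; do
  ssh "${host}" 'sync && echo 3 > /proc/sys/vm/drop_caches'
done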
In measuring the solution performance, we performed all tests with similar initial conditions. The file system
was configured to be fully functional and the targets tested were emptied of files and directories before each
test.
4.1.1 IOzone sequential N-N reads and writes
To evaluate sequential reads and writes, we used IOzone benchmark version 3.487 in the sequential read
and write mode. We conducted the tests on multiple thread counts, starting at one thread and increasing in
powers of two up to 1,024 threads. Because this test operates on one file per thread, the number of files
generated at each thread count was equal to the thread count. The threads were distributed across eight physical
client nodes in a round-robin fashion.
We converted throughput results to GB/s from the KB/s metrics that were provided by the tool. For thread
counts 16 and above, an aggregate file size of 8 TB was chosen to minimize the effects of caching from the
servers as well as from BeeGFS clients. For thread counts below 16, the file size was 768 GB per thread (that is,
1.5 TB for two threads, 3 TB for four threads, and 6 TB for eight threads). Within any given test, the aggregate
file size used was equally divided among the number of threads. A record size of 1 MB was used for all runs.
Operating system caches were also dropped or cleaned on the client nodes between tests and iterations and
between writes and reads.
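The complete IOzone command line is not reproduced in this guide. As a minimal sketch, a 16-thread sequential write pass followed by a read pass with the parameters above could be launched as shown below; the machine file name clients.txt, the exact flag set, and the 512 GB per-thread size (16 threads x 512 GB = 8 TB aggregate) are illustrative assumptions:
# Hypothetical invocation; clients.txt lists the eight client nodes for -+m cluster mode.
iozone -i 0 -c -e -w -r 1m -s 512g -t 16 -+n -+m clients.txt
iozone -i 1 -c -e -w -r 1m -s 512g -t 16 -+n -+m clients.txt
Here -i 0 selects the write test and -i 1 the read test, -r and -s set the record size and per-thread file size, -w keeps the written files so that the read pass can reuse them, and -+m distributes the threads across the client nodes listed in the machine file.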
The files written were distributed evenly across the storage targets (STs) in a round-robin fashion to prevent
uneven I/O load on any single SAS connection or ST, just as a user would be expected to balance a workload.
The default stripe count for BeeGFS is four. However, the chunk size and the number of targets per file (stripe
count) can be configured on a per-directory or per-file basis. For all these tests, the BeeGFS chunk size was set
to 1 MB and the stripe count was set to 1, as shown below:
$ beegfs-ctl --setpattern --numtargets=1 --chunksize=1m /mnt/beegfs/benchmark
$ beegfs-ctl --getentryinfo --mount=/mnt/beegfs/ /mnt/beegfs/benchmark/ --verbose
Entry type: directory
EntryID: 1-5E72FAD3-1
ParentID: root
Metadata node: metaA-numa0-1 [ID: 1]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 1M
+ Number of storage targets: desired: 1
+ Storage Pool: 1 (Default)
Inode hash path: 61/4C/1-5E72FAD3-1
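The stripe settings of any individual benchmark file can be verified with the same subcommand; the file name below is only a placeholder:
$ beegfs-ctl --getentryinfo /mnt/beegfs/benchmark/iozone.0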