Concept Guide
Performance Characterization
16 Dell EMC Ready Solutions for HPC BeeGFS High Performance Storage | ID 460
To minimize the effects of caching, the OS caches on the client nodes were dropped between
iterations, and between the write and read tests, by running the command:
# sync && echo 3 > /proc/sys/vm/drop_caches
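Between iterations this command must run on every client node, not just one. A minimal sketch of automating that step is shown below; the host list is hypothetical and assumes passwordless root SSH to the clients, so adjust it to your cluster:

```shell
#!/bin/sh
# Hypothetical client host list; replace with the actual client nodes.
CLIENTS="node001 node002 node003"

for host in $CLIENTS; do
    # Flush dirty pages to disk, then drop the page cache,
    # dentries, and inodes on the remote client.
    ssh "$host" 'sync && echo 3 > /proc/sys/vm/drop_caches'
done
```

Tools such as pdsh can issue the same command to all clients in parallel, which shortens the gap between test phases.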
The default stripe count for BeeGFS is 4. However, the chunk size and the number of targets per file can be
configured on a per-directory basis. For all of these tests, the BeeGFS chunk size was set to 2 MB and the stripe
count to 3, since there are three targets per NUMA zone, as shown below:
$ beegfs-ctl --getentryinfo --mount=/mnt/beegfs /mnt/beegfs/benchmark --verbose
EntryID: 0-5D9BA1BC-1
ParentID: root
Metadata node: node001-numa0-4 [ID: 4]
Stripe pattern details:
+ Type: RAID0
+ Chunksize: 2M
+ Number of storage targets: desired: 3
+ Storage Pool: 1 (Default)
Inode hash path: 7/5E/0-5D9BA1BC-1
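The stripe pattern reported above can be applied to a directory with `beegfs-ctl --setpattern`; a sketch of the command used to configure the benchmark directory, assuming the same mount point and directory name as in the query above, is:

```shell
# Set a 2 MB chunk size and 3 storage targets per file for all new
# files created under the benchmark directory (existing files keep
# their old pattern).
beegfs-ctl --setpattern --chunksize=2m --numtargets=3 \
    --mount=/mnt/beegfs /mnt/beegfs/benchmark
```

Because striping is inherited per directory, re-running `beegfs-ctl --getentryinfo` afterwards should show the new chunk size and target count for files created under that path.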
Figure: BeeGFS N to N sequential I/O performance — IOzone sequential write and read throughput in GB/s versus the number of concurrent threads (1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024), with an 8 TB aggregate file size. Throughput scales with thread count, peaking at roughly 132 GB/s.