longer to complete. In the 10 Gigabit Ethernet case, the client process takes 5-10% longer to
complete. The actual additional time taken by the client process is on the order of one to three
minutes. During the failover period, when the data share is temporarily unavailable, the client
process was observed to be in an uninterruptible sleep state.
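As a hypothetical illustration (the PID below is a placeholder, not from the study), such a blocked process can be identified from a shell on the client by its "D" state in ps output:

    # Show the state of the client process; STAT "D" indicates
    # uninterruptible sleep, typically a process blocked on NFS I/O.
    ps -o pid,stat,wchan:20,comm -p 12345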
Depending on the characteristics of the client process, it can be expected to abort or sleep while
the NFS share is temporarily unavailable during the failover process. Any data that has already
been written to the file system will be available. The cluster configuration includes several design
choices to protect data during a failover scenario. These and other choices were explained in the
section on Design Choices that Impact Functionality and Performance.
For read and write operations during the failover case, data correctness was successfully verified
using the checkstream utility.
Details on the tools used are provided in Appendix C: Benchmarks and Test Tools.
5. Performance Benchmark Results (updated May 2011)
This section presents the results of performance benchmarking on the NSS-HA Solution. Performance
tests were run to evaluate the following common I/O patterns.
- Large sequential reads and writes
- Small random reads and writes
- Metadata operations
These tests were performed for both the 10 Gigabit Ethernet and the IP-over-InfiniBand (IPoIB) cases.
The iozone and mdtest benchmarks were used for this study. Details of the benchmarks are provided
in Appendix C: Benchmarks and Test Tools.
Iozone was used for both the sequential tests and the random tests. The I/O access pattern is N-to-N,
i.e., each thread reads and writes its own file. Iozone was executed in clustered mode, with one
thread launched on each compute node. For the sequential tests, the performance metric was
throughput in MB/s; for the random tests, the metric was I/O operations per second (IOPS).
The large sequential read and write tests were conducted with a request size of 1024 KB.
The total amount of data written was 128 GB to ensure that the NFS server cache was saturated.
The small random tests used a 4 KB record size, since that size corresponds to typical
random I/O workloads. For these tests, each client read and wrote its own 2 GB file.
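As a hedged sketch of how such runs can be launched (the client-list path and thread count below are assumptions for illustration, not the exact commands used in this study), iozone's clustered mode takes a machine file via the -+m option:

    # Sequential write and read test: 1024 KB records, one thread per client
    # listed in the machine file. With 64 threads at 2 GB per thread, the
    # aggregate data written is 128 GB.
    iozone -i 0 -i 1 -r 1024k -s 2g -t 64 -+m /path/to/clientlist -c -e

    # Random read/write test: 4 KB records against a 2 GB file per thread.
    # -O reports results in operations per second (IOPS).
    iozone -i 2 -r 4k -s 2g -t 64 -+m /path/to/clientlist -O

Each line of the machine file names a client host, the working directory on that host, and the path to the iozone executable.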
The metadata tests were performed with the mdtest utility and included file creates, stats, and
removals. While these benchmarks do not cover every I/O pattern, they help characterize the I/O
performance of the NSS-HA solution.
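As a sketch (the process count, hostfile, target directory, and file count here are illustrative assumptions, not the study's exact parameters), mdtest is an MPI program and is typically launched with one process per client:

    # Each MPI process creates, stats, and removes its own set of files
    # under the NFS mount. -n sets the number of items per process,
    # -d the target directory, and -i the number of iterations.
    mpirun -np 64 --hostfile clients mdtest -d /mnt/nfsshare/mdtest -n 1000 -i 3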
Each set of tests was run over a range of client counts to evaluate the scalability of the solution.
The number of simultaneous clients in each test was varied from one to 64.
Tests were performed on the two NSS-HA configurations, Medium and Large. The Medium configuration
provides 40 TB of usable space across two storage arrays, with each storage controller managing one
virtual disk. The Large configuration provides 80 TB of usable space across four storage arrays, with
each storage controller owning two virtual disks.