HP Scalable File Share User Guide G3.
© Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Table of Contents
About This Document.........................................................9
Intended Audience............................................................9
New and Changed Information in This Edition..................................9
Typographic Conventions
3.5.1 Configuring Ethernet and InfiniBand or 10 GigE Interfaces.............32
3.5.2 Creating the /etc/hosts file..........................................33
3.5.3 Configuring pdsh......................................................33
3.5.4 Configuring ntp
7 Known Issues and Workarounds..............................................61
7.1 Server Reboot...........................................................61
7.2 Errors from install2....................................................61
7.3 Application File Locking
List of Figures
1-1 Platform Overview.......................................................15
1-2 Server Pairs............................................................16
A-1 Benchmark Platform
List of Tables
1-1 Supported Configurations................................................13
3-1 Minimum Firmware Versions
About This Document This document provides installation and configuration information for HP Scalable File Share (SFS) G3.2-0. Overviews of installing and configuring the Lustre® File System and MSA Storage Arrays are also included in this document. Pointers to existing documents are provided where possible. Refer to those documents for related information. Intended Audience This document is intended for anyone who installs and uses HP SFS.
\ Indicates the continuation of a code example.
| Separates items in a list of choices.
WARNING A warning calls attention to important information that, if not understood or followed, will result in personal injury or nonrecoverable system problems.
CAUTION A caution calls attention to important information that, if not understood or followed, will result in data loss, data corruption, or damage to hardware or software.
HP StorageWorks Scalable File Share Release Notes Version 2.3 For documentation of previous versions of HP SFS, see: • HP StorageWorks Scalable File Share Client Installation and User Guide Version 2.2 at: http://docs.hp.com/en/8957/HP_StorageWorks_SFS_Client_V2_2-0.pdf Structure of This Document This document is organized as follows: Chapter 1 Provides information about what is included in this product. Chapter 2 Provides information about installing and configuring MSA arrays.
1 What's In This Version 1.1 About This Product HP SFS G3.2-0 uses the Lustre File System on MSA hardware to provide a storage system for standalone servers or compute clusters. Starting with this release, HP SFS servers can be upgraded. If you are upgrading from one version of HP SFS G3 to a more recent version, see the instructions in “Upgrade Installation” (page 35). IMPORTANT: If you are upgrading from HP SFS version 2.3 or older, you must contact your HP SFS 2.
1 CentOS 5.3 is available for download from the HP Software Depot at: http://www.hp.com/go/softwaredepot 1.3.
Figure 1-1 Platform Overview 1.
Figure 1-2 Server Pairs Figure 1-2 shows typical wiring for server pairs. 1.3.1.1 Server Memory Requirements The Lustre Operations Manual section 3.1.6 discusses memory requirements for SFS servers. These should be regarded as minimum memory requirements. Additional memory greatly increases the performance of the system.
IMPORTANT: Memory requirements for HP SFS G3.2-0 have increased from previous versions. Before deciding whether to upgrade to HP SFS G3.2-0, determine whether additional memory is needed for your systems. Insufficient memory can cause poor performance, or can cause the system to become unresponsive or crash. A new default feature called OSS Read Cache in Lustre V1.8 increases performance for read-intensive workloads at the expense of additional memory usage on the OSS servers.
• DL380 G6 server support (required for IB QDR)
• The -c option to the gen_hb_config_files.pl script automatically copies the Heartbeat configuration files to the servers and sets the appropriate permissions on the files. For more information, see “Copying Files” (page 49).
For the new Lustre 1.8 features, see: http://wiki.lustre.org/index.php/Lustre_1.8 1.5.2 Bug Fixes For the Lustre 1.8 changelog (bug fixes), see: http://wiki.lustre.org/index.php/Use:Change_Log_1.8 1.5.
2 Installing and Configuring MSA Arrays This chapter summarizes the installation and configuration steps for MSA2000fc arrays used in HP SFS G3.2-0 systems. 2.1 Installation For detailed instructions on how to set up and install the MSA arrays, see Chapter 4 of the HP StorageWorks 2012fc Modular Smart Array User Guide on the HP website at: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01394283/c01394283.pdf 2.
IMPORTANT: The size of a Lustre MDT or OST is limited to 8 TB. Therefore, any volume created on the MSA2000 must be less than or equal to 8796 GB. If a vdisk is larger than 8796 GB, due to the number and size of disks used, a volume less than or equal to 8796 GB must be created from the vdisk. 2.3.2 Creating New Volumes To create new volumes on a set of MSA2000 arrays, follow these steps: 1. Power on all the MSA2000 shelves. 2. Define an alias for running MSA CLI commands on all the arrays (see the sketch below).
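One plausible definition of this alias, consistent with the trailing "; done" used in the command examples that follow, is a shell alias that opens an ssh loop over the array controllers. The controller host names (msa1, msa2, mdsmsa1) and the manage login are placeholder assumptions:
# alias forallmsas='for msa in msa1 msa2 ; do ssh manage@$msa'
# alias formdsmsas='for msa in mdsmsa1 ; do ssh manage@$msa'
With aliases of this form, forallmsas show disks ; done runs show disks on every array in turn.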
# forallmsas show disks ; done The CLI syntax for specifying disks in enclosures differs based on the controller type used in the array. The following vdisk and volume creation steps are organized by controller types MSA2212fc and MSA2312fc, and provide examples of command-line syntax for specifying drives. This assumes that all arrays in the system are using the same controller type. • MSA2212fc Controller Disks are identified by SCSI ID.
correct assignment of multipath priorities. HP recommends mapping all ports to each volume to facilitate proper hardware failover. a. Create vdisks in the MGS and MDS array. The following example assumes the MGS and MDS do not have attached disk enclosures and creates one vdisk for the controller enclosure. # formdsmsas create vdisk level raid10 disks 1.1-2:1.3-4:1.5-6:1.7-8:1.9-10 assigned-to a spare 1.
1. Enable FTP on the MSA with the CLI command: # set protocols ftp enable
2. Use FTP from a Linux host to retrieve log files: # ftp MSAIPaddress
3. Log in with the manage account and password.
4. ftp> get logs Linuxfilename
The MSA logs and configuration information are saved to the Linuxfilename on your Linux host. You might be asked to provide this information to the HP MSA support team. 2.4.
2. If you are running with a firewall, the sendmail firewall port 25 must be opened by adding the following line to /etc/sysconfig/iptables before the final COMMIT line:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 25 -j ACCEPT
3. Restart the firewall: # service iptables restart
4. 5. 6. Make sure a fully qualified host name for the server node is present in /etc/hosts.
3 Installing and Configuring HP SFS Software on Server Nodes This chapter provides information about installing and configuring HP SFS G3.2-0 software on the Lustre file system server. The following list is an overview of the installation and configuration procedure for file system servers and clients. These steps are explained in detail in the following sections and chapters. 1. Update firmware. 2. Installation Phase 1 a. Choose an installation method.
3.1 Supported Firmware Follow the instructions in the documentation included with each hardware component to ensure that you are running the latest qualified firmware versions. The associated hardware documentation includes instructions for verifying and upgrading the firmware. For the minimum firmware versions supported, see Table 3-1. Upgrade the firmware versions, if necessary. You can download firmware from the HP IT Resource Center on the HP website at: http://www.itrc.hp.
3.2 Installation Requirements A set of HP SFS G3.2-0 file system server nodes should be installed and connected by HP in accordance with the HP SFS G3.2-0 hardware configuration requirements. The file system server nodes use the CentOS 5.3 software as a base. The installation process is driven by the CentOS 5.3 Kickstart process, which is used to ensure that required RPMs from CentOS 5.3 are installed on the system. NOTE: CentOS 5.3 is available for download from the HP Software Depot at: http://www.hp.
The following optional, but recommended, line sets up an Ethernet network interface. More than one Ethernet interface may be set up using additional network lines. The --hostname and --nameserver specifications are needed only in one network line. For example (on one line): ## Template ADD network --bootproto static --device %{prep_ext_nic} \ --ip %{prep_ext_ip} --netmask %{prep_ext_net} --gateway %{prep_ext_gw} \ --hostname %{host_name}.
During the Kickstart post-installation phase, you are prompted to insert the HP SFS G3.2-0 DVD into the DVD drive: Please insert the HP SFS G3.2-0 DVD and enter any key to continue: After you insert the HP SFS G3.2-0 DVD and press Enter, the Kickstart installs the HP SFS G3.2-0 software onto the system in the directory /opt/hp/sfs. Kickstart then runs the /opt/hp/sfs/scripts/install1.sh script to perform the first part of the software installation.
NOTE: USB drives are not scanned before the installer reads the Kickstart file, so you are prompted with a message indicating that the Kickstart file cannot be found. If you are sure that the device you provided is correct, press Enter, and the installation proceeds. If you are not sure which device the drive is mounted on, press Ctrl+Alt+F4 to display USB mount information. Press Ctrl+Alt+F1 to return to the Kickstart file name prompt.
NOTE: The output from Installation Phase 1 is contained in /var/log/postinstall.log. Proceed to “Installation Phase 2”. 3.4 Installation Phase 2 After the Kickstart and install1.sh have been run, the system reboots and you must log in and complete the second phase of the HP SFS G3.2-0 software installation. 3.4.1 Patch Download and Installation Procedure To download and install HP SFS patches, if any, from the ITRC website, follow this procedure: 1. Create a temporary directory for the patch download.
IMPORTANT: This step must be performed for 10 GigE systems only. Do not use this process on InfiniBand systems. If your system uses Mellanox ConnectX HCAs in 10 GigE mode, HP recommends that you upgrade the HCA board firmware before installing the Mellanox 10 GigE driver. If the existing board firmware revision is outdated, you might encounter errors if you upgrade the firmware after the Mellanox 10 GigE drivers are installed.
3.5.2 Creating the /etc/hosts file
Create an /etc/hosts file with the names and IP addresses of all the Ethernet interfaces on each system in the file system cluster, including the following:
• Internal interfaces
• External interface
• iLO interfaces
• InfiniBand or 10 GigE interfaces
• Interfaces to the Fibre Channel switches
• MSA2000 controllers
• InfiniBand switches
• Client nodes (optional)
Propagate this file to all nodes in the file system cluster. 3.5.
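A minimal illustrative fragment of such a file follows. All names and addresses are placeholders (the icnode names match the InfiniBand examples later in this guide); substitute the values for your own cluster.
172.16.1.1    node1        # internal Ethernet interface
172.16.2.1    node1-ilo    # iLO interface
172.31.80.1   icnode1      # InfiniBand IPoIB or 10 GigE interface
172.16.3.1    fcswitch1    # Fibre Channel switch
172.16.4.1    msa1-a       # MSA2000 controller A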
enabling direct user login access to the file system server nodes. In particular, the shadow password information should not be provided through NIS or LDAP. IMPORTANT: HP requires that users do not have direct login access to the file system servers. If support for secondary user groups is not desired, or to avoid the server configuration requirements above, the Lustre file system can be created so that it does not require user credential information.
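Disabling the lookup of user credentials is typically done through the MDT group upcall parameter, which the tunefs.lustre example in Chapter 7 shows set to NONE. A hedged sketch of setting it on an already formatted MDT device (the device path is a placeholder):
# tunefs.lustre --param mdt.group_upcall=NONE /dev/mapper/mpath1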
This import command should be performed by root on each system that installs signed RPM packages. 3.6 Upgrade Installation In some situations you may upgrade an HP SFS system running an older version of HP SFS software to the most recent version of HP SFS software. Upgrades can be as simple as updating a few RPMs, as in the case of some patches from HP SFS G3 support, or as complex as a complete reinstallation of the server node. The upgrade of a major or minor HP SFS release, such as from HP SFS G3.
1. For the first member of the failover pair, stop the Heartbeat service to migrate the Lustre file system components from this node to its failover pair node. # chkconfig heartbeat off # service heartbeat stop At this point, the node is no longer serving the Lustre file system and can be upgraded. The specific procedures will vary depending on the type of upgrade to be performed. 2.
6. For the upgrade from SFS G3.0-0 to G3.1-0 or SFS G3.2-0, you must re-create the Heartbeat configuration files to account for licensing. For the details, see “Configuration Files” (page 47). For other upgrades, the previously saved Heartbeat files can be restored or re-created from the CSV files. IMPORTANT: HP SFS G3.2-0 requires a valid license. For license installation instructions, see Chapter 6 (page 59).
4 Installing and Configuring HP SFS Software on Client Nodes This chapter provides information about installing and configuring HP SFS G3.2-0 software on client nodes running CentOS 5.3, RHEL5U3, SLES10 SP2, and HP XC V4.0. 4.1 Installation Requirements HP SFS G3.2-0 software supports file system clients running CentOS 5.3/RHEL5U3 and SLES10 SP2, as well as the HP XC V4.0 cluster clients. Customers using HP XC V4.0 clients should obtain HP SFS client software and instructions from the HP XC V4.
If the client is using the HP recommended 10 GigE ConnectX cards from Mellanox, the ConnectX EN drivers must be installed. These drivers can be downloaded from www.mellanox.com, or copied from the HP SFS G3.2-0 server software image in the /opt/hp/sfs/ofed/mlnx_en-1.4.1 subdirectory. Copy that software to the client system and install it using the supplied install.sh script. See the included README.txt and release notes as necessary.
NOTE: The network addresses shown above are the InfiniBand IPoIB ib0 interfaces for the HP SFS G3.2-0 Management Server (MGS) node, and the MGS failover node which must be accessible from the client system by being connected to the same InfiniBand fabric and with a compatible IPoIB IP address and netmask. For 10 GigE systems, to automatically mount the Lustre file system after reboot, add the following line to /etc/fstab: 172.31.80.1@tcp:172.31.80.2@tcp:/testfs /testfs lustre _netdev,rw,flock 0 0 6. 7. 8.
5. When successfully completed, the newly built RPMs are available in /usr/src/redhat/RPMS/x86_64. Proceed to “Installation Instructions” (page 40).
4.3.2 SLES10 SP2 Custom Client Build Procedure
Additional RPMs from the SLES10 SP2 DVD may be necessary to build Lustre. These RPMs may include, but are not limited to, the following:
• expect
• gcc
• kernel-source-xxx RPM to go with the installed kernel
1. Install the Lustre source RPM as provided on the HP SFS G3.
5 Using HP SFS Software This chapter provides information about creating, configuring, and using the file system. 5.1 Creating a Lustre File System The first required step is to create the Lustre file system configuration. At the lowest level, this is achieved with the mkfs.lustre command. However, HP recommends the use of the lustre_config command as described in section 6.1.2.3 of the Lustre 1.8 Operations Manual.
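For reference, a minimal mkfs.lustre sketch is shown below. The device paths and NIDs are placeholders modeled on the CSV example later in this section, and lustre_config performs the equivalent formatting for every entry in that CSV file; this is not a complete formatting procedure.
# mkfs.lustre --mgs /dev/mapper/mpath0
# mkfs.lustre --fsname testfs --mdt --mgsnode=icnode1@o2ib0 --mgsnode=icnode2@o2ib0 \
  --failnode=icnode2@o2ib0 /dev/mapper/mpath1
# mkfs.lustre --fsname testfs --ost --mgsnode=icnode1@o2ib0 --mgsnode=icnode2@o2ib0 \
  --failnode=icnode4@o2ib0 /dev/mapper/mpath6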
To see the multipath configuration, use the following command. Output will be similar to the example shown below: # multipath -ll mpath7 (3600c0ff000d547b5b0c95f4801000000) dm-5 HP,MSA2212fc [size=4.1T][features=1 queue_if_no_path][hwhandler=0] \_ round-robin 0 [prio=20][active] \_ 0:0:3:5 sdd 8:48 [active][ready] \_ 1:0:3:5 sdh 8:112 [active][ready] mpath6 (3600c0ff000d548aa1cca5f4801000000) dm-4 HP,MSA2212fc [size=4.
node3,options lnet networks=o2ib0,/dev/mapper/mpath6,/mnt/ost4,ost,testfs,icnode1@o2ib0:icnode2@o2ib0 ,,,,"_netdev,noauto",icnode4@o2ib0 node4,options lnet networks=o2ib0,/dev/mapper/mpath7,/mnt/ost5,ost,testfs,icnode1@o2ib0:icnode2@o2ib0 ,,,,"_netdev,noauto",icnode3@o2ib0 node4,options lnet networks=o2ib0,/dev/mapper/mpath8,/mnt/ost6,ost,testfs,icnode1@o2ib0:icnode2@o2ib0 ,,,,"_netdev,noauto",icnode3@o2ib0 node4,options lnet networks=o2ib0,/dev/mapper/mpath9,/mnt/ost7,ost,testfs,icnode1@o2ib0:icnode2@o2ib0
2. Start the file system manually and test for proper operation before configuring Heartbeat to start the file system. Mount the file system components on the servers: # lustre_start -v -a ./testfs.csv
3. Mount the file system on a client node according to the instructions in Chapter 4 (page 39). # mount /testfs
4. Verify proper file system behavior as described in “Testing Your Configuration” (page 52).
5. After the behavior is verified, unmount the file system on the client: # umount /testfs
6.
3. 4. 5. Heartbeat uses one or more of the network interfaces to send Heartbeat messages using IP multicast. Each failover pair of nodes must have IP multicast connectivity over those interfaces. HP SFS G3.2-0 uses eth0 and ib0. Each node of a failover pair must have mount-points for all the Lustre servers that might be run on that node; both the ones it is primarily responsible for and those which might fail over to it. Ensure that all the mount-points are present on all nodes.
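One hedged way to create the mount-points on both nodes of a pair, using the pdsh setup from Chapter 3 and the mount-point names from the CSV example in the previous section (your node names and mount-points will differ):
# pdsh -w node[3-4] mkdir -p /mnt/ost4 /mnt/ost5 /mnt/ost6 /mnt/ost7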
You can generate the simple files ha.cf, haresources, and authkeys by hand if necessary. One set of ha.cf with haresources is needed for each failover pair. A single authkeys is suitable for all failover pairs. ha.cf The /etc/ha.d/ha.cf file for the example configuration is shown below: use_logd yes deadtime 10 initdead 60 mcast eth0 239.0.0.3 694 1 0 mcast ib0 239.0.0.3 694 1 0 node node5 node node6 stonith_host * external/riloe node5 node5_ilo_ipaddress ilo_login ilo_password 1 2.
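The authkeys file is short. A hedged example follows; the key string is a placeholder, must be identical on both nodes of a failover pair, and the file must be readable only by root:
auth 1
1 sha1 ReplaceWithYourSecretKey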
The haresources2cib.py script is executed by gen_hb_config_files.pl. 5.2.3.2 Editing cib.xml The haresources2cib.py script places a number of default values in the cib.xml file that are unsuitable for HP SFS G3.2-0. The changes to the default action timeout and the stonith enabled values are incorporated by gen_hb_config_files.pl. • By default, a server fails back to the primary node for that server when the primary node returns from a failure.
• The .sig and .last files should be removed from /var/lib/heartbeat/crm when a new cib.xml is copied there. Otherwise, Heartbeat ignores the new cib.xml and uses the last one.
• The /var/lib/heartbeat/crm/cib.xml file owner should be set to hacluster and the group access permission should be set to haclient.
• Heartbeat writes cib.xml to add status information. If cib.
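A hedged sketch of these steps when installing a new cib.xml (the .sig and .last file names are inferred from the description above; verify them against your Heartbeat version):
# cp cib.xml /var/lib/heartbeat/crm/cib.xml
# rm -f /var/lib/heartbeat/crm/cib.xml.sig /var/lib/heartbeat/crm/cib.xml.last
# chown hacluster:haclient /var/lib/heartbeat/crm/cib.xml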
NOTE: Changing the authentication string in /etc/ha.d/authkeys causes Heartbeat to report numerous warnings instead of error messages. atlas1 heartbeat: [2420]: WARN: string2msg_ll: node [world1] failed authentication Updating the mcast addresses is the only way to fix the problem. 5.3 Starting the File System After the file system has been created, it can be started.
This forces you to manually start the Heartbeat service and the file system after a file system server node is rebooted. 5.5 Monitoring Failover Pairs Use the crm_mon command to monitor resources in a failover pair. In the following sample crm_mon output, there are two nodes that are Lustre OSSs, and eight OSTs, four for each node. ============ Last updated: Thu Sep 18 16:00:40 2008 Current DC: n4 (0236b688-3bb7-458a-839b-c19a69d75afa) 2 Nodes configured. 10 Resources configured.
5.7.1 Examining and Troubleshooting If your file system is not operating properly, you can refer to information in the Lustre 1.8 Operations Manual, PART III Lustre Tuning, Monitoring and Troubleshooting. Many important commands for file system operation and analysis are described in the Part V Reference section, including lctl, lfs, tunefs.lustre, and debugfs. Some of the most useful diagnostic and troubleshooting commands are also briefly described below. 5.7.1.
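A few hedged examples of quick checks with these tools (standard Lustre 1.8 usage; icnode1@o2ib0 is a placeholder NID):
# lctl dl
Lists the configured Lustre devices and their state on a server or client.
# lctl ping icnode1@o2ib0
Verifies LNET connectivity to a server NID.
# lfs df -h
Shows space usage for each MDT and OST from a client.
# lfs check servers
Checks the status of the MDS and OSS connections from a client.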
This displays INACTIVE if no recovery is in progress.
The problem is in line #08. The MDT is associated with 10.129.10.1@o2ib, but in this example that IP address belongs to the MGS node, not the MDT node, so the MDT will never mount on the MDT node. To fix the problem, use the following procedure:
IMPORTANT: The following steps must be performed in the exact order in which they appear below.
1. Unmount HP SFS from all client nodes. # umount /testfs
2. Stop Heartbeat on HP SFS server nodes. a.
After a few minutes, the MGS mount appears as active in df output. 16. Boot the MDS node. 17. Start the Heartbeat service on the MDS node: # service heartbeat start After a few minutes, the MDS mount appears as active in df output. 18. Start Heartbeat on the OSS nodes. # pdsh -w oss[1-n] service heartbeat start 19. Run the following command on all nodes: # chkconfig heartbeat on 5.7.1.
common system performance data such as CPU, disk, and network traffic, it also supports reporting of both Lustre and InfiniBand statistics. Read/write performance counters can be reported in terms of both bytes-per-second and operations-per-second. For more information about the collectl utility, see http://collectl.sourceforge.net/Documentation.html. Choose the Getting Started section for information specific to Lustre.
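A hedged example of invoking collectl with its Lustre subsystem enabled (the -s subsystem letters shown follow the collectl documentation; confirm them against your installed version):
# collectl -sl
Reports Lustre summary statistics at each interval.
# collectl -scdln
Adds CPU, disk, and network summary data alongside the Lustre counters.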
6 Licensing A valid license is required for normal operation of HP SFS G3.2-0. HP SFS G3.2-0 systems are preconfigured with the correct license file at the factory, making licensing transparent for most HP SFS G3.2-0 users. No further action is necessary if your system is preconfigured with a license, or if you have an installed system. However, adding a license to an existing system is required when upgrading a G3.0-0 server to G3.2-0. NOTE: HP SFS is licensed by storage capacity.
[root@atlas1] grep "SFS License" /var/log/messages Feb 9 17:04:08 atlas1 SfsLicenseAgent: Error: No SFS License file found. Check /var/flexlm/license.lic. The cluster monitoring command also outputs an error like the following; note the "Failed actions" at the end. hpcsfsd1:root> crm_mon -1 ...
7 Known Issues and Workarounds
The following items are known issues and workarounds.
7.1 Server Reboot
After the server reboots, it checks the file system and reboots again, displaying the following message:
/boot: check forced
You can ignore this message.
7.2 Errors from install2
You might receive the following errors when running install2.
b. NOTE: Use the appropriate device in place of /dev/mapper/mpath?. For example, if the --dryrun command returned: Parameters: mgsnode=172.31.80.1@o2ib mgsnode=172.31.80.2@o2ib failover.node=172.31.80.1@o2ib Run: tunefs.lustre --erase-params --param="mgsnode=172.31.80.1@o2ib mgsnode=172.31.80.2@o2ib failover.node=172.31.80.1@o2ib mdt.group_upcall=NONE" --writeconf /dev/mapper/mpath? 4. Manually mount mgs on the MGS node: # mount /mnt/mgs 5.
A HP SFS G3 Performance A.1 Benchmark Platform HP SFS G3, based on Lustre File System Software, is designed to provide the performance and scalability needed for very large high-performance computing clusters. Performance data in the first part of this appendix (sections A-1 through A-6) is based on HP SFS G3.0-0. Performance of HP SFS G3.1-0 and HP SFS G3.2-0 is expected to be comparable to HP SFS G3.0-0.
Figure A-2 shows more detail about the storage configuration. The storage comprised a number of HP MSA2212fc arrays. Each array had a redundant pair of RAID controllers with mirrored caches supporting failover. Each MSA2212fc had 12 disks in the primary enclosure, and a second JBOD shelf with 12 more disks daisy-chained using SAS. Figure A-2 Storage Configuration Each shelf of 12 disks was configured as a RAID6 vdisk (9+2+spare), presented as a single volume to Linux, and then as a single OST by Lustre.
Figure A-3 Single Stream Throughput For a file written on a single OST (a single RAID volume), throughput is in the neighborhood of 200 MB/s. As the stripe count is increased, spreading the load over more OSTs, throughput increases. Single stream writes top out above 400 MB/s and reads exceed 700 MB/s. Figure A-4 compares write performance in three cases. First is a single process writing to N OSTs, as shown in the previous figure. Second is N processes each writing to a different OST.
filled with the new data. At the point (14:10:14 in the graph) where the amount of data reached the cache limit imposed by Lustre (12 GB), throughput dropped by about a third. NOTE: This limit is defined by the Lustre parameter max_cached_mb. It defaults to 75% of memory and can be changed with the lctl utility. Figure A-5 Writes Slow When Cache Fills Because cache effects at the start of a test are common, it is important to understand what this graph shows and what it does not.
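A hedged example of inspecting and changing this limit with lctl on a client (the llite parameter path is the usual location in Lustre 1.8; the value is only an illustration):
# lctl get_param llite.*.max_cached_mb
# lctl set_param llite.*.max_cached_mb=8192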
Figure A-6 Multi-Client Throughput Scaling In general, Lustre scales quite well with additional OSS servers if the workload is evenly distributed over the OSTs, and the load on the metadata server remains reasonable. Neither the stripe size nor the I/O size had much effect on throughput when each client wrote to or read from its own OST. Changing the stripe count for each file did have an effect as shown in Figure A-7.
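The stripe count for new files is typically set per directory with lfs before a test run. A hedged example (the directory name is a placeholder):
# lfs setstripe -c 4 /testfs/run1
New files created under /testfs/run1 are striped across four OSTs.
# lfs getstripe /testfs/run1/file1
Shows the stripe layout actually applied to a file.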
A.4 One Shared File Frequently in HPC clusters, a number of clients share one file either for read or for write. For example, each of N clients could write 1/N'th of a large file as a contiguous segment. Throughput in such a case depends on the interaction of several parameters including the number of clients, number of OSTs, the stripe size, and the I/O size.
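A workload of this kind is commonly generated with an MPI-coordinated benchmark such as IOR, where all tasks write segments of a single shared file. A hedged sketch follows; the launcher options depend on your MPI installation, and the block and transfer sizes are placeholders:
# mpirun -np 16 ./IOR -a POSIX -b 4g -t 1m -o /testfs/shared_file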
Another way to measure throughput is to only average over the time while all the clients are active. This is represented by the taller, narrower box in Figure A-8. Throughput calculated this way shows the system's capability, and the stragglers are ignored. This alternate calculation method is sometimes called "stonewalling". It is accomplished in a number of ways. The test run is stopped as soon as the fastest client finishes. (IOzone does this by default.
For workloads that require a lot of disk head movement relative to the amount of data moved, SAS disk drives provide a significant performance benefit. Random writes present additional complications beyond those involved in random reads. These additional complications are related to Lustre locking, and the type of RAID used. Small random writes to a RAID6 volume requires a read-modify-write sequence to update a portion of a RAID stripe and compute a new parity block.
Each disk shelf in the platform used for deep shelf testing was configured in the same manner as described in “Benchmark Platform” (page 63). The arrangement of the shelves and controllers was modified as shown in Figure A-10. A.7.2 Single Stream Throughput For a single stream, striping improves performance immediately when applied across the available OSSs, but additional striping does not provide further benefit as shown in Figure A-11.
Figure A-12 Client Count Versus Total Throughput (MB/s) A.7.3 Throughput Scaling A single file accessed by eight clients benefits from increased striping up to the number of available OSTs. Figure A-13 Stripe Count Versus Total Throughput (MB/s) – Single File A.8 10 GigE Performance This section describes the performance characteristics of the HP SFS system when the clients are connected with 10 GigE network links. Tests were run with HP SFS G3.2-0 and HP-MPI V2.3.
The OSTs were populated with 146 GB SAS drives. Stripe placement was controlled by default operation of the HP SFS file system software. Specific control of striping can affect performance. Due to variability in configuration, hardware, and software versions, it is not valid to directly compare the results indicated in this section with those indicated in other sections. A.8.1 Benchmark Platform The performance data is based on MSA2212 controllers for the HP SFS component.
network buffering parameters were set as described in the documentation for the configured network controller. A.8.2 Single Stream Throughput Throughput is limited by the characteristics of the single client. In this particular case, performance with more than one stripe is mainly limited by the network connection. Figure A-15 shows the effect of striping on the operation of a single client. Read performance is adversely affected by striping across OSSs due to contention at the inbound client port.
Figure A-16 Client Count Versus Total Throughput (MB/s) A.8.3 Throughput Scaling As in “Throughput Scaling” (page 66), a set of 16 clients wrote or read 16 files of 16 GB each. In this case, the significant difference is the throughput limitation imposed by architecture of the interconnect. As striping is increased, the communication channels are better utilized due to the spread of the traffic among the links and the consequent improvement of the utilization of the switch network buffers.