HP StorageWorks Clustered File System 3.1.
Legal and notice information

Copyright © 1999-2006 PolyServe, Inc. Portions Copyright © 2006 Hewlett-Packard Development Company, L.P. Neither PolyServe, Inc. nor Hewlett-Packard Company makes any warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
Contents

1 HP Technical Support
    HP Storage Web Site
    HP NAS Services Web Site

2 Configuration Information
    Hardware Configuration Limits
    Supported HBA Drivers
    Server Requirements

4 Install HP Clustered File System
    Contents of the Clustered File System Distribution
    Initial Pre-Planning Steps
    Installation Procedure
        Installation Checklist
        1. Install a Supported Operating System and Kernel
        2. Configure the Storage Array

B Configure the Cluster from the Command Line
    Run mxconfig
    Verify the Fencing Configuration
    Complete the Installation
        Start Clustered File System on One Server
        Start the Management Console
1 HP Technical Support

Telephone numbers for worldwide technical support are listed on the following HP web site: http://www.hp.com/support. From this web site, select the country of origin. For example, the North American technical support number is 800-633-3600.

NOTE: For continuous quality improvement, calls may be recorded or monitored.
HP NAS Services Web Site

The HP NAS Services site allows you to choose from convenient HP Care Pack Services packages or implement a custom support solution delivered by HP ProLiant Storage Server specialists and/or our certified service partners. For more information, see us at http://www.hp.com/hps/storage/ns_nas.html.
2 Configuration Information

HP is continually expanding its supported hardware and operating system configurations. For the latest information, check the QuickSpecs for the HP Clustered File System on the HP web site: http://www.hp.com.

Hardware Configuration Limits

The configuration limits for hardware used in a Clustered File System configuration are as follows.

Hardware                    Configuration Limit
Servers                     Two to 16 servers.
Network Interface Cards     Up to four network interfaces per server have been tested.
Supported HBA Drivers

The Host Bus Adapter vendors frequently release HBA drivers for Linux. HP has chosen to narrow the number of versions that it will validate with Clustered File System. Other drivers will work, but may not produce optimal results during failure or failover situations. HP will continue to evaluate newer drivers with Clustered File System as they become available from the vendors.
• 30 MB of disk space on /var for log and runtime files.
• Ethernet 10/100/1000 port. All servers in the cluster must be on the same subnet.

Management Console Requirements

Servers running the Management Console must have a windowing environment installed. The Management Console requires that the display be set to a minimum of 256 colors.
Cluster SAN Configuration Guidelines

Following are guidelines for configuring the cluster SAN to be used with Clustered File System:

• The FibreChannel fabric used for a Clustered File System cluster can be shared with other HP Clustered File System clusters or with noncluster servers and devices.
3 Configuration Best Practices

HP Clustered File System is supported on HP ProLiant servers. The recommended and supported configurations can be found in the QuickSpecs documents for HP Clustered File System on the HP web site: http://www.hp.com/go/nas. When using HP Clustered File System on ProLiant servers, follow the best practices and caveats described in this chapter for optimum results.
PSP Component                           Best Practice        Comment
FCA2214 FC HBA Driver (QLogic Driver)   DO NOT INSTALL       Use the QLogic driver that comes with HP Clustered File System (see below)
NCxxxx Gigabit Ethernet NIC Driver      Highly Recommended   Optimum Performance (see below)
802.
QLogic HBA Driver

HP performs all Clustered File System testing with QLogic HBAs (although Emulex HBAs are also supported). This section provides insights on interacting with the QLogic driver.

Load Balancing and Failover

The QLogic driver includes a variety of load balancing options. The 8.00.02 driver base has only static load balancing, while the 8.01.xx drivers include several dynamic load balancing schemes.
• Ensure that no more than the supported number of paths to a LUN exist.
• Use zoning to restrict the number of paths a given node sees. HP strongly recommends the use of zoning not only to address this issue, but in general as a way to manage the LUN presentation to nodes in a cluster.

Setting Driver Options

Note that QLogic HBA driver options are configured through the /etc/opt/hpcfs/fc_pcitable file.
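For reference, an entry in fc_pcitable that passes options to the driver has the following general form. This is a sketch only: the vendor ID, device ID, driver name, and option string shown are taken from the failover example in Appendix C, and the values that apply to your system depend on your HBA model and driver version.

0x1077 0x0 qla2xxx qla2xxx-8.00.00 "ql2xfailover=1" QLogic Abstraction Layer

The quoted field holds the options passed to insmod when the driver is loaded; if no options are required, the field contains an empty pair of double quotes ("").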
EVA Snapshots - SSSU Tool

HP Clustered Gateway directly integrates with HP Snapshots (Business Copy). The HP Clustered File System Installation guide discusses this in chapter 3. You must have the EVA management tool ‘sssu’ installed to use this feature. To locate this utility, go to www.hp.com, select Software & Driver Downloads, then search for “HP StorageWorks Command View EVA Software.”
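Before relying on the snapshot integration, it can be worth confirming that the utility is reachable on each server. This is only a sketch; it assumes sssu was installed somewhere on the root user's PATH, which depends on how the media kit was installed.

# which sssu

If the command is not found, install or relocate the Command View EVA software as described above.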
4 Install HP Clustered File System

This chapter describes how to perform a new installation of HP Clustered File System on servers running SuSE Linux Enterprise Server 9.

Supported Operating Systems

The supported operating systems and kernels are as follows. See the HP Web site for any updates to the list of supported kernels.

Operating System                                     Kernels
SuSE Linux Enterprise Server 9 with Service Pack 2   2.6.5-7.191
                                                     2.6.5-7.
• mxconsole-3.1.1..msi. The Management Console and mx utility in Microsoft Windows format.
• pmxs-sles9-support-3.1.1-..rpm. Clustered File System support files for the supported SLES9 kernels. The files include sample configuration files for building the kernel from source. There are two versions of the Support RPM: one for 32-bit SLES9 and one for 64-bit.
  – The iLO card
• Hostnames and DNS servers
• NTP Server
• SAN Storage

Installation Procedure

Before starting the installation, be sure to review the configuration and hardware requirements specified in Chapter 2.

Installation Checklist

Clustered File System must be installed on each server in your cluster. Complete the following steps, which are described in detail following this checklist.

Action                                     Description
Install the operating system and kernel.
Action                                     Description
Run the mxcheck utility on each server.    This utility verifies that the server’s configuration meets the requirements for Clustered File System.
Set a Clustered File System parameter.     This step is needed only if your SAN configuration includes a FalconStor device.
Configure the cluster.                     Connect to the console on one node and configure the cluster via the Management Console.
Complete the configuration.
These settings are standard QLogic driver settings but are found in the following file:

/etc/opt/hpcfs/fc_pcitable

Standard QLogic config files and tools (for example, SAN Surfer) cannot be used for driver settings. Instead, the above file is the location for this purpose. If the QLogic failover driver setting is disabled, it is possible to use the built-in mxmpio command to configure multipathing settings.
To configure FibreChannel switches, complete the following tasks:

• Enable server access to the SAN. Each server that will be in the cluster must be able to see the disks in the SAN. You may need to enable server ports on the FC switches or to change the zoning configuration to give servers the necessary access to the SAN.
• Modify the SNMP setup. Make the following changes:
  – Enable access to the SNMP agent from each server that will be in the cluster.
If your disk array software allows you to create LUNs, we recommend that you create three LUNs for the membership partitions. Each LUN should be a minimum of 8 MB in size.

If you are unable to create LUNs on your disk array, you can use regular disk partitions for the membership partitions.

You must install a partition table (using fdisk or a similar tool) on the LUNs that will be used as membership partitions.
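For illustration only, the following shows one way to put a small partition on a LUN with fdisk. The device name /dev/sdc is hypothetical; substitute the device that corresponds to the LUN on your system.

# fdisk /dev/sdc

Within fdisk, the usual sequence is: n (new partition), p (primary), 1 (partition number), accept the default starting cylinder, enter a size such as +8M, and then w to write the partition table and exit. Repeat for each LUN that will hold a membership partition.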
# rpm -i ///pmxs-3.1.1..rpm

Also install the Management Console and mx utility:

# rpm -i ///mxconsole-3.1.0.i386.rpm

8. Install the Quota Tools RPM (Optional)

The quota tools RPM includes several Linux quota commands that have been modified to work on PSFS filesystems. You can use the modified commands in place of the commands provided with the Linux distribution.
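If you want to confirm that the packages installed cleanly before continuing, a simple check (a sketch; adjust the pattern if your package names differ) is to query the RPM database:

# rpm -qa | grep -E 'pmxs|mxconsole'

The output should list the Clustered File System and Management Console packages with the versions you just installed.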
Next, run the following command to see a list of devices:

# cat /proc/partitions

Review the output to verify that the SAN is configured as you expect.

11. Run the mxcheck Utility

This utility should be run on each server. It verifies that the server’s configuration meets the requirements for running Clustered File System.
#psd_round2_delay -1

On the last line, remove the # sign preceding psd_round2_delay and replace -1 with the number of seconds to wait before the psd driver retries the I/O. The recommended value is 45 seconds.

psd_round2_delay 45

13. Configure the Cluster from the Management Console

A windowing environment must be installed on the server where you are running the Management Console.
NOTE: If you click the Login button, an error message appears and you will be asked whether you want to set up the cluster. Click Yes to configure the cluster.

The Configure Cluster window then appears. You will need to specify information on the tabs in this order: General Settings, Fencing, Storage Configuration, Cluster Wide Configuration.

General Settings

This tab asks for general information needed for cluster operations.
1. License. Clustered File System can be used with either a temporary or a permanent license. The license is provided in a separate license file. HP provides a 90-day trial license file in the /Licenses directory of the installation CD. (This file must be present on the server that you are using to connect to the Management Console.) To install the license, click the Change License File button.
2. Secret Network Key. This password is required. It provides additional security for network communications among the cluster servers. To set this key, click the Set Secret Network Key button. You can enter anything you want for this password.

3. Administrator Password. You will need to be Clustered File System user admin to configure the cluster. By default, the password for this user is set to admin.
When you have completed the fields on the General Settings tab, go to the Fencing tab.

Fencing

When certain problems occur on a server (for example, hardware problems or the loss of cluster network communications), and the server ceases to effectively coordinate and communicate with other servers in the cluster, Clustered File System must remove the server’s access to filesystems to preserve data integrity. This step is called fencing.
There are two fencing methods:

• FibreChannel Switch-based fencing. When a server needs to be fenced, Clustered File System will disable the server’s access in the FibreChannel fabric. The server must be rebooted to regain access to the SAN. If you select this method, next go to the Storage Configuration tab and configure the FC switches. (See “Storage Configuration” on page 29.)
• Web Management-Based Fencing via Server Reset/Shutdown.
1. Remote Management Controller Vendor. Select Hewlett-Packard as the vendor for your Remote Management Controllers. By default, the item “Vendor and type selection apply to all controllers for the cluster” is checked. Remove the checkmark from this item if your Remote Management Controllers are from different vendors or if, in the case of IBM Remote Management Controllers, some are associated with IBM BladeCenter servers and others are not.
2. Remote Management Controller ID. Specify how Clustered File System should identify the Remote Management Controller (iLO) associated with each server. Use one of the following methods.

– Select “Cluster-wide Pattern” and then specify the common naming scheme that you are using for the Remote Management Controllers (either a hostname suffix or an IP address delta).
When the Fencing tab is complete, go to the Storage Configuration tab.

Storage Configuration

The Storage Configuration tab allows you to identify the FibreChannel switches included in the cluster, to set the SNMP community string for Clustered File System, and to select membership partitions, which Clustered File System uses to control access to the SAN.

1. SAN Switches.
System Management Console can display the switch ports used by the SAN. (The preceding window shows the text that appears for FibreChannel switch-based fencing.)

To configure SAN switches, you will need to specify the hostnames or IP addresses of the FibreChannel switches that are directly connected to the nodes in the cluster. Click Add, and then specify the hostname or IP address of the first FC switch.
To create a membership partition, click Add. The Add Membership Partition window then lists all of the disks or LUNs that it can access. Select the disk or LUN where you want to place the first membership partition. All of the available partitions on that disk or LUN then appear in the bottom of the window. Select one of these partitions and click Add. (4 MB is adequate for a membership partition.)
cluster in order for HP EVA snapshots to work. To locate this utility, go to www.hp.com, select Software & Driver Downloads, and search for “HP StorageWorks Command View EVA Software.” Choose the latest media kit version, and select the correct version for your OS. A version of 4.1 or newer must be used. The current media kit version at the time of this document is 5.0.
The configuration is then installed on the server that you are using to connect to the Management Console. You will then be asked whether you want to start the cluster on that server. If you configured Web Management-Based Fencing, answer No. Otherwise, answer Yes. Go to the Cluster Wide Configuration tab.

Cluster Wide Configuration

This tab is used to export the cluster configuration to the other servers that will be in the cluster.
Repeat this procedure to add the remaining servers to the Address column.

2. Export the configuration. Click Select All to select all of the servers in the Address column. Then click Export. The Last Operation Progress column will display status messages as the configuration is exported to each server. If you are using Web Management-based fencing, you may be asked for additional information about each server.
14. Configure HP CFS for Public versus Private Network

After a cluster has been formed, configure it to correctly use the private network for intra-cluster network communication. The HP CFS software does not identify which network connection is the “private” network. You must indicate which network is considered to be the private one. Failure to perform this step could allow the public network to be selected, which could adversely affect performance.
Test the Fencing Configuration

The Test Fencing button on the Cluster Wide Configuration tab can be used to verify that the fencing configuration is correct for each server. This feature is particularly useful for Web Management-Based Fencing via Server Reset/Shutdown.

On the Cluster Wide Configuration tab, select one or more servers to test and click the Test Fencing button. (You cannot select the server being used to connect to the Management Console.)
Red Hat Enterprise Linux AS/ES 2.1, Red Hat Enterprise Linux AS/ES 3.0, Red Hat Enterprise Linux AS/ES 4.0.

• On Red Hat systems, the “compat-libstdc++” package must be installed (a quick check is sketched below).
• A windowing environment such as the X Window System must be installed and configured.
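The package name varies across Red Hat releases (some ship a versioned package such as compat-libstdc++-33), so a loose query is a reasonable sanity check. This is a sketch only:

# rpm -qa | grep compat-libstdc++

If nothing is listed, install the compatibility C++ runtime from your Red Hat distribution before running the Management Console.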
5 Install FS Option

After Clustered File System has been installed, you can install the HP Clustered File System FS Option for Linux (FS Option). This chapter describes how to install the FS Option. This product is supported only on SLES9.

Install FS Option

Before FS Option is installed, you will need to rebuild the kernel as described in Appendix A. After the kernel is rebuilt and Clustered File System is installed, you can install FS Option. To install FS Option, complete the following steps.
# /etc/init.d/pmxs start

The FS Option installation copies the existing /etc/exports file to /etc/exports.pre_mxfs and then writes over the original file. You can later convert this file into an Export Group.

Uninstall FS Option

To uninstall FS Option, run the following commands on each server:

# rpm -e mxfs
# rpm -e mxfs-support
# rpm -e mxfs-patches

NOTE: The server will need to be rebooted with the kernel that does not include FS Option. If an /etc/exports.
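If you later want to return to the NFS exports that were in place before FS Option was installed, a minimal sketch is shown below. It assumes the /etc/exports.pre_mxfs copy saved at installation time is still present and that no exports added since then need to be preserved.

# cp /etc/exports.pre_mxfs /etc/exports
# exportfs -ra

The second command tells the NFS server to re-read the restored exports file.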
A Install the SLES9 Operating System and Kernel

Install the Operating System and Kernel

Before installing Clustered File System, you will need to perform the following steps:

1. Install SuSE Linux Enterprise Server Version 9.
2. Modify system files.
3. Install a supported kernel.

1. Install SuSE Linux Enterprise Server Version 9

SuSE Linux Enterprise Server Version 9 must be installed on each server that will be in the cluster.
• Clustered File System requires that the following packages be installed. (The packages are included in the “default” server installation.) A quick way to check the installed versions is sketched after this list.
  – glibc 2.3.2 or higher
  – net-snmp 5.0.8 or higher
  – openssl 0.9.7a or higher
  – e2fsprogs 1.32 or higher
  – bind-utils 9.2.2 or higher
• If you will be building the kernel from source, be sure to install the gcc compiler. (If you want to install the FS Option, you will need to build the kernel from source.)
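For example, the following query prints the installed version of each required package so you can compare it against the minimums above (a sketch; any package reported as “not installed” must be added from the SLES9 media):

# rpm -q glibc net-snmp openssl e2fsprogs bind-utils gcc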
Normal operation of the cluster depends on a reliable network hostname resolution service. If the hostname lookup facility becomes unreliable, this can cause reliability problems for the running cluster. Therefore, you should ensure that your hostname lookup services are configured to provide highly reliable lookups, particularly for the hostnames that are critical to cluster operation.
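One common way to keep the critical lookups robust is to list every cluster node in /etc/hosts on each server, in addition to maintaining reliable DNS. This is a sketch only; the addresses and hostnames are hypothetical.

192.168.10.11   node1.example.com   node1
192.168.10.12   node2.example.com   node2
192.168.10.13   node3.example.com   node3

Whatever mechanism you use, make sure the names the cluster relies on resolve identically on every node.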
3. Install a Supported Kernel

Clustered File System supports the SLES9 2.6.5-7.191 kernel. To check your current kernel version, run the following command:

# uname -rv

If you are currently using an unsupported kernel version, you will need to upgrade it. You can use a binary kernel from SuSE or you can build a new kernel from the SuSE kernel source RPM.
boot time, allowing it to load its own modules. If this behavior is not desired, the blacklist file can be edited; however, doing this is not recommended.

1. Download and Install the Kernel Source

The appropriate kernel source can be downloaded from the SuSE Web site. To install the kernel, use the instructions provided by SuSE.

2.
# rpm -i ///mxfs-patches-3.1.x.i386.rpm

4. Compile the Kernel and Reboot

Following is a suggested procedure that you can use as a guide for building the kernel. Modify this procedure as necessary for your local circumstances.

NOTE: If the servers have identical hardware configurations, you can create the kernel on one server and then copy it to the other servers.

1.
NOTE: Do not change the CONFIG_CFGNAME parameter in your configuration file. Clustered File System requires that the default value be used.

Also, if you are using a configuration file that was not supplied in the Support RPM, be sure that the following parameter is set to yes. This parameter enables the Linux SCSI subsystem to probe for LUNs on SAN devices.

CONFIG_SCSI_MULTI_LUN=y

3.
B Configure the Cluster from the Command Line

This appendix describes how to use the mxconfig utility to configure the cluster from the command line. The utility allows you to upgrade the license file, to select a cluster password and a Network Authentication Secret password, to select a fencing method, to specify the FibreChannel switches connected to the nodes in the cluster, and to select the LUNs or disk partitions to be used as Clustered File System membership partitions.
Welcome to mxconfig

Welcome to mxconfig. You may abort mxconfig at any time by pressing the Escape key.

< OK >

On windows that require input, use the Tab key to move between OK and Cancel or between Yes and No. Press the Enter key to go to the next window. Press the Escape key to abort the mxconfig utility.
Console as user admin to configure the cluster.) The password does not display on the window as you type it.

Enter cluster Password

This password is used for authenticating the UI.

Enter cluster password: (Will not echo password)

< OK >

You will next be asked to re-enter the password.
Select the Cluster Administrative Traffic Protocol

Specify either multicast or unicast mode. Multicast mode is recommended; however, if your network configuration does not allow multicast traffic through the network, you will need to use unicast mode.
servers.)

Cluster Fence Module Selection

Select a fencing module from the following list

fcsan     Fibrechannel switch port manipulation (recommended)
webmgmt   Web Management Based Fencing

< OK >

If you selected FibreChannel switch port manipulation, next go to “Configure FibreChannel Switches” on page 58.
Remote Management Controller Vendor

Select the vendor of the remote management controller for this server

Dell    Dell ERA or DRAC III
HP      Hewlett Packard ILO
IBM     IBM MM, RSA, or RSA II
IPMI    IPMI v1.5 (IPMI over LAN)

< OK >

CAUTION: You will next be asked whether all servers in the cluster are from the same vendor. If you will be using IPMI as the fencing method, you should be aware that only one IPMI session can be active at a time.
Remote Management Controller Configuration

Select a method for HP Clustered File System to determine the remote management controller associated with each server

Hostname-Suffix   controller name = server name + common suffix
IP-Delta          controller IP address = server IP address + delta
None              Enter each controller hostname/address individually

< OK >

Enter the configuration information in accordance with the method that you selected.
IP Delta. Specify the delta to add to each server’s IP address to determine the IP addresses of the associated Remote Management Controllers. For example, if your servers are 1.255.200.12 and 1.255.200.15 and their Remote Management Controllers are 1.255.201.112 and 1.255.201.115, enter 0.0.1.100 as the delta.
Fencing Action. When a server needs to be restricted from the SAN, Clustered File System can either power-cycle the server or shut it down. Specify the method that you want to use on the following window.
Remote Management Controller Access

Enter remote management controller password: (Will not echo password)

< OK >

You are now asked whether the same username and password are used by all of the Remote Management Controllers (iLO) in the cluster.
SAN Configuration. You will next be asked whether you want to configure the SAN switches in Clustered File System. This step is optional for Web Management Based Fencing configurations; however, if the switches are configured, the Management Console can display the switch ports used by the SAN. If the SAN switches have not previously been configured in Clustered File System, you will see the following window.
Configure FibreChannel Switches

On the SAN Configuration window, specify the hostnames or IP addresses of the FC switches that are directly connected to the nodes in the cluster. (If you are using Web Management-Based Fencing, you will see this window only if you chose to configure or reconfigure SAN switches.)

SAN Configuration

Enter hostnames of SAN switches to configure, separated by whitespace.
If you selected Yes on the Specify SNMP Community String window, type the appropriate string on the Enter SNMP Community String window that appears next.
• LUNs. You will need to create a partition on each LUN. Answer yes on the Create Membership Partitions window and then use fdisk to create a partition on each LUN. These partitions can then be used for the membership partitions.
• Regular disk partitions.
NOTE: When you use fdisk, the modified partition table is visible only on the server where you made the changes. When you start Clustered File System, disks or LUNs with membership partitions are imported into the cluster automatically. The revised partition table will then be visible to all of the servers.

Create Membership Partitions

The Membership Partition Setup window asks you to select a disk where you want to create a membership partition.
Membership Partition Setup

No partitions currently selected. Select one or three disk partitions to use as membership partitions.

Select from the following partitions found on disk UID:20:00:00:20:37:e4:f8:78::0

[ ] Partition:1   Path:/dev/sdc1
[ ] Partition:2   Path:/dev/sdc2
[ ] Partition:3   Path:/dev/sdc3
[ ] Partition:4   Path:/dev/sdc4

< OK >

The partition you selected is displayed on the Membership Partition Setup window.
configurations, to add a new snapshot configuration, and to edit or remove existing snapshot configurations.
To add a new snapshot configuration, select Add. Then specify the requested information on the screens that appear next. For the HP EVA Management Appliance, you will be asked for the hostname or IP address of the appliance associated with the cluster, and also the username and password that should be used to access the appliance. Be aware that the login information is for the sssu tool.
If you chose to export the configuration, type the names of the servers that you want to receive the configuration on the Export Configuration window. Use white space to separate the names.

Export Configuration

Enter hostnames of the servers you wish to copy this configuration to, separated by whitespace.

< OK >

mxconfig uses ssh as user root to copy the configuration to each server.
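Because the export relies on ssh access as the root user, it can save time to confirm that access to each server in advance. This is a sketch; the hostname is hypothetical, and whether you are prompted for a password depends on your site's ssh configuration.

# ssh root@node2 uname -n

The command should print the remote server's hostname. If it fails, correct the ssh configuration before exporting.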
Clustered File System has the correct information. The server must be up when you use the utility.

/opt/hpcfs/sbin/mxfence

When you run mxfence, Clustered File System uses the specified hostname or IP address to access the Remote Management Controller. The server is then either power-cycled or shut down in accordance with the method you selected when you configured the fencing module.
The Management Console window will now appear. The server where you started Clustered File System is currently the only server in the cluster.
Add the Remaining Servers to the Cluster

To add another server to the cluster, select Cluster > Server > New and enter the name or IP address of the server on the New Server window. Repeat this procedure to add the remaining servers to the cluster. The Management Console will show that the servers are down because Clustered File System is not yet running on them.
C HBA Driver Procedures

This appendix describes the following procedures:

• Replacing an HBA card.
• Installing an HBA driver version that is provided with HP Clustered File System but is not the default.
• Installing an HBA driver version that is not provided with HP Clustered File System.
• Enabling the QLogic failover feature.

Replacing an HBA Card

To replace an HBA card, complete the following steps:

1. Shut down the HP Clustered File System software.

/etc/init.d/pmxs stop

2.
6. Configure the HP Clustered File System default HBA driver version for the hardware installed on your system.

/opt/hpcfs/lib/chhbadriver default

7. Import the cluster configuration to the server.

mxconfig -import

8. Enable the HP Clustered File System startup script.

/sbin/chkconfig --add pmxs

9. Start HP Clustered File System.

/etc/init.d/pmxs start
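If you want to confirm that the replacement HBA and its driver are active before returning the node to service, you can check the loaded modules and the SCSI devices the kernel sees. This is a sketch; the module-name pattern assumes a QLogic qla2xxx-family driver, so substitute the driver used by your HBA.

# lsmod | grep qla
# cat /proc/scsi/scsi

The SAN LUNs that were visible before the card was replaced should be visible again.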
6. Enable the HP Clustered File System startup script. Enter this command.

/sbin/chkconfig --add pmxs

7. Start HP Clustered File System.

/etc/init.d/pmxs start

Installing an HBA Driver Version not Provided with HP Clustered File System

If your system configuration requires that you use an HBA driver version not provided with HP Clustered File System, you can install that driver version on the system.
5. Update the /etc/opt/hpcfs/fc_pcitable file with information about the driver that you installed. The beginning of the file describes the syntax of the entries in the file. Be sure to review this information. The end of the file contains uncommented lines that specify the hardware installed on your system. Following is an example:

# Adapters found on this system:
0x1077 0x2300 qla2300 scsi/ qla2x00-6.06.
the driver, the option is in the module corresponding to the HBA model, such as qla2200 or qla2300.) The following line in the fc_pcitable file sets the failover option for a version 8.00.00 driver:

0x1077 0x0 qla2xxx qla2xxx-8.00.00 "ql2xfailover=1" QLogic Abstraction Layer

The next example sets the option for the 7.01.01 driver.

0x1077 0x2312 qla2300 scsi/qla2x00-7.01.
• Path. If the path begins with "/", it is considered to be an absolute path. Otherwise, it is considered to be relative to the /opt/hpcfs/lib/modules/ current directory.
• Options, enclosed in double quotes, to pass to insmod when it loads the driver. If no options are required, enter a pair of double quotes ("") in the field.
• A text description of the driver.

Notes: The chhbadriver script overwrites the fc_pcitable file.
Index

B
Best practices 7

C
Cluster SAN configuration 6
Clustered File System installation 12
Configuration
    best practices 7
    cluster SAN 6
    configure cluster from command line 47
    hardware limits 3
    information 3
    network requirements 5
    supported HBA drivers 4

E
Emulex HBA driver 10
EVA Snapshots 11

F
Failover, enabling for QLogic driver 72
FS Option
    install 38
    uninstall 39

G
Getting help 1

H
HBA driver
    Emulex 10
    procedures 69
    QLogic 9
    replacing HBA card 69
HBA drivers 4
HP
    storage web site 1
    technical support 1

Q
QLogic driver, enabling failover 72
QLogic HBA driver 9

R
Replacing HBA card 69

S
Server requirements 4
SLES 9
    kernel 40
    operating system 40
SSSU tool 11

T
Technical support, HP 1

U
Uninstall FS Option 39