Veritas Storage Foundation 5.1 SP1 Cluster File System Installation Guide HP-UX 11i v3 HP Part Number: 5900-1510 Published: April 2011 Edition: 1.
© Copyright 2011 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Section 1 Installation overview and planning ■ Chapter 1. About Storage Foundation Cluster File System ■ Chapter 2. Before you install ■ Chapter 3. System requirements ■ Chapter 4. Licensing Veritas products
Chapter 1 About Storage Foundation Cluster File System This chapter includes the following topics: ■ Veritas Storage Foundation Cluster File System suites ■ About I/O fencing ■ About Veritas graphical user interfaces Veritas Storage Foundation Cluster File System suites The following table lists the Symantec products and optionally licensed features available with each Veritas Storage Foundation Cluster File System (SFCFS) product suite.
24 About Storage Foundation Cluster File System About I/O fencing Table 1-1 Contents of Veritas Storage Foundation Cluster File System products (continued) Storage Foundation Cluster File System Products and features Storage Foundation Cluster File System HA Veritas File System Veritas Volume Manager Veritas Quick I/O option Global Cluster Option Veritas Extension for Oracle Disk Manager option Veritas Storage Checkpoint option Veritas Storage Mapping option Optionally licensed features: Veritas Volume
About Storage Foundation Cluster File System About Veritas graphical user interfaces Coordination point server (CP server) I/O fencing that uses at least one CP server system is referred to as server-based I/O fencing. Server-based I/O fencing ensures data integrity in multiple clusters. In virtualized environments that do not support SCSI-3 PR, Storage Foundation Cluster File System supports non-SCSI3 server-based I/O fencing.
26 About Storage Foundation Cluster File System About Veritas graphical user interfaces Foundation and High Availability Solutions release. You can download Veritas Operations Manager at no charge at http://go.symantec.com/vom. Refer to the Veritas Operations Manager documentation for installation, upgrade, and configuration instructions.
Chapter 2 Before you install This chapter includes the following topics: ■ About planning for SFCFS installation ■ About installation and configuration methods ■ Assessing system preparedness ■ Downloading the Veritas Storage Foundation Cluster File System software ■ Setting environment variables ■ Optimizing LLT media speed settings on private NICs ■ Guidelines for setting the media speed of the LLT interconnects ■ Creating the /opt directory ■ About configuring ssh or remsh using the Veritas installer
28 Before you install About installation and configuration methods Document version: 5.1SP1.0. This installation guide is designed for system administrators who already have knowledge of basic UNIX system and network administration. Basic knowledge includes commands such as tar, mkdir, and simple shell scripting. Also required is basic familiarity with the specific platform and operating system where SFCFS will be installed.
Before you install About installation and configuration methods Table 2-1 Installation and configuration methods Method Description Interactive installation and configuration using the script-based installer You can use one of the following script-based installers: Note: If you obtained SFCFS from an ■ electronic download site, you must use the installsfcfs script instead of the installer script.
30 Before you install Assessing system preparedness Table 2-1 Installation and configuration methods (continued) Method Description Manual installation and configuration Manual installation uses the HP-UX 11i v3 commands to install SFCFS. To retrieve a list of all depots and patches required for all products in the correct installation order, enter: # installer -allpkgs Use the HP-UX 11i v3 commands to install SFCFS.
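For example, the following is a minimal sketch of a manual installation using the native HP-UX Software Distributor, assuming the depots have been copied to the hypothetical directory /tmp/sfcfs_depots; the depot names shown are illustrative only, so substitute the ordered list reported by installer -allpkgs:
# swinstall -s /tmp/sfcfs_depots VRTSvlic VRTSvxvm VRTSvxfs
Install the depots in the order that the installer reports, because later depots depend on earlier ones.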
Before you install Downloading the Veritas Storage Foundation Cluster File System software Among its broad set of features, SORT provides patches, patch notifications, and documentation for Symantec enterprise products. To access SORT, go to: http://sort.symantec.
32 Before you install Setting environment variables If you download a standalone Veritas product, the single product download files do not contain the product installer. Use the installation script for the specific product to install the product. See “About installation scripts” on page 369. To download the software 1 Verify that you have enough space on your filesystem to store the downloaded software. The estimated space for download, gunzip, and tar extract is 4 GB.
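As a sketch, you can confirm the available space and extract the archive as follows; the download directory /tmp/install and the file name sfcfs_hpux.tar.gz are hypothetical:
# bdf /tmp/install (confirm that at least 4 GB is free)
# cd /tmp/install
# gunzip sfcfs_hpux.tar.gz
# tar -xvf sfcfs_hpux.tar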
Before you install Optimizing LLT media speed settings on private NICs ■ If you are using a C shell (csh or tcsh), enter the following: % set path = ( $path /usr/sbin /opt/VRTS/bin ) % setenv MANPATH /usr/share/man:/opt/VRTS/man:$MANPATH Optimizing LLT media speed settings on private NICs For optimal LLT communication among the cluster nodes, the interface cards on each node must use the same media speed settings.
34 Before you install About configuring ssh or remsh using the Veritas installer If you are upgrading, you cannot have a symbolic link from /opt to an unconverted volume. If you do have a symbolic link to an unconverted volume, the symbolic link will not function during the upgrade and items in /opt will not be installed. About configuring ssh or remsh using the Veritas installer The installer can configure passwordless secure shell (ssh) or remote shell (remsh) communications among systems.
Before you install Setting up shared storage See also the Veritas Storage Foundation Cluster File System Administrator's Guide for a description of I/O fencing. Setting up shared storage: SCSI Perform the following steps to set up shared storage. Figure 2-1 shows how to cable systems for shared storage. Figure 2-1 Cabling the shared storage (System A and System B connected to the shared disks over a SCSI bus that is terminated at both ends) To set up shared storage 1 Shut down the systems in the cluster.
36 Before you install Setting up shared storage To check and change SCSI initiator IDs 1 For systems with PA architecture, turn on the power of the first system. During the boot process, the system delays for ten seconds, giving you the opportunity to stop the boot process and enter the boot menu: To discontinue, press any key within 10 seconds. Press any key. The boot process discontinues. Boot terminated.
Before you install Setting up shared storage
Path (dec)      Initiator ID    SCSI Rate    Auto Term
--------------  ------------    ---------    ---------
0/3/0/0         7               Fast         Unknown
The output in this example shows the SCSI ID is 7, the preset default for the HBA as shipped. ■ If you choose, you can leave the ID set at 7 and return to the Main Menu: Service Menu: enter command or menu > main ■ You can change the SCSI ID for the HBA.
38 Before you install Setting up shared storage To set up Fibre Channel shared storage 1 Shut down the cluster systems that must share the devices. 2 Install the required Fibre Channel host bus adapters on each system. 3 Cable the shared devices. 4 Reboot each system. 5 Verify that each system can see all shared devices. Use the command: # ioscan -fnC disk Where "disk" is the class of devices to be shared.
Before you install Cluster environment requirements Cluster environment requirements If your configuration has a cluster, which is a set of hosts that share a set of disks, there are additional requirements. To set up a cluster environment 1 If you plan to place the root disk group under VxVM control, decide into which disk group you want to configure it for each node in the cluster. The root disk group, usually aliased as bootdg, contains the volumes that are used to boot the system.
40 Before you install Hardware overview and requirements for Veritas Storage Foundation Cluster File System from which the installation utility is run must have permissions to run rsh (remote shell) or ssh (secure shell) utilities as root on all cluster nodes or remote systems. ■ Symantec recommends configuring the cluster with I/O fencing enabled. I/O fencing requires shared devices to support SCSI-3 Persistent Reservations (PR).
Before you install Hardware overview and requirements for Veritas Storage Foundation Cluster File System Figure 2-2 Four Node SFCFS Cluster Built on Fibre Channel Fabric Shared storage Shared storage can be one or more shared disks or a disk array connected either directly to the nodes of the cluster or through a Fibre Channel Switch. Nodes can also have non-shared or local devices on a local I/O channel. It is advisable to have /, /usr, /var and other system partitions on local devices.
42 Before you install Hardware overview and requirements for Veritas Storage Foundation Cluster File System Cluster platforms There are several hardware platforms that can function as nodes in a Storage Foundation Cluster File System (SFCFS) cluster. Install the 64-bit HP-UX 11i Version 3 operating system (the September 2010 release of HP-UX 11i v3 or a later 11i v3 version) on each node, and install a Fibre Channel host bus adapter to allow connection to the Fibre Channel switch.
Chapter 3 System requirements This chapter includes the following topics: ■ Release notes ■ Hardware compatibility list (HCL) ■ I/O fencing requirements ■ Veritas File System requirements ■ Supported HP-UX operating systems ■ Memory requirements ■ CPU requirements ■ Node requirements ■ Mandatory patch required for Oracle Bug 4130116 ■ Disk space requirements ■ Number of nodes supported Release notes The Release Notes for each Veritas product contain last-minute news and important details.
44 System requirements Hardware compatibility list (HCL) Hardware compatibility list (HCL) The hardware compatibility list contains information about supported hardware and is updated regularly. Before installing or upgrading Storage Foundation and High Availability Solutions products, review the current compatibility list to confirm the compatibility of your hardware and software. For the latest information on supported hardware, visit the following URL: http://entsupport.symantec.
System requirements I/O fencing requirements ■ Coordinator disks cannot be the special devices that array vendors use. For example, you cannot use EMC gatekeeper devices as coordinator disks. CP server requirements Storage Foundation Cluster File System 5.1SP1 clusters (application clusters) support CP servers which are hosted on the following VCS and SFHA versions: ■ VCS 5.1 or 5.
46 System requirements I/O fencing requirements Table 3-1 CP server hardware requirements Hardware required Description Disk space To host the CP server on a VCS cluster or SFHA cluster, each host requires the following file system space: ■ 550 MB in the /opt directory (additionally, the language pack requires another 15 MB) ■ 300 MB in /usr ■ 20 MB in /var Storage When CP server is hosted on an SFHA cluster, there must be shared storage between the CP servers.
System requirements I/O fencing requirements ■ Symantec recommends that network access from the application clusters to the CP servers should be made highly-available and redundant. The network connections require either a secure LAN or VPN. ■ The CP server uses the TCP/IP protocol to connect to and communicate with the application clusters by these network paths. The CP server listens for messages from the application clusters using TCP port 14250.
48 System requirements Veritas File System requirements Non-SCSI3 I/O fencing requirements Supported virtual environment for non-SCSI3 fencing: ■ HP-UX Integrity Virtual Machines (IVM) Server 4.0 and 4.
System requirements Memory requirements Memory requirements 2 GB of memory is required for Veritas Storage Foundation Cluster File System. CPU requirements A minimum of 2 CPUs is required for Veritas Storage Foundation Cluster File System. Node requirements All nodes in a Cluster File System must have the same operating system version and update level. Mandatory patch required for Oracle Bug 4130116 If you are running Oracle versions 9.2.0.6 or 9.2.0.
50 System requirements Number of nodes supported http://www.symantec.
Chapter 4 Licensing Veritas products This chapter includes the following topics: ■ About Veritas product licensing ■ Setting or changing the product level for keyless licensing ■ Installing Veritas product license keys About Veritas product licensing You have the option to install Veritas products without a license key. Installation without a license does not eliminate the need to obtain a license.
52 Licensing Veritas products Setting or changing the product level for keyless licensing Within 60 days of choosing this option, you must install a valid license key corresponding to the license level entitled or continue with keyless licensing by managing the server or cluster with a management server. If you do not comply with the above terms, continuing to use the Veritas product is a violation of your end user license agreement, and results in warning messages.
Licensing Veritas products Setting or changing the product level for keyless licensing http://go.symantec.com/vom When you set the product license level for the first time, you enable keyless licensing for that system. If you install with the product installer and select the keyless option, you are prompted to select the product and feature level that you want to license. After you install, you can change product license levels at any time to reflect the products and functionality that you want to license.
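As a sketch, you can list, set, and confirm keyless product levels with the vxkeyless utility, which the VRTSvlic depot installs under /opt/VRTSvlic/bin; SFCFSENT is an assumed level name, so check the displayall output for the levels that are valid on your system:
# /opt/VRTSvlic/bin/vxkeyless displayall (lists the product levels that can be set)
# /opt/VRTSvlic/bin/vxkeyless set SFCFSENT (SFCFSENT is an assumed level name)
# /opt/VRTSvlic/bin/vxkeyless display (confirms the level that is now in effect)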
54 Licensing Veritas products Installing Veritas product license keys Installing Veritas product license keys The VRTSvlic depot enables product licensing.
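As a sketch, once the VRTSvlic depot is installed you can register a purchased key with the vxlicinst utility on each node; the key shown is a placeholder:
# /opt/VRTS/bin/vxlicinst -k <license_key>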
Section 2 Installation of Storage Foundation Cluster File System ■ Chapter 5. Installing Storage Foundation Cluster File System using the common product installer ■ Chapter 6.
Chapter 5 Installing Storage Foundation Cluster File System using the common product installer This chapter includes the following topics: ■ Installation preparation overview ■ About installing Veritas Storage Foundation Cluster File System on HP-UX ■ Summary of Veritas Storage Foundation installation tasks ■ Mounting the product disc ■ About the Veritas installer ■ Installing Storage Foundation Cluster File System using the product installer Installation preparation overview Table 5-1 provide
58 Installing Storage Foundation Cluster File System using the common product installer About installing Veritas Storage Foundation Cluster File System on HP-UX Table 5-1 Installation overview (continued) Installation task Section Download the software, or insert the product See “Downloading the Veritas Storage DVD. Foundation Cluster File System software” on page 31. See “Mounting the product disc” on page 59. Set environment variables. See “Setting environment variables” on page 32.
Installing Storage Foundation Cluster File System using the common product installer Mounting the product disc The operating system is bundled with Veritas Volume Manager and Veritas File System. If the Veritas Volume Manager or Veritas File System is in use, follow the steps in the upgrade chapter to upgrade the Storage Foundation and the operating system. ■ If patches for the operating system are required, install the patches before upgrading the product. ■ Mount the disk. ■ Install the 5.
60 Installing Storage Foundation Cluster File System using the common product installer About the Veritas installer 5 Verify that the disc is mounted: # mount About the Veritas installer The installer also enables you to configure the product, verify preinstallation requirements, and view the product’s description. If you obtained a standalone Veritas product from an electronic download site, the single-product download files do not contain the general product installer.
Installing Storage Foundation Cluster File System using the common product installer Installing Storage Foundation Cluster File System using the product installer Note: If you have obtained a Veritas product from an electronic download site, the single product download files do not contain the installer installation script, so you must use the product installation script to install the product.
62 Installing Storage Foundation Cluster File System using the common product installer Installing Storage Foundation Cluster File System using the product installer ■ Recommended depots: installs the full feature set without optional depots. ■ All depots: installs all available depots. Each option displays the disk space that is required for installation. Select which option you want to install and press Return.
Installing Storage Foundation Cluster File System using the common product installer Installing Storage Foundation Cluster File System using the product installer 63 12 You are prompted to choose your licensing method. To ensure compliance with the terms of Symantec's End User License Agreement you have 60 days to either: * Enter a valid license key matching the functionality in use on the systems * Enable keyless licensing and manage the systems with a Management Server (see http://go.symantec.
64 Installing Storage Foundation Cluster File System using the common product installer Installing Storage Foundation Cluster File System using the product installer 16 At the prompt, specify whether you want to send your installation information to Symantec. Would you like to send the information about this installation to Symantec to help improve installation in the future? [y,n,q,?] (y) n 17 Reboot all the nodes in the cluster. 18 View the log file, if needed, to confirm the installation.
Chapter 6 Installing Storage Foundation Cluster File System using the web-based installer This chapter includes the following topics: ■ About the Web-based installer ■ Features not supported with Web-based installer ■ Before using the Veritas Web-based installer ■ Starting the Veritas Web-based installer ■ Obtaining a security exception on Mozilla Firefox ■ Performing a pre-installation check with the Veritas Web-based installer ■ Installing SFCFS with the Web-based installer About the Web-b
66 Installing Storage Foundation Cluster File System using the web-based installer Features not supported with Web-based installer When the webinstaller script starts the xprtlwid process, the script displays a URL. Use this URL to access the Web-based installer from Internet Explorer or FireFox. The Web installer creates log files whenever the Web installer is operating. While the installation processes are operating, the log files are located in a session-based directory under the /var/tmp directory.
Installing Storage Foundation Cluster File System using the web-based installer Starting the Veritas Web-based installer Table 6-1 Web-based installer requirements (continued) System Function Requirements Administrative system The system where you run the Web browser to perform the installation. Must have a Web browser. Supported browsers: Internet Explorer 6, 7, and 8 ■ Firefox 3.x ■ Starting the Veritas Web-based installer This section describes starting the Veritas Web-based installer.
68 Installing Storage Foundation Cluster File System using the web-based installer Performing a pre-installation check with the Veritas Web-based installer 3 Click Get Certificate button. 4 Uncheck Permanently Store this exception checkbox (recommended). 5 Click Confirm Security Exception button. 6 Enter root in User Name field and root password of the web server in the Password field.
Installing Storage Foundation Cluster File System using the web-based installer Installing SFCFS with the Web-based installer 4 Select Veritas Storage Foundation Cluster File System from the Product drop-down list, and click Next. 5 On the License agreement page, read the End User License Agreement (EULA). To continue, select Yes, I agree and click Next. 6 Choose minimal, recommended, or all depots. Click Next. 7 Indicate the systems where you want to install.
66 Installing Storage Foundation Cluster File System using the web-based installer Features not supported with Web-based installer When the webinstaller script starts the xprtlwid process, the script displays a URL. Use this URL to access the Web-based installer from Internet Explorer or Firefox. The Web installer creates log files whenever the Web installer is operating. While the installation processes are operating, the log files are located in a session-based directory under the /var/tmp directory.
Section 3 Configuration of Veritas Storage Foundation Cluster File System ■ Chapter 7. Preparing to configure SFCFS ■ Chapter 8. Configuring Veritas Storage Foundation Cluster File System ■ Chapter 9.
Chapter 7 Preparing to configure SFCFS This chapter includes the following topics: ■ Preparing to configure the clusters in secure mode ■ About configuring SFCFS clusters for data integrity ■ About I/O fencing for Storage Foundation Cluster File System in virtual machines that do not support SCSI-3 PR ■ About I/O fencing components ■ About I/O fencing configuration files ■ About planning to configure I/O fencing ■ Setting up the CP server Preparing to configure the clusters in secure mode Yo
74 Preparing to configure SFCFS Preparing to configure the clusters in secure mode ■ To use an external root broker, identify an existing root broker system in your enterprise or install and configure root broker on a stable system. See “Installing the root broker for the security infrastructure” on page 77. To use one of the cluster nodes as root broker, the installer does not require you to do any preparatory tasks.
Preparing to configure SFCFS Preparing to configure the clusters in secure mode Workflow to configure Storage Foundation Cluster File System cluster in secure mode Figure 7-1 External system Root broker system? One of the cluster nodes Choose automatic mode at the installer prompt to configure the cluster in secure mode Identify a root broker system or install root broker on a system Choose the node that the installer must configure as root broker Semiautomatic mode On the root broker system, crea
76 Preparing to configure SFCFS Preparing to configure the clusters in secure mode Table 7-1 lists the preparatory tasks in the order which the AT and VCS administrators must perform. These preparatory tasks apply only when you use an external root broker system for the cluster.
Preparing to configure SFCFS Preparing to configure the clusters in secure mode Table 7-1 77 Preparatory tasks to configure a cluster in secure mode (with an external root broker) (continued) Tasks Who performs this task Copy the files that are required to configure a cluster in secure mode VCS administrator to the system from where you plan to install and configure Storage Foundation Cluster File System. See “Preparing the installation system for the security infrastructure” on page 81.
78 Preparing to configure SFCFS Preparing to configure the clusters in secure mode ■ Checks to make sure that AT supports the operating system ■ Checks if the depots are already on the system. The installer lists the depots that the program is about to install on the system. Press Enter to continue. 8 Review the output as the installer installs the root broker on the system. 9 After the installation, configure the root broker.
Preparing to configure SFCFS Preparing to configure the clusters in secure mode Figure 7-1 Workflow to configure Storage Foundation Cluster File System cluster in secure mode (flowchart summarizing the choice of root broker system, either an external system or one of the cluster nodes, and the automatic, semiautomatic, and manual configuration modes described in this section).
80 Preparing to configure SFCFS Preparing to configure the clusters in secure mode root_domain The value for the domain name of the root broker system. Execute the following command to find this value: venus> # vssat showalltrustedcreds 2 Make a note of the following authentication broker information for each node.
Preparing to configure SFCFS Preparing to configure the clusters in secure mode start_broker=false enable_pbx=false 4 Back up these input files that you created for the authentication broker on each node in the cluster. Note that for security purposes, the command to create the output file for the encrypted file deletes the input file.
82 Preparing to configure SFCFS About configuring SFCFS clusters for data integrity Semi-automatic mode Do the following: Manual mode Do the following: Copy the encrypted files (BLOB files) to the system from where you plan to install VCS. Note the path of these files that you copied to the installation system. ■ During SFCFS configuration, choose the configuration option 2 when the installsfcfs prompts. ■ Copy the root_hash file that you fetched to the system from where you plan to install VCS.
Preparing to configure SFCFS About I/O fencing for Storage Foundation Cluster File System in virtual machines that do not support SCSI-3 PR to PROM level with a break and subsequently resumes operations, the other nodes may declare the system dead. They can declare it dead even if the system later returns and begins write operations. I/O fencing is a feature that prevents data corruption in the event of a communication breakdown in a cluster.
84 Preparing to configure SFCFS About I/O fencing components ■ Coordination points—Act as a global lock during membership changes See “About coordination points” on page 84. About data disks Data disks are standard disk devices for data storage and are either physical disks or RAID Logical Units (LUNs). These disks must support SCSI-3 PR and must be part of standard VxVM or CVM disk groups. CVM is responsible for fencing data disks on a disk group basis.
Preparing to configure SFCFS About I/O fencing components The metanode interface that HP-UX provides does not meet the SCSI-3 PR requirements for the I/O fencing feature. You can configure coordinator disks to use Veritas Volume Manager Dynamic Multi-pathing (DMP) feature. See the Veritas Volume Manager Administrator’s Guide. ■ Coordination point servers The coordination point server (CP server) is a software solution which runs on a remote system or cluster.
86 Preparing to configure SFCFS About I/O fencing configuration files ■ Disable preferred fencing policy to use the default node count-based race policy. See “Enabling or disabling the preferred fencing policy” on page 184. About I/O fencing configuration files Table 7-2 lists the I/O fencing configuration files. Table 7-2 I/O fencing configuration files File Description /etc/rc.config.
Preparing to configure SFCFS About I/O fencing configuration files Table 7-2 I/O fencing configuration files (continued) File Description /etc/vxfenmode This file contains the following parameters: ■ vxfen_mode ■ scsi3—For disk-based fencing ■ customized—For server-based fencing ■ disabled—To run the I/O fencing driver but not do any fencing operations. vxfen_mechanism This parameter is applicable only for server-based fencing. Set the value as cps.
88 Preparing to configure SFCFS About planning to configure I/O fencing Table 7-2 I/O fencing configuration files (continued) File Description /etc/vxfentab When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node. The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all the coordinator points.
Preparing to configure SFCFS About planning to configure I/O fencing If you have installed Storage Foundation Cluster File System in a virtual environment that is not SCSI-3 PR compliant, you can configure non-SCSI3 server-based fencing. See Figure 7-3 on page 91. Figure 7-2 illustrates a high-level flowchart to configure I/O fencing for the Storage Foundation Cluster File System cluster.
90 Preparing to configure SFCFS About planning to configure I/O fencing Figure 7-2 Workflow to configure I/O fencing Install and configure SFCFS Configure disk-based fencing (scsi3 mode) Three disks Coordination points for I/O fencing? Configure server-based fencing (customized mode) At least one CP server Preparatory tasks Preparatory tasks vxdiskadm or vxdisksetup utilities Identify an existing CP server Initialize disks as VxVM disks vxfenadm and vxfentsthdw utilities Check disks for I/O f
Preparing to configure SFCFS About planning to configure I/O fencing Figure 7-3 Workflow to configure non-SCSI3 server-based I/O fencing SFCFS in nonSCSI3 compliant virtual environment ? Configure server-based fencing (customized mode) with CP servers Preparatory tasks Identify existing CP servers Establish TCP/IP connection between CP server and SFCFS cluster (OR) Set up CP server Install and configure VCS or SFHA on CP server systems Establish TCP/IP connection between CP server and SFCFS cluster If
92 Preparing to configure SFCFS About planning to configure I/O fencing Using the installsfcfs See “Setting up disk-based I/O fencing using installsfcfs” on page 147. See “Setting up server-based I/O fencing using installsfcfs” on page 159. See “Setting up non-SCSI3 server-based I/O fencing using installsfcfs” on page 169. Using the Web-based installer See “Configuring Storage Foundation Cluster File System using the Web-based installer” on page 131.
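As a sketch, a minimal /etc/vxfenmode for disk-based fencing and its matching /etc/vxfendg entry might look like the following; vxfencoorddg is an example coordinator disk group name and the dmp disk policy is an assumption:
# cat /etc/vxfenmode
vxfen_mode=scsi3
scsi3_disk_policy=dmp
# cat /etc/vxfendg
vxfencoorddg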
Preparing to configure SFCFS About planning to configure I/O fencing Figure 7-4 CP server, SFCFS cluster, and coordinator disks CP server TCP/IP Coordinator disk Coordinator disk Fiber channel Client Cluster LLT links Node 1 Node 2 Application Storage Recommended CP server configurations Following are the recommended CP server configurations: ■ Multiple application clusters use three CP servers as their coordination points. See Figure 7-5 on page 94.
90 Preparing to configure SFCFS About planning to configure I/O fencing Figure 7-2 Workflow to configure I/O fencing (flowchart: after you install and configure SFCFS, choose the coordination points; with three disks you configure disk-based fencing in scsi3 mode, and with at least one CP server you configure server-based fencing in customized mode. The preparatory tasks for disk-based fencing include initializing disks as VxVM disks with the vxdiskadm or vxdisksetup utilities and checking the disks for I/O fencing compliance with the vxfenadm and vxfentsthdw utilities; the preparatory tasks for server-based fencing include identifying an existing CP server.)
Preparing to configure SFCFS About planning to configure I/O fencing Figure 7-3 Workflow to configure non-SCSI3 server-based I/O fencing (flowchart: if SFCFS runs in a virtual environment that is not SCSI-3 compliant, configure server-based fencing in customized mode with CP servers. The preparatory tasks are to identify existing CP servers, or to set up a CP server by installing and configuring VCS or SFHA on the CP server systems, and in either case to establish a TCP/IP connection between the CP server and the SFCFS cluster.)
96 Preparing to configure SFCFS Setting up the CP server Setting up the CP server Table 7-3 lists the tasks to set up the CP server for server-based I/O fencing. Table 7-3 Tasks to set up CP server for server-based I/O fencing Task Reference Plan your CP server setup See “Planning your CP server setup” on page 96. Install the CP server See “Installing the CP server using the installer” on page 97.
Preparing to configure SFCFS Setting up the CP server ■ 3 Decide whether you want to configure server-based fencing for the SFCFS cluster (application cluster) with a single CP server as coordination point or with at least three coordination points. Symantec recommends using at least three coordination points. Decide whether you want to configure the CP server cluster in secure mode using the Symantec Product Authentication Service (AT).
98 Preparing to configure SFCFS Setting up the CP server CP server setup uses a single system Install and configure VCS to create a single-node VCS cluster. Meet the following requirements for CP server: ■ During installation, make sure to select all depots for installation. The VRTScps depot is installed only if you select to install all depots. ■ During configuration, make sure to configure LLT and GAB.
Preparing to configure SFCFS Setting up the CP server To configure the CP server cluster in secure mode ◆ Run the installer as follows to configure the CP server cluster in secure mode: # installsfcfs -security See “Preparing to configure the clusters in secure mode” on page 73. Setting up shared storage for the CP server database Symantec recommends that you create a mirrored volume for the CP server database and that you use the vxfs file system type.
100 Preparing to configure SFCFS Setting up the CP server 3 Create a mirrored volume over the disk group. For example: # vxassist -g cps_dg make cps_vol volume_size layout=mirror 4 Create a file system over the volume. The CP server configuration utility only supports vxfs file system type. If you use an alternate file system, then you must configure CP server manually.
Preparing to configure SFCFS Setting up the CP server 3 Enter 1 at the prompt to configure CP server on a single-node VCS cluster. The configuration utility then runs the following preconfiguration checks: 4 ■ Checks to see if a single-node VCS cluster is running with the supported platform. The CP server requires VCS to be installed and configured before its configuration. ■ Checks to see if the CP server is already configured on the system.
102 Preparing to configure SFCFS Setting up the CP server 7 Choose whether the communication between the CP server and the SFCFS clusters has to be made secure. If you have not configured the CP server cluster in secure mode, enter n at the prompt. Warning: If the CP server cluster is not configured in secure mode, and if you enter y, then the script immediately exits. You must configure the CP server cluster in secure mode and rerun the CP server configuration script.
Preparing to configure SFCFS Setting up the CP server 103 10 The configuration utility proceeds with the configuration process, and creates a vxcps.conf configuration file. Successfully generated the /etc/vxcps.conf configuration file. Successfully created directory /etc/VRTScps/db.
104 Preparing to configure SFCFS Setting up the CP server 14 After the configuration process has completed, a success message appears. For example: Successfully added the CPSSG service group to VCS configuration. Bringing the CPSSG service group online. Please wait... The Veritas Coordination Point Server has been configured on your system. 15 Run the hagrp -state command to ensure that the CPSSG service group has been added.
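As a sketch, assuming the default database location /etc/VRTScps/db is used as the mount point for the volume created above:
# mkfs -F vxfs /dev/vx/rdsk/cps_dg/cps_vol
# mkdir -p /etc/VRTScps/db
# mount -F vxfs /dev/vx/dsk/cps_dg/cps_vol /etc/VRTScps/db
On an SFHA cluster, the mount is typically placed under VCS control so that it can fail over with the CP server.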
Preparing to configure SFCFS Setting up the CP server 105 The CP server requires SFHA to be installed and configured before its configuration. ■ 5 Checks to see if the CP server is already configured on the system. If the CP server is already configured, then the configuration utility informs the user and requests that the user unconfigure the CP server before trying to configure it. Enter the name of the CP server. Enter the name of the CP Server: mycps1.symantecexample.
106 Preparing to configure SFCFS Setting up the CP server 9 Enter the absolute path of the CP server database or press Enter to accept the default value (/etc/VRTScps/db). CP Server uses an internal database to store the client information. Note: As the CP Server is being configured on SFHA cluster, the database should reside on shared storage with vxfs file system. Please refer to documentation for information on setting up of shared storage for CP server database.
Preparing to configure SFCFS Setting up the CP server 107 12 Confirm whether you use the same NIC name for the virtual IP on all the systems in the cluster. Is the name of NIC for virtual IP 10.209.83.85 same on all the systems? [y/n] : y NOTE: Please ensure that the supplied network interface is a public NIC 13 Enter a valid network interface for the virtual IP address for the CP server process. Enter a valid interface for virtual IP 10.209.83.
108 Preparing to configure SFCFS Setting up the CP server 18 After the configuration process has completed, a success message appears. For example: Successfully added the CPSSG service group to VCS configuration. Bringing the CPSSG service group online. Please wait... The Veritas Coordination Point Server has been configured on your system. 19 Run the hagrp -state command to ensure that the CPSSG service group has been added.
Preparing to configure SFCFS Setting up the CP server 3 Verify the main.cf file using the following command: # hacf -verify /etc/VRTSvcs/conf/config If successfully verified, copy this main.cf to all other cluster nodes. 4 Create the /etc/vxcps.conf file using the sample configuration file provided at /etc/vxcps/vxcps.conf.sample.
110 Preparing to configure SFCFS Setting up the CP server ■ 2 /etc/VRTScps/db (default location for CP server database) Run the cpsadm command to check if the vxcpserv process is listening on the configured Virtual IP. # cpsadm -s cp_server -a ping_cps where cp_server is the virtual IP address or the virtual hostname of the CP server.
Chapter 8 Configuring Veritas Storage Foundation Cluster File System This chapter includes the following topics: ■ Configuring Veritas Storage Foundation Cluster File System using the script-based installer ■ Configuring Storage Foundation Cluster File System using the Web-based installer ■ Configuring Veritas Storage Foundation Cluster File System manually ■ Configuring the SFDB repository database after installation Configuring Veritas Storage Foundation Cluster File System using the script-base
112 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer Table 8-1 Tasks to configure Storage Foundation Cluster File System using the script-based installer Task Reference Start the software configuration See “Starting the software configuration” on page 112. Specify the systems where you want to configure Storage Foundation Cluster File System See “Specifying systems for configuration” on page 113.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer To configure Storage Foundation Cluster File System using the product installer 1 Confirm that you are logged in as the superuser and that you have mounted the product disc. 2 Start the installer. # ./installer The installer starts the product installation program with a copyright message and specifies the directory where the logs are created.
114 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 3 ■ Makes sure that the systems are running with the supported operating system ■ Checks whether Storage Foundation Cluster File System is installed ■ Exits if Veritas Storage Foundation Cluster File System 5.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 2 ■ Option 2: LLT over UDP (answer installer questions) Make sure that each NIC you want to use as heartbeat link has an IP address configured. Enter the heartbeat link details at the installer prompt to configure LLT over UDP.
116 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 3 If you chose option 2, enter the NIC details for the private heartbeat links. This step uses examples such as private_NIC1 or private_NIC2 to refer to the available names of the NICs. Enter the NIC for the first private heartbeat NIC on galaxy: [b,q,?] private_NIC1 Do you want to use address 192.168.0.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 5 If you chose option 3, the installer detects NICs on each system and network links, and sets link priority. If the installer fails to detect heartbeat links or fails to find any high-priority links, then choose option 1 or option 2 to manually configure the heartbeat links. See step 2 for option 1, or step 3 for option 2.
118 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer Is lan0 to be the public NIC used by all systems [y,n,q,b,?] (y) 5 Enter the virtual IP address for the cluster. You can enter either an IPv4 address or an IPv6 address. For IPv4: ■ Enter the virtual IP address. Enter the Virtual IP address for the Cluster: [b,q,?] 192.168.1.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer For IPv6 ■ Enter the virtual IP address. Enter the Virtual IP address for the Cluster: [b,q,?] 2001:454e:205a:110:203:baff:feee:10 ■ Enter the prefix for the virtual IPv6 address you provided.
120 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer ■ If you want to configure the cluster in secure mode, make sure you meet the prerequisites and enter y. ■ If you do not want to configure the cluster in secure mode, enter n. You must add VCS users when the configuration program prompts. See “Adding VCS users” on page 123. 2 Select one of the options to enable security.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer Option 1. Automatic configuration 121 Based on the root broker you want to use, do one of the following: To use an external root broker: Enter the name of the root broker system when prompted. Requires remote access to the root broker. Make sure that all the nodes in the cluster can successfully ping the root broker system.
122 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer Option 3. Manual configuration Enter the following Root Broker information as the installer prompts you: Enter root broker name: [b] east.symantecexample.com Enter root broker FQDN: [b] (symantecexample.com) symantecexample.com Enter the root broker domain name for the Authentication Broker's identity: [b] root@east.symantecexample.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 123 Adding VCS users If you have enabled Symantec Product Authentication Service, you do not need to add VCS users now. Otherwise, on systems operating under an English locale, you can add VCS users at this time. To add VCS users 1 Review the required information to add VCS users. 2 Reset the password for the Admin user, if necessary.
124 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer To configure SMTP email notification 1 Review the required information to configure the SMTP email notification. 2 Specify whether you want to configure the SMTP notification. Do you want to configure SMTP notification? [y,n,q,?] (n) y If you do not want to configure the SMTP notification, you can skip to the next configuration option.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer (example: user@yourcompany.com): [b,q,?] harriet@example.com Enter the minimum severity of events for which mail should be sent to harriet@example.com [I=Information, W=Warning, E=Error, S=SevereError]: [b,q,?] E ■ If you do not want to add, answer n.
126 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer Active NIC devices discovered on galaxy: lan0 Enter the NIC for the VCS Notifier to use on galaxy: [b,q,?] (lan0) Is lan0 to be the public NIC used by all systems? [y,n,q,b,?] (y) ■ Enter the SNMP trap daemon port. Enter the SNMP trap daemon port: [b,q,?] (162) ■ Enter the SNMP console system name.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 127 Would you like to add another SNMP console? [y,n,q,b] (n) 5 Verify and confirm the SNMP notification information.
128 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer 3 Provide information to configure this cluster as global cluster. The installer prompts you for a NIC, a virtual IP address, value for the netmask, and value for the network hosts. If you had entered virtual IP address details, the installer discovers the values you entered.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer ■ 129 Depending on the security mode you chose to set up Authentication Service, the installer does one of the following: ■ Creates the security principal ■ Executes the encrypted file to create security principal on each node in the cluster ■ Creates the VxSS service group ■ Creates the Authentication Server credentials on each node in the cluster
130 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System using the script-based installer Verifying and updating licenses on the system After you install Storage Foundation Cluster File System, you can verify the licensing information using the vxlicrep program. You can replace the demo licenses with a permanent license. See “Checking licensing information on the system” on page 130.
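As a sketch, run the report on any node; vxlicrep prints each installed key together with the product and features it enables:
# vxlicrep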
Configuring Veritas Storage Foundation Cluster File System Configuring Storage Foundation Cluster File System using the Web-based installer Replacing a Storage Foundation Cluster File System demo license with a permanent license When a Storage Foundation Cluster File System demo key license expires, you can replace it with a permanent license using the vxlicinst(1) program. To replace a demo key 1 Make sure you have permissions to log in as root on each of the nodes in the cluster.
132 Configuring Veritas Storage Foundation Cluster File System Configuring Storage Foundation Cluster File System using the Web-based installer Note: If you want to configure server-based I/O fencing, you must either use the script-based installer or manually configure. You can click Quit to quit the Web-installer at any time during the configuration process. To configure Storage Foundation Cluster File System on a cluster 1 Start the Web-based installer.
Configuring Veritas Storage Foundation Cluster File System Configuring Storage Foundation Cluster File System using the Web-based installer 5 On the Set Cluster Name/ID page, specify the following information for the cluster. Cluster Name Enter a unique cluster name. Cluster ID Enter a unique cluster ID. LLT Type Select an LLT type from the list. You can choose to configure LLT over UDP or over Ethernet. If you choose Auto detect over Ethernet, the installer auto-detects the LLT links over Ethernet.
134 Configuring Veritas Storage Foundation Cluster File System Configuring Storage Foundation Cluster File System using the Web-based installer 7 In the Confirmation dialog box that appears, choose whether or not to configure the cluster in secure mode using Symantec Product Authentication Service (AT). To configure the cluster in secure mode, click Yes. If you want to perform this task later, click No. You can use the installsfcfs -security command. Go to step 9.
Configuring Veritas Storage Foundation Cluster File System Configuring Storage Foundation Cluster File System using the Web-based installer SMTP ■ Select the Configure SMTP check box. ■ ■ If each system uses a separate NIC, select the Configure NICs for every system separately check box. If all the systems use the same NIC, select the NIC for the VCS Notifier to be used on all systems. If not, select the NIC to be used by each system.
136 Configuring Veritas Storage Foundation Cluster File System Configuring Storage Foundation Cluster File System using the Web-based installer 10 On the Stop Processes page, click Next after the installer stops all the processes successfully. 11 On the Start Processes page, click Next after the installer performs the configuration based on the details you provided and starts all the processes successfully. If you did not choose to configure I/O fencing in step 4, then skip to step 14.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually 137 Configuring Veritas Storage Foundation Cluster File System manually You can manually configure different products within Veritas Storage Foundation Cluster File System. Configuring Veritas Volume Manager Use the following procedures to configure Veritas Volume Manager.
138 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually 6 You are now asked questions regarding the frequency of VVR statistics collection. 7 The next phase of the configuration procedure consists of setting up a centrally managed host: Enable Centralized Management? [y,n,q] 8 If you selected centralized management, you will be asked a series of questions relating to hostnames.
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually If the disk is not currently in use by any volume or volume group, but has been initialized by pvcreate, you must still use the pvremove command to remove LVM disk headers. If you want to mirror the root disk across multiple disks, make sure that all the disks are free from LVM control.
140 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually Starting and enabling the configuration daemon The VxVM configuration daemon (vxconfigd) maintains VxVM disk and disk group configurations. The vxconfigd communicates configuration changes to the kernel and modifies configuration information stored on disk. Startup scripts usually invoke vxconfigd at system boot time.
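As a sketch of checking and, if necessary, starting the daemon; all three commands are standard VxVM administration commands:
# vxdctl mode (reports whether vxconfigd is running in enabled or disabled mode)
# vxconfigd (starts the daemon if it is not running)
# vxdctl enable (switches a running daemon from disabled mode to enabled mode)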
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually The following procedure describes how to verify that the vxiod daemons are running, and how to start them if necessary. To verify that vxiod daemons are running, enter the following command: # vxiod The vxiod daemon is a kernel thread and is not visible using the ps command.
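If no vxiod daemons are running, they can typically be started with the vxiod command itself; the count of 16 below is only an illustrative value:
# vxiod set 16
Running # vxiod again afterward should report the number of volume I/O daemons that are active.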
142 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually Converting existing VxVM disk groups to shared disk groups Use this procedure if you are upgrading from VxVM 3.x to VxVM 5.1 SP1 (or Storage Foundation 3.x to a Storage Foundation product at the 5.1 SP1 level) and you want to convert existing disk groups to shared disk groups.
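A minimal sketch of the conversion, assuming mydg is a hypothetical private disk group: deport it from the node where it is currently imported, then import it as shared from the CVM master node.
# vxdg deport mydg
# vxdg -s import mydg
The -s option imports the disk group in shared mode.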
Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually Configuring shared disks This section describes how to configure shared disks. If you are installing VxVM for the first time or adding disks to an existing cluster, you need to configure new shared disks. If you are upgrading VxVM, verify that your shared disks still exist. The shared disks should be configured from one node only.
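For example, to confirm that previously configured shared disks and their disk groups are still visible after an upgrade, you can run the following on one node (disk groups shown in parentheses are not locally imported):
# vxdisk -o alldgs list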
144 Configuring Veritas Storage Foundation Cluster File System Configuring Veritas Storage Foundation Cluster File System manually To upgrade in a clustered environment when FastResync is set 1 You should run this procedure from the master node; to find out if you are on the master node, enter the command: # vxdctl -c mode 2 On the master node, list which disk groups are shared by entering: # vxdg -s list 3 Using the diskgroup names displayed by the previous command, list the disk groups that have v
Configuring Veritas Storage Foundation Cluster File System Configuring the SFDB repository database after installation system options. Database administrators can be granted permission to change default file system behavior in order to enable and disable Cached Quick I/O.
Chapter 9 Configuring SFCFS for data integrity This chapter includes the following topics: ■ Setting up disk-based I/O fencing using installsfcfs ■ Setting up disk-based I/O fencing manually ■ Setting up server-based I/O fencing using installsfcfs ■ Setting up non-SCSI3 server-based I/O fencing using installsfcfs ■ Setting up server-based I/O fencing manually ■ Setting up non-SCSI3 fencing in virtual environments manually ■ Enabling or disabling the preferred fencing policy Setting up disk-b
148 Configuring SFCFS for data integrity Setting up disk-based I/O fencing using installsfcfs To initialize disks as VxVM disks 1 List the new external disks or the LUNs as recognized by the operating system. On each node, enter: # ioscan -nfC disk # insf -e Warning: The HP-UX man page for the insf command instructs you to run the command in single-user mode only. You can run insf -e in multiuser mode only when no other user accesses any of the device files.
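The remaining steps of this procedure typically initialize each new disk for VxVM use. A minimal sketch, assuming the default vxdisksetup location and a hypothetical device name:
# /etc/vx/bin/vxdisksetup -i c2t1d0
Repeat for each disk that you intend to use for I/O fencing or for data.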
Configuring SFCFS for data integrity Setting up disk-based I/O fencing using installsfcfs To set up disk-based I/O fencing using the installsfcfs 1 Start the installsfcfs with the -fencing option. # /opt/VRTS/install/installsfcfs -fencing The installsfcfs starts with a copyright message and verifies the cluster information. Note the location of the log files, which you can access in the event of any problem with the configuration process.
150 Configuring SFCFS for data integrity Setting up disk-based I/O fencing using installsfcfs Symantec recommends that you use three disks as coordination points for disk-based I/O fencing. 6 ■ Enter the numbers corresponding to the disks that you want to use as coordinator disks. ■ Enter the disk group name. Verify that the coordinator disks you chose meet the I/O fencing requirements.
Configuring SFCFS for data integrity Setting up disk-based I/O fencing using installsfcfs Checking shared disks for I/O fencing Make sure that the shared storage you set up while preparing to configure SFCFS meets the I/O fencing requirements. You can test the shared disks using the vxfentsthdw utility. The two nodes must have ssh (default) or remsh communication. To confirm whether a disk (or LUN) supports SCSI-3 persistent reservations, two nodes must simultaneously have access to the same disks.
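As a non-destructive first check, the vxfentsthdw utility can be run in read-only mode; the path below is an assumption based on the usual install location, and the utility prompts for the two node names and the disk to test:
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r
Without the -r option the tests overwrite and destroy data on the disks, as the warning later in this section notes.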
152 Configuring SFCFS for data integrity Setting up disk-based I/O fencing using installsfcfs To verify Array Support Library (ASL) 1 If the Array Support Library (ASL) for the array that you add is not installed, obtain and install it on each node before proceeding. The ASL for the supported storage device that you add is available from the disk array vendor or Symantec technical support. 2 Verify that the ASL for the disk array is installed on each of the nodes.
Configuring SFCFS for data integrity Setting up disk-based I/O fencing using installsfcfs 153 Revision : 5567 Serial Number : 42031000a The same serial number information should appear when you enter the equivalent command on node B using the /dev/rdsk/c2t1d0 path.
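The serial number on each node is typically read with the vxfenadm inquiry option; a sketch for node B, using the path mentioned above:
# vxfenadm -i /dev/rdsk/c2t1d0
Compare the Serial Number field in the output from both nodes; the values must match for the two paths to refer to the same LUN.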
154 Configuring SFCFS for data integrity Setting up disk-based I/O fencing manually 3 The script warns that the tests overwrite data on the disks. After you review the overview and the warning, confirm to continue the process and enter the node names. Warning: The tests overwrite and destroy data on the disks unless you use the -r option.
Configuring SFCFS for data integrity Setting up disk-based I/O fencing manually Table 9-1 Tasks to set up I/O fencing manually (continued) Task Reference Identifying disks to use as coordinator disks See “Identifying disks to use as coordinator disks” on page 155. Checking shared disks for I/O fencing See “Checking shared disks for I/O fencing” on page 151. Setting up coordinator disk groups See “Setting up coordinator disk groups” on page 156.
156 Configuring SFCFS for data integrity Setting up disk-based I/O fencing manually Setting up coordinator disk groups From one node, create a disk group named vxfencoorddg. This group must contain three disks or LUNs. You must also set the coordinator attribute for the coordinator disk group. VxVM uses this attribute to prevent the reassignment of coordinator disks to other disk groups.
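A minimal sketch of this step, using hypothetical device names for the three disks:
# vxdg init vxfencoorddg c1t1d0 c2t1d0 c3t1d0
# vxdg -g vxfencoorddg set coordinator=on
The second command sets the coordinator attribute so that VxVM refuses to move these disks into other disk groups.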
Configuring SFCFS for data integrity Setting up disk-based I/O fencing manually To update the I/O fencing files and start I/O fencing 1 On each node, type: # echo "vxfencoorddg" > /etc/vxfendg Do not use spaces between the quotes in the "vxfencoorddg" text. This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group. 2 Update the /etc/vxfenmode file to specify the SCSI-3 dmp disk policy. On all cluster nodes, type: # cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode
158 Configuring SFCFS for data integrity Setting up disk-based I/O fencing manually 4 Make a backup copy of the main.cf file: # cd /etc/VRTSvcs/conf/config # cp main.cf main.orig 5 On one node, use vi or another text editor to edit the main.cf file. To modify the list of cluster attributes, add the UseFence attribute and assign its value as SCSI3. cluster clus1( UserNames = { admin = "cDRpdxPmHpzS.
Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs Verifying I/O fencing configuration Verify from the vxfenadm output that the SCSI-3 disk policy reflects the configuration in the /etc/vxfenmode file.
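For example, run the following on one node and check the Fencing Mode and SCSI3 Disk Policy fields in the output (the exact field names may vary slightly by release):
# vxfenadm -d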
160 Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs With server-based fencing, you can have the coordination points in your configuration as follows: ■ Combination of CP servers and SCSI-3 compliant coordinator disks ■ CP servers only Symantec also supports server-based fencing with a single highly available CP server that acts as a single coordination point. See “About planning to configure I/O fencing” on page 88.
Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs 3 Confirm that you want to proceed with the I/O fencing configuration at the prompt. The program checks that the local node running the script can communicate with remote nodes and checks whether Storage Foundation Cluster File System 5.1 SP1 is configured properly. 4 Review the I/O fencing configuration options that the program presents. Type 1 to configure server-based I/O fencing.
162 Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs would be listening on or simply accept the default port suggested: [b] (14250) 8 Provide the following coordinator disks-related details at the installer prompt: ■ Enter the I/O fencing disk policy for the coordinator disks. Enter fencing mechanism for the disk(s) (raw/dmp): [b,q,?] raw ■ Choose the coordinator disks from the list of available disks that the installer displays.
Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs 9 163 Verify and confirm the coordination points information for the fencing configuration. For example: Total number of coordination points being used: 3 CP Server (Port): 1. 10.209.80.197 (14250) SCSI-3 disks: 1. c1t1d0 2.
164 Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs After the installer establishes trust between the authentication brokers of the CP servers and the application cluster nodes, press Enter to continue. 11 Verify and confirm the I/O fencing configuration information.
Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs 14 Review the output as the installer stops and restarts the VCS and the fencing processes on each application cluster node, and completes the I/O fencing configuration. 15 Note the location of the configuration log files, summary files, and response files that the installer displays for later use.
166 Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs 6 Enter the total number of coordination points as 1. Enter the total number of co-ordination points including both CP servers and disks: [b] (3) 1 Read the installer warning carefully before you proceed with the configuration. 7 Provide the following CP server details at the installer prompt: ■ Enter the virtual IP address or the host name of the virtual IP address for the CP server.
Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs ■ 167 If the Storage Foundation Cluster File System (application cluster) nodes and the CP server use different AT root brokers, enter y at the installer prompt and provide the following information: ■ Hostname for the authentication broker for any one of the CP servers ■ Port number where the authentication broker for the CP server is listening for establishing trust ■ Hostname for the authentication broker
168 Configuring SFCFS for data integrity Setting up server-based I/O fencing using installsfcfs 11 Review the output as the installer updates the application cluster information on each of the CP servers to ensure connectivity between them. The installer then populates the /etc/vxfenmode file with the appropriate details in each of the application cluster nodes. The installer also populates the /etc/vxfenmode file with the entry single_cp=1 for such single CP server fencing configuration.
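As an illustration, the resulting /etc/vxfenmode for a single CP server configuration might contain entries similar to the following; the IP address and port are the sample values used earlier in this chapter:
vxfen_mode=customized
vxfen_mechanism=cps
cps1=[10.209.80.197]:14250
single_cp=1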
Configuring SFCFS for data integrity Setting up non-SCSI3 server-based I/O fencing using installsfcfs Setting up non-SCSI3 server-based I/O fencing using installsfcfs If the Storage Foundation Cluster File System cluster is configured to run in secure mode, then verify that the configuration is correct before you configure non-SCSI3 server-based I/O fencing.
170 Configuring SFCFS for data integrity Setting up server-based I/O fencing manually Table 9-3 Sample values in procedure CP server configuration component Sample name CP server mycps1.symantecexample.com Node #1 - SFCFS cluster galaxy Node #2 - SFCFS cluster nebula Cluster name clus1 Cluster UUID {f0735332-1dd1-11b2} To manually configure CP servers for use by the SFCFS cluster 1 Determine the cluster name and uuid on the SFCFS cluster.
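A sketch of one way to collect these values on a node of the SFCFS cluster (the UUID file path matches the one the installer creates):
# haclus -value ClusterName
clus1
# cat /etc/vx/.uuids/clusuuid
{f0735332-1dd1-11b2}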
Configuring SFCFS for data integrity Setting up server-based I/O fencing manually 3 Add the SFCFS cluster and nodes to each CP server. For example, issue the following command on the CP server (mycps1.symantecexample.com) to add the cluster: # cpsadm -s mycps1.symantecexample.com -a add_clus\ -c clus1 -u {f0735332-1dd1-11b2} Cluster clus1 added successfully Issue the following command on the CP server (mycps1.symantecexample.com) to add the first node: # cpsadm -s mycps1.symantecexample.
172 Configuring SFCFS for data integrity Setting up server-based I/O fencing manually 5 Add the users to the CP server. First, determine the user@domain to be added on the SFCFS cluster (application cluster). The user for fencing should be of the form _HA_VCS_short-hostname and domain name is that of HA_SERVICES user in the output of command: # /opt/VRTScps/bin/cpsat listpd -t local Next, issue the following commands on the CP server (mycps1.symantecexample.com): # cpsadm -s mycps1.symantecexample.
Configuring SFCFS for data integrity Setting up server-based I/O fencing manually 6 Authorize the CP server user to administer the SFCFS cluster. You must perform this task for the CP server users corresponding to each node in the SFCFS cluster. For example, issue the following command on the CP server (mycps1.symantecexample.com) for SFCFS cluster clus1 with two nodes galaxy and nebula: # cpsadm -s mycps1.symantecexample.
174 Configuring SFCFS for data integrity Setting up server-based I/O fencing manually Note: Whenever coordinator disks are used as coordination points in your I/O fencing configuration, you must create a disk group (vxfendg). You must specify this disk group in the /etc/vxfenmode file. See “Setting up coordinator disk groups” on page 156.
Configuring SFCFS for data integrity Setting up server-based I/O fencing manually vxfen_mode=customized # vxfen_mechanism determines the mechanism for customized I/O # fencing that should be used. # # available options: # cps - use a coordination point server with optional script # controlled scsi3 disks # vxfen_mechanism=cps # # scsi3_disk_policy determines the way in which I/O Fencing # communicates with the coordination disks. This field is # required only if customized coordinator disks are being used.
176 Configuring SFCFS for data integrity Setting up server-based I/O fencing manually # # # # # # # # # # # # # # # # # # # # # # # # # # # Examples: cps1=[192.168.0.23]:14250 cps2=[mycps.company.com]:14250 SCSI-3 compliant coordinator disks are specified as: vxfendg= Example: vxfendg=vxfencoorddg Examples of different configurations: 1. All CP server coordination points cps1= cps2= cps3= 2.
Configuring SFCFS for data integrity Setting up server-based I/O fencing manually Table 9-4 vxfenmode file parameters (continued) vxfenmode File Parameter Description security Security parameter 1 indicates that Symantec Product Authentication Service is used for CP server communications. Security parameter 0 indicates that communication with the CP server is made in non-secure mode. The default security value is 1.
178 Configuring SFCFS for data integrity Setting up server-based I/O fencing manually To configure Configuration Point agent to monitor coordination points 1 Ensure that your SFCFS cluster has been properly installed and configured with fencing enabled.
Configuring SFCFS for data integrity Setting up server-based I/O fencing manually 3 Verify the status of the agent on the SFCFS cluster using the hares commands. For example: # hares -state coordpoint The following is an example of the command and output: # hares -state coordpoint
# Resource     Attribute    System    Value
coordpoint     State        galaxy    ONLINE
coordpoint     State        nebula    ONLINE
4 Access the engine log to view the agent log. The agent log is written to the engine log.
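The engine log is typically /var/VRTSvcs/log/engine_A.log; for example:
# tail -f /var/VRTSvcs/log/engine_A.log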
180 Configuring SFCFS for data integrity Setting up non-SCSI3 fencing in virtual environments manually To verify the server-based I/O fencing configuration 1 Verify that the I/O fencing configuration was successful by running the vxfenadm command. For example, run the following command: # vxfenadm -d Note: For troubleshooting any server-based I/O fencing configuration issues, refer to the Veritas Storage Foundation Cluster File System Administrator's Guide.
Configuring SFCFS for data integrity Setting up non-SCSI3 fencing in virtual environments manually 5 Enter the following command to change the vxfen_vxfnd_tmt parameter value: # /usr/sbin/kctune vxfen_vxfnd_tmt=25 6 On each node, edit the /etc/vxfenmode file as follows: loser_exit_delay=55 vxfen_script_timeout=25 Refer to the sample /etc/vxfenmode file.
182 Configuring SFCFS for data integrity Setting up non-SCSI3 fencing in virtual environments manually ■ Make the VCS configuration file read-only # haconf -dump -makero 9 Make sure that the UseFence attribute in the VCS configuration file main.cf is set to SCSI3. 10 To make these VxFEN changes take effect, stop and restart VxFEN and the dependent modules ■ On each node, run the following command to stop VCS: /sbin/init.
Configuring SFCFS for data integrity Setting up non-SCSI3 fencing in virtual environments manually 183
# scsi3_disk_policy determines the way in which I/O Fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
184 Configuring SFCFS for data integrity Enabling or disabling the preferred fencing policy # brackets ([]), followed by ":" and CPS port number. # # Examples: # cps1=[192.168.0.23]:14250 # cps2=[mycps.company.com]:14250 # # SCSI-3 compliant coordinator disks are specified as: # # vxfendg= # Example: # vxfendg=vxfencoorddg # # Examples of different configurations: # 1. All CP server coordination points # cps1= # cps2= # cps3= # # 2.
Configuring SFCFS for data integrity Enabling or disabling the preferred fencing policy To enable preferred fencing for the I/O fencing configuration 1 Make sure that the cluster is running with I/O fencing set up. # vxfenadm -d 2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3. # haclus -value UseFence 3 To enable system-based race policy, perform the following steps: ■ Make the VCS configuration writable.
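The command sequence that typically completes the system-based policy steps looks like the following; the attribute names are taken from the VCS documentation, and galaxy and the weight value 50 are only examples:
# haconf -makerw
# haclus -modify PreferredFencingPolicy System
# hasys -modify galaxy FencingWeight 50
# haconf -dump -makero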
186 Configuring SFCFS for data integrity Enabling or disabling the preferred fencing policy # hagrp -modify service_group Priority 1 Make sure that you assign a parent service group an equal or lower priority than its child service group. In case the parent and the child service groups are hosted in different subclusters, then the subcluster that hosts the child service group gets higher preference. ■ Save the VCS configuration.
Section 4 Upgrading Storage Foundation Cluster File System ■ Chapter 10. Preparing to upgrade Veritas Storage Foundation Cluster File System ■ Chapter 11. Performing a typical SFCFS upgrade using the installer ■ Chapter 12. Performing a phased upgrade ■ Chapter 13. Upgrading the operating system ■ Chapter 14. Upgrading Veritas Volume Replicator ■ Chapter 15. Migrating from SFHA to SFCFS or SFCFSHA ■ Chapter 16.
Chapter 10 Preparing to upgrade Veritas Storage Foundation Cluster File System This chapter includes the following topics: ■ About upgrading ■ About the different ways that you can upgrade ■ Supported upgrade paths ■ About using the installer to upgrade when the root disk is encapsulated ■ Preparing to upgrade About upgrading You have many types of upgrades available. Before you start to upgrade, review the types of upgrades for the Veritas products.
190 Preparing to upgrade Veritas Storage Foundation Cluster File System About the different ways that you can upgrade If you want to upgrade CP server systems that use VCS or SFHA to 5.1 SP1, make sure you upgraded all application clusters to 5.1 SP1. Then, upgrade VCS or SFHA on the CP server systems. About the different ways that you can upgrade Symantec offers you several different ways to upgrade.
Preparing to upgrade Veritas Storage Foundation Cluster File System Supported upgrade paths Table 10-2 HP-UX upgrades using the script- or Web-based installer Veritas software versions 11.11 3.5 (SF/SFCFS) Upgrade OS to 11.23, N/A upgrade OS to 11.31, then upgrade directly to 5.1SP1 using the installer script (SFCFS requires additional manual changes as mentioned in IG) N/A 3.5 (VCS/DBEDs/ VVR/SFRAC) Upgrade OS to 11.23, N/A upgrade to 4.1, upgrade OS to 11.31, then upgrade directly to 5.
192 Preparing to upgrade Veritas Storage Foundation Cluster File System About using the installer to upgrade when the root disk is encapsulated About using the installer to upgrade when the root disk is encapsulated When you use the installer to upgrade from a previous version of SFCFS and the system where you plan to upgrade has an encapsulated root disk, you may have to unencapsulate it.
Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade 4 Run the vxlicrep, vxdisk list, and vxprint -ht commands and record the output. Use this information to reconfigure your system after the upgrade. 5 If you are installing the high availability version of the Veritas Storage Foundation 5.
194 Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade To determine which release of VxVM you have installed ◆ To determine which release of VxVM you have installed, enter the following command: # swlist -l product VRTSvxvm If you have the 5.0 release installed, the command output includes the following information: VRTSvxvm 5.0.31.1 Veritas Volume Manager by Symantec If you have the 5.0.
Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade 3 If you are upgrading Veritas Storage Foundation for Oracle, resynchronize all existing snapshots before upgrading. # /opt/VRTS/bin/dbed_vmsnap -S $ORACLE_SID -f SNAPPLAN -o resync 4 Use the vxlicrep command to make a record of the currently installed Veritas licenses. Print the output or save it on a different system. 5 Stop activity to all VxVM volumes.
196 Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade 10 (Optional) If a file system is not clean, enter the following commands for that file system: # fsck -F vxfs filesystem # mount -F vxfs filesystem mountpoint # umount mountpoint This should complete any extended operations that were outstanding on the file system and unmount the file system cleanly.
Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade ■ Disassociate the SRL. Preupgrade planning for Veritas Volume Replicator Before installing or upgrading Veritas Volume Replicator (VVR): ■ Confirm that your system has enough free disk space to install VVR. ■ Make sure you have root permissions. You must have root permissions to perform the install and upgrade procedures.
198 Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade does not calculate the checksum. Instead, it relies on the TCP checksum mechanism. Table 10-4 VVR versions and checksum calculations VVR prior to 5.1 SP1 VVR 5.
Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade ■ VVR supports replication of a shared disk group only when all the nodes in the cluster that share the disk group are at IPv4 or IPv6 Preparing to upgrade VVR when VCS agents are configured To prepare to upgrade VVR when VCS agents for VVR are configured, perform the following tasks in the order presented: ■ Freezing the service groups and stopping all the applications ■ Preparing for the upgrade when VCS agents
200 Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade 5 On any node in the cluster, list the groups in your configuration: # hagrp -list 6 On any node in the cluster, freeze all service groups except the ClusterService group by typing the following command for each group name displayed in the output from step 5. # hagrp -freeze group_name -persistent Note: Write down the list of frozen service groups for future use.
Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade 10 For private disk groups, determine and note down the hosts on which the disk groups are imported. See “Determining the nodes on which disk groups are online” on page 201. 11 For shared disk groups, run the following command on any node in the CVM cluster: # vxdctl -c mode Note the master and record it for future use.
202 Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade To prepare a configuration with VCS agents for an upgrade 1 List the disk groups on each of the nodes by typing the following command on each node: # vxdisk -o alldgs list The output displays a list of the disk groups that are under VCS control and the disk groups that are not under VCS control. Note: The disk groups that are not locally imported are displayed in parentheses.
Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade When you upgrade Storage Foundation products with the product installer, the installer automatically upgrades the array support. If you upgrade Storage Foundation products with manual steps, you should remove any external ASLs or APMs that were installed previously on your system. The installation of the VRTSvxvm depot exits with an error if external ASLs or APMs are detected.
204 Preparing to upgrade Veritas Storage Foundation Cluster File System Preparing to upgrade 4 On the node selected in 1, after the disk layout has been successfully upgraded, unmount the file system. # umount /mnt1 5 This file system can be mounted on all nodes of the cluster using cfsmount.
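For example, assuming /mnt1 is already registered as a cluster mount point in the VCS configuration:
# cfsmount /mnt1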
Chapter 11 Performing a typical SFCFS upgrade using the installer This chapter includes the following topics: ■ Performing a full upgrade from SFCFS versions on HP-UX 11i v2 to SFCFS 5.1 SP1 ■ Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 HP-UX 11iv3 ■ Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 on the latest HP-UX 11iv3 Performing a full upgrade from SFCFS versions on HP-UX 11i v2 to SFCFS 5.1 SP1 Use these steps to perform a full upgrade from SFCFS 4.
206 Performing a typical SFCFS upgrade using the installer Performing a full upgrade from SFCFS versions on HP-UX 11i v2 to SFCFS 5.1 SP1 3 If you created local VxFS mount points on VxVM volumes and added them to the /etc/fstab file, comment out the mount point entries in the /etc/fstab file. 4 Stop all applications that use VxFS or VxVM disk groups, whether local or CFS.
Performing a typical SFCFS upgrade using the installer Performing a full upgrade from SFCFS versions on HP-UX 11i v2 to SFCFS 5.1 SP1 11 If the cluster-wide attribute “UseFence” is set to SCSI3, then reset the value to NONE in the /etc/VRTSvcs/conf/config/main.cf file. 12 On each node, edit the /etc/vxfenmode file to configure I/O fencing in disabled mode. # cat /etc/vxfenmode vxfen_mode=disabled 13 On each node, change LLT_START=0 in the file /etc/rc.config.d/lltconf.
208 Performing a typical SFCFS upgrade using the installer Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 HP-UX 11iv3 22 Set the clusterwide attribute "UseFence" to use SCSI3. Add the following line to the /etc/VRTSvcs/conf/config/main.cf file: UseFence=SCSI3 23 Start the VCS engine on each system: # hastart Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 HP-UX 11iv3 Use this full upgrade procedure if the operating system upgrade is not required.
Performing a typical SFCFS upgrade using the installer Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 on the latest HP-UX 11iv3 To perform a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 on the latest HP-UX 11iv3 1 Log in as superuser to one of the nodes in the cluster. 2 If you have created VxFS mount points on VxVM volumes and added them to the /etc/fstab file, comment out the mount point entries in the /etc/fstab file.
210 Performing a typical SFCFS upgrade using the installer Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 on the latest HP-UX 11iv3 8 Freeze all the VCS service groups by running the following commands: # haconf -makerw # hagrp -freeze servicegroup -persistent # haconf -dump -makero 9 Stop VCS on all the nodes: # hastop -all 10 If the cluster-wide attribute “UseFence” is set to SCSI3, then reset the value to NONE in the /etc/VRTSvcs/conf/config/main.
Performing a typical SFCFS upgrade using the installer Performing a full upgrade from SFCFS 5.x on HP-UX 11iv3 to 5.1 SP1 on the latest HP-UX 11iv3 18 Execute the following steps on all the nodes: # cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode # /sbin/init.d/vxfen stop # /sbin/init.d/vxfen start 19 Set the clusterwide attribute "UseFence" to use SCSI3. Add the following line to the /etc/VRTSvcs/conf/config/main.
Chapter 12 Performing a phased upgrade This chapter includes the following topics: ■ Performing a phased upgrade from version 5.x on HP-UX 11i v3 to Veritas Storage Foundation Cluster File System 5.1 SP1 ■ Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 ■ Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 Performing a phased upgrade from version 5.
214 Performing a phased upgrade Performing a phased upgrade from version 5.x on HP-UX 11i v3 to Veritas Storage Foundation Cluster File System 5.1 SP1 Note: Your downtime ends after you bring the first half of the cluster online. ■ Upgrading the second half of the cluster, system03 and system04. Perform the following steps on the first half of the cluster, system01 and system02. To upgrade the first half of the cluster 1 Stop all the applications on the nodes that are not under VCS control.
Performing a phased upgrade Performing a phased upgrade from version 5.x on HP-UX 11i v3 to Veritas Storage Foundation Cluster File System 5.1 SP1 9 If the cluster-wide attribute UseFence is set to SCSI3, then reset the value to NONE in the /etc/VRTSvcs/conf/config/main.cf file. 10 Stop all the modules on the first half of the cluster. # /sbin/init.d/odm stop # /sbin/init.
216 Performing a phased upgrade Performing a phased upgrade from version 5.x on HP-UX 11i v3 to Veritas Storage Foundation Cluster File System 5.1 SP1 To stop the second half of the cluster 1 Stop all the applications on the node that are not under VCS control. Use native application commands to stop the applications. 2 Stop all VCS service groups.
Performing a phased upgrade Performing a phased upgrade from version 5.x on HP-UX 11i v3 to Veritas Storage Foundation Cluster File System 5.1 SP1 9 Stop all the modules on the second half of the cluster: # /sbin/init.d/odm stop # /sbin/init.
218 Performing a phased upgrade Performing a phased upgrade from version 5.x on HP-UX 11i v3 to Veritas Storage Foundation Cluster File System 5.1 SP1 6 Reboot the first half of the cluster: # /usr/sbin/shutdown -r now 7 After the nodes come up, seed the cluster membership: # gabconfig -x The first half of the cluster is now up and running. Note: The downtime ends here. Perform the following steps on the second half of the cluster, system03 and system04, to upgrade the second half of the cluster.
Performing a phased upgrade Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 The phased upgrade involves the following steps: ■ Upgrading the first half of the cluster, system01 and system02. Note: Your downtime starts after you complete the upgrade of the first half of the cluster. ■ Stopping the second half of the cluster, system03 and system04.
220 Performing a phased upgrade Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 5 Stop VCS on the first half of the cluster: # hastop -local -force 6 If you created local VxFS mount points on VxVM volumes and added them to the /etc/fstab file, comment out the mount point entries in the /etc/fstab file. 7 Set the LLT_START attribute to 0 in the /etc/rc.config.
Performing a phased upgrade Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 12 Upgrade the operating system, choosing the three base bundles Base-VxFS-50, Base-VxVM-50, and Base-VxTools-50. 13 Upgrade SFCFS: # ./installer Choose the upgrade option "G" when the installer prompts you. Note: DO NOT reboot the cluster. Perform the following steps on the second half of the cluster, system03 and system04, to stop the second half of the cluster. Note: The downtime starts now.
222 Performing a phased upgrade Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 7 On each node of the second half of the cluster, edit the /etc/vxfenmode file to configure I/O fencing in disabled mode: # cat /etc/vxfenmode vxfen_mode=disabled 8 If the cluster-wide attribute UseFence is set to SCSI3, then reset the value to NONE in the /etc/VRTSvcs/conf/config/main.cf file. 9 Stop all the modules on the second half of the cluster: # /sbin/init.
Performing a phased upgrade Performing phased upgrade of SFCFS from versions 4.x or 5.x on HP-UX 11i v2 to 5.1SP1 5 Set the clusterwide attribute UseFence to use SCSI3. Add the following line to the /etc/VRTSvcs/conf/config/main.cf file: UseFence=SCSI3 6 Reboot the first half of the cluster: # /usr/sbin/shutdown -r now 7 After the nodes come up, seed the cluster membership: # gabconfig -x The first half of the cluster is now up and running. Note: The downtime ends here.
224 Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 This section contains procedures for the Veritas Storage Foundation Cluster File System upgrade.
Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 6 225 You are prompted to enter the system names on which the software is to be installed. Enter the system name or names and then press Return. Depending on your existing configuration, various messages and prompts may appear. Answer the prompts appropriately. 7 You are prompted to agree with the End User License Agreement. Enter y and press Return.
226 Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 After successful completion of the upgrade, any disk groups that were created in Storage Foundation 4.1, 4.1 MP1, 4.1 MP2, 5.0, 5.0 MP1, or 5.0 MP2 are accessible by Storage Foundation 5.1 SP1.
Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 9 The installer lists the packages that will be installed or updated. You are prompted to confirm that you are ready to stop SF processes. Do you want to stop SF processes now? [y,n,q,?] (y) y If you select y, the installer stops the product processes and makes some configuration updates before upgrading.
228 Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 3 Upgrade from HP-UX 11i v2 to the latest available HP-UX 11i v3 fusion release, using practices recommended by HP. The HP-UX 11i v3 fusion release includes VxVM 5.0 by default. 4 If patches to HP-UX 11i v3 are required, apply the patches before upgrading the product. 5 Install Storage Foundation 5.1 SP1 for HP-UX 11i v3. 6 Start Storage Foundation 5.
Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 After successful completion of the upgrade, any disk groups that were created in Storage Foundation High Availability 5.0 are accessible by Storage Foundation High Availability 5.1 SP1. To upgrade from SFHA or SFORA HA 5.0 on 11.31 to 5.1 SP1 on 11.31 1 Stop activity to all SFHA volumes.
230 Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 8 Enter G to upgrade and press Return. 9 You are prompted to enter the system names on which the software is to be installed. Enter the system name or names and then press Return. Depending on your existing configuration, various messages and prompts may appear. Answer the prompts appropriately. 10 You are prompted to agree with the End User License Agreement.
Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 15 Start Storage Foundation High Availability 5.1 SP1 for HP-UX 11i v3 using the following command: # ./installsfha -start 16 Check if the VEA service was restarted: # /opt/VRTS/bin/vxsvcctrl status 17 If the VEA service is not running, restart it: # /opt/VRTS/bin/vxsvcctrl start 18 Disk groups that were created using VxVM 5.
232 Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 To prepare for upgrading SFHA or SFORAHA on HP-UX 11i v2 to HP-UX 11iv3 1 Perform the necessary pre-upgrade steps before upgrading the product stack to SFHA 5.1SP1. 2 Take all the service groups offline. # hagrp -offline servicegroup1 -sys host1 3 Unmount all the file systems from all the nodes that are not under VCS control.
Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 Upgrading HP-UX Upgrade the HP-UX operating system to the latest available HP-UX 11i v3 fusion release. The Base-VxFS-50, Base-VxVM-50 and Base-VxTools-50 bundles need to be selected while upgrading using update-ux(1M). If patches to HP-UX 11i v3 are required, apply the patches before upgrading the Veritas product.
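A sketch of selecting these bundles with update-ux(1M), where os_path is a placeholder for the depot location and HPUX11i-DC-OE is an example Operating Environment name:
# update-ux -s os_path HPUX11i-DC-OE Base-VxFS-50 Base-VxVM-50 Base-VxTools-50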
234 Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 8 You are prompted to enter the system names on which the software is to be installed. Enter the system name or names and then press Return. Depending on your existing configuration, various messages and prompts may appear. Answer the prompts appropriately. 9 You are prompted to agree with the End User License Agreement. Enter y and press Return.
Performing a phased upgrade Upgrading Storage Foundation Cluster File System and High Availability software from a release prior to 5.1 SP1 15 After reboot, check if the VEA service has restarted: # /opt/VRTS/bin/vxsvcctrl status 16 If the VEA service is not running, restart it: # /opt/VRTS/bin/vxsvcctrl start 17 Disk groups that were created using VxVM 4.1 or VxVM 5.0 can be imported after upgrading to VxVM 5.1.100. However, we recommend upgrading the VxVM disk groups to the latest version.
Chapter 13 Upgrading the operating system This chapter includes the following topics: ■ Upgrading the HP-UX operating system Upgrading the HP-UX operating system If you are on an unsupported version of the operating system, you need to upgrade it to HP-UX B.11.31.1009, HP-UX 11i Version 3 September 2010 Operating Environments Update Release or later.
238 Upgrading the operating system Upgrading the HP-UX operating system # update-ux -s os_path HPUX11i-DC-OE where os_path is the full path of the directory containing the operating system depots. For detailed instructions on upgrading the operating system, see the operating system documentation.
Chapter 14 Upgrading Veritas Volume Replicator This chapter includes the following topics: ■ Upgrading Veritas Volume Replicator Upgrading Veritas Volume Replicator If a previous version of Veritas Volume Replicator (VVR) is configured, the product installer upgrades VVR automatically when you upgrade the Storage Foundation products. See “Upgrading VVR without disrupting replication” on page 239.
240 Upgrading Veritas Volume Replicator Upgrading Veritas Volume Replicator Note: If you have a cluster setup, you must upgrade all the nodes in the cluster at the same time. Upgrading VVR on the Secondary Follow these instructions to upgrade the Secondary hosts. To upgrade the Secondary 1 Stop replication to the Secondary host by initiating a Primary pause using the following command: # vradmin -g diskgroup pauserep local_rvgname 2 Upgrade from VVR 4.1 or later to VVR 5.1 SP1 on the Secondary.
Chapter 15 Migrating from SFHA to SFCFS or SFCFSHA This chapter includes the following topics: ■ Migrating from SFHA to SFCFS or SFCFS HA 5.1 SP1 Migrating from SFHA to SFCFS or SFCFS HA 5.1 SP1 This section describes how to migrate Storage Foundation High Availability (SFHA) 5.1 SP1 to Storage Foundation Cluster File System (SFCFS) or Storage Foundation Cluster File System High Availability (SFCFSHA) 5.1 SP1.
242 Migrating from SFHA to SFCFS or SFCFSHA Migrating from SFHA to SFCFS or SFCFS HA 5.1 SP1 4 Unmount all the VxFS file systems which are not under VCS control. If the local file systems are under VCS control, then VCS unmounts the file systems when the failover service group is brought offline. On the nodes that have any mounted VxFS local file systems that are not under VCS control: # umount -F vxfs -a 5 Stop all the activity on the volumes and deport the local disk groups.
Migrating from SFHA to SFCFS or SFCFSHA Migrating from SFHA to SFCFS or SFCFS HA 5.1 SP1 11 Find out which node in the cluster is the master node: # /opt/VRTS/bin/vxclustadm nidmap 12 On the master node, import disk groups: # vxdg -s import dg_name This release supports certain commands to be executed from the slave node such as vxdg -s import dg_name. See the Storage Foundation Cluster File System Administrator's Guide for more information.
Chapter 16 Post-upgrade tasks This chapter includes the following topics: ■ Configuring Powerfail Timeout after upgrade Configuring Powerfail Timeout after upgrade When you install SFCFS, SFCFS configures Powerfail Timeout (PFTO) using tunable parameters. Starting with SFCFS 5.0.1, the Powerfail Timeout (PFTO) has the following default values: ■ disabled for devices using the HP-UX native multi-pathing ■ enabled for devices using DMP After installation, you can override the defaults, if required.
246 Post-upgrade tasks Configuring Powerfail Timeout after upgrade For more information about controlling Powerfail Timeout, see the Veritas Volume Manager Administrator's Guide.
Section 5 Verification of the installation or the upgrade ■ Chapter 17.
Chapter 17 Verifying the Storage Foundation Cluster File System installation This chapter includes the following topics: ■ Verifying that the products were installed ■ Installation log files ■ About enabling LDAP authentication for clusters that run in secure mode ■ Starting and stopping processes for the Veritas products ■ Checking Veritas Volume Manager processes ■ Checking Veritas File System installation ■ Verifying agent configuration for Storage Foundation Cluster File System ■ Synchr
250 Verifying the Storage Foundation Cluster File System installation Verifying that the products were installed Verifying that the products were installed Verify that the SFCFS products are installed. You can use the swlist command to check which depots have been installed: # swlist -l product | grep VRTS Use the following sections to further verify the product installation.
Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode authentication broker. AT supports all common LDAP distributions such as Sun Directory Server, Netscape, OpenLDAP, and Windows Active Directory. For a cluster that runs in secure mode, you must enable the LDAP authentication plug-in if the VCS users belong to an LDAP domain. See “Enabling LDAP authentication for clusters that run in secure mode” on page 252.
252 Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode ■ The type of LDAP schema used (the default is RFC 2307) ■ UserObjectClass (the default is posixAccount) ■ UserObject Attribute (the default is uid) ■ User Group Attribute (the default is gidNumber) ■ Group Object Class (the default is posixGroup) ■ GroupObject Attribute (the default is cn) ■ Group GID Attribute (the default is gidNumber) ■ Group Membe
Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode 253 To enable OpenLDAP authentication for clusters that run in secure mode 1 Add the LDAP domain to the AT configuration using the vssat command. The following example adds the LDAP domain, MYENTERPRISE: # /opt/VRTSat/bin/vssat addldapdomain \ --domainname "MYENTERPRISE.symantecdomain.com"\ --server_url "ldap://my_openldap_host.symantecexample.
254 Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode 3 Add the LDAP user to the main.cf file. # haconf -makerw # hauser -add "CN=vcsadmin1/CN=people/\ DC=symantecdomain/DC=myenterprise/\ DC=com@myenterprise.symantecdomain.
Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode 6 Verify that you can log on to VCS. For example: # halogin vcsadmin1 password # hasys -state
VCS NOTICE V-16-1-52563 VCS Login:vcsadmin1
#System    Attribute    Value
galaxy     Attribute    RUNNING
nebula     Attribute    RUNNING
Similarly, you can use the same LDAP user credentials to log on to the SFCFS node using the VCS Cluster Manager (Java Console).
256 Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode To enable Windows Active Directory authentication for clusters that run in secure mode 1 Run the LDAP configuration tool atldapconf using the -d option. The -d option discovers and retrieves an LDAP properties file which is a prioritized attribute list.
Verifying the Storage Foundation Cluster File System installation About enabling LDAP authentication for clusters that run in secure mode 4 List the LDAP domains to verify that the Windows Active Directory server integration is complete.
258 Verifying the Storage Foundation Cluster File System installation Starting and stopping processes for the Veritas products 6 Verify that you can log on to VCS. For example: # halogin vcsadmin1 password # hasys -state
VCS NOTICE V-16-1-52563 VCS Login:vcsadmin1
#System    Attribute    Value
galaxy     Attribute    RUNNING
nebula     Attribute    RUNNING
Similarly, you can use the same LDAP user credentials to log on to the SFCFS node using the VCS Cluster Manager (Java Console).
Verifying the Storage Foundation Cluster File System installation Checking Veritas File System installation To confirm that key Volume Manager processes are running ◆ Type the following command: # ps -ef | grep vx Entries for the vxiod, vxconfigd, vxnotify, vxesd, vxrelocd, vxpal, vxcached, vxconfigbackupd, and vxsvc processes should appear in the output from this command. If you disable hot-relocation, the vxrelocd and vxnotify processes are not displayed.
260 Verifying the Storage Foundation Cluster File System installation Synchronizing time on Cluster File Systems To verify the agent configuration ◆ Enter the cluster status command from any node in the cluster: # cfscluster status Output resembles:
Node             : system01
Cluster Manager  : running
CVM state        : running
No mount point registered with cluster configuration
Node             : system02
Cluster Manager  : running
CVM state        : running
No mount point registered with cluster configuration
Synchronizing time on Cluster File Systems
Verifying the Storage Foundation Cluster File System installation Configuring VCS for Storage Foundation Cluster File System from the command line. Changes made by editing the configuration files take effect when the cluster is restarted. The node on which the changes were made should be the first node to be brought back online. main.cf file The VCS configuration file main.cf is created during the installation procedure. After installation, the main.
262 Verifying the Storage Foundation Cluster File System installation Configuring VCS for Storage Foundation Cluster File System CVMTransport = gab CVMTimeout = 200 ) CVMVxconfigd cvm_vxconfigd ( Critical = 0 CVMVxconfigdArgs = { syslog } ) cvm_clus requires cvm_vxconfigd vxfsckd requires cvm_clus // resource dependency tree // // group cvm // { // CVMCluster cvm_clus // { // CVMVxconfigd cvm_vxconfigd // } // } Storage Foundation Cluster File System HA Only If you configured VCS Cluster Manager (Web Con
Verifying the Storage Foundation Cluster File System installation About the cluster UUID To configure the cluster UUID when you create a cluster manually ◆ On one node in the cluster, perform the following command to populate the cluster UUID on each node in the cluster. # /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure nodeA nodeB ... nodeN Where nodeA, nodeB, through nodeN are the names of the cluster nodes. About the cluster UUID You can verify the existence of the cluster UUID.
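One way to check is to display the UUID file that the installer or the uuidconfig.pl script creates on each node:
# cat /etc/vx/.uuids/clusuuid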
264 Verifying the Storage Foundation Cluster File System installation About the LLT and GAB configuration files Table 17-1 LLT configuration files (continued) File Description /etc/llthosts The file llthosts is a database that contains one entry per system. This file links the LLT system ID (in the first column) with the LLT host name. This file must be identical on each node in the cluster. A mismatch of the contents of the file can cause indeterminate behavior in the cluster.
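For example, in a two-node cluster made up of galaxy and nebula, /etc/llthosts might contain:
0 galaxy
1 nebula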
Verifying the Storage Foundation Cluster File System installation Verifying the LLT, GAB, and VCS configuration files Table 17-2 GAB configuration files File Description /etc/rc.config.d/ gabconf This file stores the start and stop environment variables for GAB: GAB_START—Defines the startup behavior for the GAB module after a system reboot. Valid values include: 1—Indicates that GAB is enabled to start up. 0—Indicates that GAB is disabled to start up.
266 Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation ■ VCS /etc/VRTSvcs/conf/config/main.cf 2 Verify the content of the configuration files. See “About the LLT and GAB configuration files” on page 263. Verifying LLT, GAB, and cluster operation Verify the operation of LLT, GAB, and the cluster using the VCS commands. To verify LLT, GAB, and cluster operation 1 Log in to any node in the cluster as superuser.
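The LLT node states shown next are typically produced with the lltstat utility; for example:
# lltstat -n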
Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation
*0 galaxy    OPEN    2
 1 nebula    OPEN    2
Each node has two links and each node is in the OPEN state. The asterisk (*) denotes the node on which you typed the command.
268 Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation The command reports the status on the two active nodes in the cluster, galaxy and nebula. For each correctly configured node, the information must show the following: ■ A state of OPEN ■ A status for each link of UP ■ A MAC address for each link However, the output in the example shows different details for the node nebula.
Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation
f    Cluster File System (CFS)
h    Veritas Cluster Server (VCS: High Availability Daemon)
u    Cluster Volume Manager (CVM) (to ship commands from slave node to master node)
     Port u in the gabconfig output is visible with CVM protocol version >= 100.
v    Cluster Volume Manager (CVM)
w    vxconfigd (module for CVM)
For more information on GAB, refer to the Veritas Cluster Server Administrator's Guide.
270 Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation ■ If GAB does not operate, the command does not return any GAB port membership information: GAB Port Memberships =================================== Verifying the cluster Verify the status of the cluster using the hastatus command. This command returns the system state and the group state. Refer to the hastatus(1M) manual page.
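For example, a one-time snapshot of the system and group states can be taken with:
# hastatus -summary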
Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation 271 Verifying the cluster nodes Verify the information of the cluster systems using the hasys -display command. The information for each node in the output should be similar. Refer to the hasys(1M) manual page. Refer to the Veritas Cluster Server Administrator's Guide for information about the system attributes for VCS.
272 Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation
#System    Attribute          Value
galaxy     DiskHbStatus
galaxy     DynamicLoad        0
galaxy     EngineRestarted    0
galaxy     EngineVersion      5.0.31.
Verifying the Storage Foundation Cluster File System installation Verifying LLT, GAB, and cluster operation 273
#System    Attribute      Value
galaxy     TRSE           0
galaxy     UpDownState    Up
galaxy     UserInt        0
galaxy     UserStr
galaxy     VCSFeatures    DR
galaxy     VCSMode        VCS_CFS_VRTS
Section 6 Adding and removing nodes ■ Chapter 18. Adding a node to a cluster ■ Chapter 19.
Chapter 18 Adding a node to a cluster This chapter includes the following topics: ■ About adding a node to a cluster ■ Before adding a node to a cluster ■ Preparing to add a node to a cluster ■ Adding a node to a cluster ■ Configuring server-based fencing on the new node ■ Updating the Storage Foundation for Databases (SFDB) repository after adding a node About adding a node to a cluster After you install SFCFS and create a cluster, you can add and remove nodes from the cluster.
278 Adding a node to a cluster Before adding a node to a cluster ■ Hardware and software requirements are met. See “Meeting hardware and software requirements” on page 278. ■ Hardware is set up for the new node. See “Setting up the hardware” on page 278. ■ The existing cluster is an SFCFS cluster and that SFCFS is running on the cluster. ■ The new system has the same identical operating system versions and patch levels as that of the existing cluster.
Adding a node to a cluster Before adding a node to a cluster Figure 18-1 Adding a node to a two-node cluster using two switches Public network Private network New node: saturn To set up the hardware 1 Connect the SFCFS private Ethernet controllers. Perform the following tasks as necessary: ■ When you add nodes to a cluster, use independent switches or hubs for the private network connections.
280 Adding a node to a cluster Preparing to add a node to a cluster ■ The network interface names used for the private interconnects on the new node must be the same as that of the existing nodes in the cluster. Preparing to add a node to a cluster Complete the following preparatory steps on the new node before you add the node to an existing SFCFS cluster. To prepare the new node 1 Verify that the new node meets installation requirements. # .
Adding a node to a cluster Adding a node to a cluster Adding a node to a cluster using the SFCFS installer You can add a node using the –addnode option with the SFCFS installer. The SFCFS installer performs the following tasks: ■ Verifies that the node and the existing cluster meet communication requirements. ■ Verifies the products and packages installed on the new node. ■ Discovers the network interfaces on the new node and checks the interface settings.
282 Adding a node to a cluster Adding a node to a cluster Note: If you have configured server-based fencing on the existing cluster, make sure that the CP server does not contain entries for the new node. If the CP server already contains entries for the new node, remove these entries before adding the node to the cluster, otherwise the process may fail with an error. To add the node to an existing cluster using the installer 1 Log in as the root user on one of the nodes of the existing cluster.
Adding a node to a cluster Adding a node to a cluster 7 Enter y to configure a second private heartbeat link. Note: At least two private heartbeat links must be configured for high availability of the cluster. Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y) 8 Enter the name of the network interface that you want to configure as the second private heartbeat link.
284 Adding a node to a cluster Adding a node to a cluster 13 If the existing cluster uses server-based fencing in secure mode, provide responses to the following installer prompts. If you are using different root brokers for the CP server and the client SFCFS cluster, enter y to confirm the use of different root brokers. The installer attempts to establish trust between the new node being added to the cluster and the authentication broker of the CP server.
Adding a node to a cluster Adding a node to a cluster To add a node to a cluster using the Web-based installer 1 From the Task pull-down menu, select Add a Cluster node. From the product pull-down menu, select the product. Click the Next button. 2 In the System Names field enter a name of a node in the cluster where you plan to add the node. The installer program checks inter-system communications and compatibility. If the node fails any of the checks, review the error and fix the issue.
286 Adding a node to a cluster Adding a node to a cluster 4 If the existing cluster is configured to use server-based I/O fencing, configure server-based I/O fencing on the new node. See “Starting fencing on the new node” on page 291. 5 Start VCS. See “To start VCS on the new node” on page 292. 6 Configure CVM and CFS. See “Configuring CVM and CFS on the new node” on page 292. 7 If the ClusterService group is configured on the existing cluster, add the node to the group.
Adding a node to a cluster Adding a node to a cluster

To configure LLT and GAB on the new node

1 Edit the /etc/llthosts file on the existing nodes. Using vi or another text editor, add the line for the new node to the file. The file resembles:

0 galaxy
1 nebula
2 saturn

2 Copy the /etc/llthosts file from one of the existing systems over to the new system. The /etc/llthosts file must be identical on all nodes in the cluster.

3 Create an /etc/llttab file on the new system.
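No sample /etc/llttab is reproduced on this page. The sketch below is only an assumption about its typical layout on HP-UX; the node name, cluster ID, and link lines are placeholders. In practice, copy /etc/llttab from an existing cluster node and change only the set-node value so that the cluster ID and link definitions match the existing nodes.

set-node saturn
set-cluster 2
link lan1 /dev/lan:1 - ether - -
link lan2 /dev/lan:2 - ether - -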
288 Adding a node to a cluster Adding a node to a cluster

7 Create the Unique Universal Identifier file /etc/vx/.uuids/clusuuid on the new node:

# uuidconfig.pl -remsh -clus -copy \
-from_sys galaxy -to_sys saturn

8 Start the LLT, GAB, and ODM drivers on the new node:

# /sbin/init.d/llt start
# /sbin/init.d/gab start
# /sbin/init.d/vxfen start
# kcmodule vxgms=loaded
# kcmodule odm=loaded
# /sbin/init.d/odm stop
# /sbin/init.
Adding a node to a cluster Adding a node to a cluster Table 18-1 The command examples definitions (continued) Name Fully-qualified host name Function (FQHN) RB1 RB1.brokers.example.com The root broker for the cluster RB2 RB2.brokers.example.com Another root broker, not the cluster's RB To verify the existing security setup on the node 1 If node saturn is configured as an authentication broker (AB) belonging to a root broker, perform the following steps.
290 Adding a node to a cluster Adding a node to a cluster To configure the authentication broker on node saturn 1 Create a principal for node saturn on root broker RB1. Execute the following command on root broker RB1. # vssat addprpl --pdrtype root --domain domainname \ --prplname prplname --password password \ --prpltype service For example: # vssat addprpl --pdrtype root \ --domain root@RB1.brokers.example.com \ --prplname saturn.nodes.example.
Adding a node to a cluster Adding a node to a cluster 3 Add SFCFS and webserver principal to AB on node saturn. # vssat addprpl --pdrtype ab --domain HA_SERVICES --prplname webserver_VCS_prplname --password new_password --prpltype service --can_proxy 4 Create /etc/VRTSvcs/conf/config/.secure file. # touch /etc/VRTSvcs/conf/config/.secure Starting fencing on the new node Perform the following steps to start fencing on the new node.
292 Adding a node to a cluster Adding a node to a cluster To start VCS on the new node 1 Start VCS on the new node: # hastart VCS brings the CVM and CFS groups online. 2 Verify that the CVM and CFS groups are online: # hagrp -state Configuring CVM and CFS on the new node Modify the existing cluster configuration to configure CVM and CFS for the new node. To configure CVM and CFS on the new node 1 Make a backup copy of the main.cf file on the existing node, if not backed up in previous procedures.
Adding a node to a cluster Adding a node to a cluster 5 On the remaining nodes of the existing cluster, run the following commands: # /etc/vx/bin/vxclustadm -m vcs reinit # /etc/vx/bin/vxclustadm nidmap 6 Copy the configuration files from one of the nodes in the existing cluster to the new node: # rcp /etc/VRTSvcs/conf/config/main.cf \ saturn:/etc/VRTSvcs/conf/config/main.cf # rcp /etc/VRTSvcs/conf/config/CFSTypes.cf \ saturn:/etc/VRTSvcs/conf/config/CFSTypes.cf # rcp /etc/VRTSvcs/conf/config/CVMTypes.
294 Adding a node to a cluster Configuring server-based fencing on the new node Configuring server-based fencing on the new node Perform this step if your existing cluster uses server-based I/O fencing. To configure server-based fencing on the new node 1 Log in to each CP server as the root user.
Adding a node to a cluster Configuring server-based fencing on the new node 295 # /opt/VRTSvcs/bin/hastart -onenode 2 Verify that the VCS user and the domain are created on the new node: # /opt/VRTScps/bin/cpsat showcred | grep _HA_VCS_ # /opt/VRTScps/bin/cpsat listpd -t local | grep HA_SERVICES 3 Stop VCS if the VCS user and domain are created successfully on the new node: # /opt/VRTSvcs/bin/hastop 4 If the root broker for the CP server and the new node are different, run the following command to e
296 Adding a node to a cluster Updating the Storage Foundation for Databases (SFDB) repository after adding a node

Updating the Storage Foundation for Databases (SFDB) repository after adding a node

If you are using Database Checkpoints, Database FlashSnap, or other Storage Foundation for Databases (SFDB) tools in your configuration, update the SFDB repository to enable access for the new node after it is added to the cluster.
Chapter 19 Removing a node from Storage Foundation Cluster File System clusters This chapter includes the following topics: ■ About removing a node from a cluster ■ Removing a node from a cluster ■ Modifying the VCS configuration files on existing nodes ■ Removing the node configuration from the CP server ■ Removing security credentials from the leaving node ■ Updating the Storage Foundation for Databases (SFDB) repository after removing a node ■ Sample configuration file for removing a node
298 Removing a node from Storage Foundation Cluster File System clusters Removing a node from a cluster

■ Unmounting the File System and Cluster File System file systems that are not configured under VCS.
■ Uninstalling SFCFS from the node.
■ Modifying the VCS configuration files on the existing nodes.
■ Removing the node configuration from the CP server if it is configured.
■ Removing the security credentials from the node if it is part of a secure cluster.
Removing a node from Storage Foundation Cluster File System clusters Modifying the VCS configuration files on existing nodes 5 Uninstall SFCFS from the node using the SFCFS installer. # cd /opt/VRTS/install # ./uninstallsfcfs saturn The installer stops all SFCFS processes and uninstalls the SFCFS packages. 6 Modify the VCS configuration files on the existing nodes to remove references to the deleted node. See “Modifying the VCS configuration files on existing nodes” on page 299.
300 Removing a node from Storage Foundation Cluster File System clusters Modifying the VCS configuration files on existing nodes Editing the /etc/gabtab file Modify the following command in the /etc/gabtab file to reflect the number of systems after the node is removed: /sbin/gabconfig -c -nN where N is the number of remaining nodes in the cluster.
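For example, if the cluster shrinks from three nodes to two nodes, the entry changes as follows (the node counts are illustrative):

Before removing the node: /sbin/gabconfig -c -n3
After removing the node: /sbin/gabconfig -c -n2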
Removing a node from Storage Foundation Cluster File System clusters Removing the node configuration from the CP server 4 Remove the node from the SystemList attribute of the service group: # hagrp -modify cvm SystemList -delete saturn 5 Remove the node from the CVMNodeId attribute of the service group: # hares -modify cvm_clus CVMNodeId -delete saturn 6 If you have the other service groups (such as the database service group or the ClusterService group) that have the removed node in their configurati
302 Removing a node from Storage Foundation Cluster File System clusters Removing security credentials from the leaving node Note: The cpsadm command is used to perform the steps in this procedure. For detailed information about the cpsadm command, see the Veritas Storage Foundation Cluster File System Administrator's Guide. To remove the node configuration from the CP server 1 Log into the CP server as the root user.
Removing a node from Storage Foundation Cluster File System clusters Updating the Storage Foundation for Databases (SFDB) repository after removing a node 303 To remove the security credentials 1 Kill the /opt/VRTSat/bin/vxatd process. 2 Remove the root credentials on node saturn.
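The removal command itself is not reproduced here. A plausible form uses the vssat utility referenced earlier in this guide; the domain and principal values below are placeholders, and this sketch is an assumption to be verified against the full procedure:

# vssat deletecred --domain type:domainname --prplname prplname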
304 Removing a node from Storage Foundation Cluster File System clusters Sample configuration file for removing a node from the cluster

■ The database is managed by a VCS database agent. The agent starts, stops, and monitors the database.

Note: The following sample file shows in bold the configuration information that is removed when the node system3 is removed from the cluster.

include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.
Removing a node from Storage Foundation Cluster File System clusters Sample configuration file for removing a node from the cluster

App app1 (
    Critical = 0
    Sid @system1 = vrts1
    Sid @system2 = vrts2
    Sid @system3 = vrts3
)

CFSMount appdata_mnt (
    Critical = 0
    MountPoint = "/oradata"
    BlockDevice = "/dev/vx/dsk/appdatadg/appdatavol"
)

CVMVolDg appdata_voldg (
    Critical = 0
    CVMDiskGroup = appdatadg
    CVMVolume = { appdatavol }
    CVMActivation = sw
)

requires group cvm online local firm
app1 requires appdata_mnt
appda
306 Removing a node from Storage Foundation Cluster File System clusters Sample configuration file for removing a node from the cluster

    CVMTransport = gab
    CVMTimeout = 200
)

CVMVxconfigd cvm_vxconfigd (
    Critical = 0
    CVMVxconfigdArgs = { syslog }
)

vxfsckd requires cvm_clus
cvm_clus requires cvm_vxconfigd
Section 7 Setting up and configuring replicated global cluster ■ Chapter 20. Setting up a replicated global cluster ■ Chapter 21. Configuring a global cluster using VVR
Chapter 20 Setting up a replicated global cluster This chapter includes the following topics: ■ Replication in the SFCFS environment ■ Requirements for SFCFS global clusters ■ About setting up a global cluster in an SFCFS environment ■ Configuring a cluster at the primary site ■ Configuring a cluster at the secondary site ■ Configuring replication on clusters at both sites ■ Modifying the ClusterService group for global clusters ■ Defining the remote cluster and heartbeat objects ■ Configuring the VCS service groups for global clusters
310 Setting up a replicated global cluster Requirements for SFCFS global clusters ■ Veritas Volume Replicator (VVR), which provides host-based volume replication. Using VVR you can replicate data volumes on a shared disk group in SFCFS. ■ Supported hardware-based replication technologies. Using hardware-based replication you can replicate data from a primary array to a secondary array. ■ Using SFCFS with VVR you can run a fire drill to verify the disaster recovery capability of your configuration.
Setting up a replicated global cluster Requirements for SFCFS global clusters

Table 20-1 Supported replication options for SFCFS global clusters

Replication technology: Veritas Volume Replicator (VVR)
Supported modes: Asynchronous replication; Synchronous replication
Supported software: Host-based replication
Supporting agents: RVGShared, RVGSharedPri, RVGLogOwner

Replication technology: EMC SRDF (Supporting agent: SRDF)
Supported modes: Asynchronous replication; Synchronous replication
Supported software: All versions of Solutions Enabler

Hit
312 Setting up a replicated global cluster About setting up a global cluster in an SFCFS environment You can use the Veritas replication agents listed in the table above for global clusters that run SFCFS. The Veritas replication agents provide application failover and recovery support to your replication configuration. The agents provide this support for environments where data is replicated between clusters. VCS agents control the direction of replication.
Setting up a replicated global cluster Configuring a cluster at the primary site ■ Test the HA/DR configuration ■ Upon successful testing, bring the environment into production Some SFCFS HA/DR configuration tasks may require adjustments depending upon your particular starting point, environment, and configuration. Review the installation requirements and sample cluster configuration files for primary and secondary clusters.
314 Setting up a replicated global cluster Configuring a cluster at the primary site 4 Install and configure SFCFS. Prepare for your installation according to your configuration needs. For preparation: See “Prerequisites for Veritas Storage Foundation Cluster File System” on page 39. For installation: See “About the Web-based installer” on page 65. 5 For a multi-node cluster, configure I/O fencing. ■ Verify that the shared storage supports SCSI-3 reservations.
Setting up a replicated global cluster Configuring a cluster at the secondary site 10 Create the database on the file system you created in the previous step. 11 Configure the VCS service groups for the database. 12 Verify that all VCS service groups are online.
316 Setting up a replicated global cluster Configuring a cluster at the secondary site ■ Verify the shared storage on the secondary site supports SCSI-3 reservations. ■ Set up coordinator disks ■ Configure I/O fencing For instructions for setting up fencing: See “About planning to configure I/O fencing” on page 88. 6 For a single-node cluster, do not enable I/O fencing. Fencing will run in disabled mode. 7 Prepare systems and storage for a global cluster.
Setting up a replicated global cluster Configuring replication on clusters at both sites Create the directories for the CFS mount points as they are on the primary site. These will be used to host the database and control files when the failover occurs and the secondary is promoted to become the primary site. 2 Copy the init$ORACLE_SID.ora file from $ORACLE_HOME/dbs at the primary to $ORACLE_HOME/dbs at the secondary. 3 Create subdirectories for the database as you did on the primary site.
318 Setting up a replicated global cluster Defining the remote cluster and heartbeat objects ■ Validates the ability of the current configuration to support a global cluster environment. ■ Creates the components that enable the separate clusters, each of which contains a different set of GAB memberships, to connect and operate as a single unit. ■ Creates the ClusterService group, or updates an existing ClusterService group.
Setting up a replicated global cluster Defining the remote cluster and heartbeat objects To define the remote cluster and heartbeat 1 On the primary site, enable write access to the configuration: # haconf -makerw 2 Define the remote cluster and its virtual IP address. In this example, the remote cluster is clus2 and its IP address is 10.11.10.102: # haclus -add clus2 10.11.10.102 3 Complete step 1 and step 2 on the secondary site using the name and IP address of the primary cluster.
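Steps 4 through 6, which define the Icmp heartbeat, are not reproduced on this page. The following is a hedged sketch of what they typically contain, consistent with the heartbeat Icmp stanza shown in the main.cf example later in this section; treat the exact commands as an assumption and verify them against the full guide:

# hahb -add Icmp
# hahb -modify Icmp ClusterList clus2
# hahb -modify Icmp Arguments 10.11.10.102 -clus clus2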
320 Setting up a replicated global cluster Defining the remote cluster and heartbeat objects 7 Complete step 4-6 on the secondary site using appropriate values to define the cluster on the primary site and its IP as the remote cluster for the secondary cluster. 8 Verify cluster status with the hastatus -sum command on both clusters.
Setting up a replicated global cluster Defining the remote cluster and heartbeat objects

9 Display the global setup by executing the haclus -list command:

# haclus -list
clus1
clus2

Example of heartbeat additions to the main.cf file on the primary site:

.
.
remotecluster clus2 (
    ClusterAddress = "10.11.10.102"
)
heartbeat Icmp (
    ClusterList = { clus2 }
    Arguments @clus2 = { "10.11.10.102" }
)
system galaxy (
)
.
.

Example heartbeat additions to the main.cf file on the secondary site:

.
.
322 Setting up a replicated global cluster Configuring the VCS service groups for global clusters Configuring the VCS service groups for global clusters To configure VCS service groups for global clusters 1 2 Configure and enable global groups for databases and resources. ■ Configure VCS service groups at both sites. ■ Configure the replication agent at both sites.
Chapter 21 Configuring a global cluster using VVR This chapter includes the following topics: ■ About configuring global clustering using VVR ■ Setting up replication using VVR on the primary site ■ Setting up replication using VVR on the secondary site ■ Starting replication of application database volume ■ Configuring VCS to replicate the database volume using VVR ■ Using VCS commands on SFCFS global clusters ■ Using VVR commands on SFCFS global clusters About configuring global clustering
324 Configuring a global cluster using VVR Setting up replication using VVR on the primary site ■ Setting up both clusters as part of a global cluster environment. See “About setting up a global cluster in an SFCFS environment” on page 312. ■ Setting up replication for clusters at both sites. See “Setting up replication using VVR on the primary site” on page 324. See “Setting up replication using VVR on the secondary site” on page 327. ■ Starting replication of the database.
Configuring a global cluster using VVR Setting up replication using VVR on the primary site 325 To create the SRL volume on the primary site 1 On the primary site, determine the size of the SRL volume based on the configuration and amount of use. See the Veritas Volume Replicator documentation for details.
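The command that creates the SRL volume is not reproduced on this page. A hedged example, mirroring the equivalent secondary-site command shown later in this chapter (the disk group, volume name, size, and disk names are examples only):

# vxassist -g oradatadg make rac1_srl 1500M nmirror=2 disk4 disk6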
326 Configuring a global cluster using VVR Setting up replication using VVR on the primary site To review the status of replication objects on the primary site 1 Verify the volumes you intend to include in the group are active. 2 Review the output of the hagrp -state cvm command to verify that the CVM group is online.
Configuring a global cluster using VVR Setting up replication using VVR on the secondary site 327 Setting up replication using VVR on the secondary site To create objects for replication on the secondary site, use the vradmin command with the addsec option. To set up replication on the secondary site, perform the following tasks: ■ If you have not already done so, create a disk group to hold data volume, SRL, and RVG on the storage on the secondary site.
328 Configuring a global cluster using VVR Setting up replication using VVR on the secondary site

2 Create the volume for the SRL, using the same name and size as the equivalent volume on the primary site. Create the volume on different disks from the disks for the database volume, but in the same disk group that has the data volume:

# vxassist -g oradatadg make rac1_srl 1500M nmirror=2 disk4 disk6

Editing the /etc/vx/vras/.rdg files
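The explanation of the .rdg files is cut off in this extract. As general VVR background, offered here as an assumption rather than text from this guide, the /etc/vx/vras/.rdg file on each secondary node typically must contain the disk group ID of the primary disk group before the vradmin addsec command succeeds. A hedged sketch (the dgid value is hypothetical):

On the primary site, display the disk group ID:
# vxdg list oradatadg | grep dgid

On each node of the secondary cluster, append that ID to the .rdg file:
# echo "1033262374.22.galaxy" >> /etc/vx/vras/.rdg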
Configuring a global cluster using VVR Setting up replication using VVR on the secondary site

2 ■ The net mask is 255.255.255.0

# ifconfig lan0:1 plumb
# ifconfig lan0:1 inet 10.10.9.101 netmask 255.255.255.0
# ifconfig lan0:1 up

Use the same commands with appropriate values for the interface, IP address, and net mask on the secondary site. The example assumes for the secondary site:

3 ■ The public network interface is lan0:1
■ virtual IP address is 10.11.9.102
■ net mask is 255.255.255.
330 Configuring a global cluster using VVR Setting up replication using VVR on the secondary site ■ pri_host is the virtual IP address or resolvable virtual host name of the cluster on the primary site. For example: clus1_1 ■ sec_host is the virtual IP address or resolvable virtual host name of the cluster on the secondary site.
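Putting these arguments together, a hedged example that uses the disk group and RVG names from this chapter (the secondary host name clus2_1 is a placeholder):

# vradmin -g oradatadg addsec rac1_rvg clus1_1 clus2_1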
Configuring a global cluster using VVR Starting replication of application database volume

HostName: 10.190.99.197
RvgName: rac1_rvg
DgName: oradatadg
datavol_cnt: 1
vset_cnt: 0
srl: rac1_srl
RLinks:
name=rlk_clus1_1_rac1_rvg, detached=on, synchronous=off

Note: Once replication is started, the value of the detached flag changes from on to off.
332 Configuring a global cluster using VVR Starting replication of application database volume To start replication using automatic synchronization ◆ From the primary site, use the following command to automatically synchronize the RVG on the secondary site: vradmin -g disk_group -a startrep pri_rvg sec_host where: ■ disk_group is the disk group on the primary site that VVR will replicate ■ pri_rvg is the name of the RVG on the primary site ■ sec_host is the virtual host name for the secondary site
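For example, using the names from this chapter, where clus2 is the virtual host name defined for the secondary site:

# vradmin -g oradatadg -a startrep rac1_rvg clus2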
Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR # vradmin -g oradatadg -c rac1_ckpt syncrvg rac1_rvg clus2 2 To start replication after full synchronization, enter the following command: # vradmin -g oradatadg -c rac1_ckpt startrep rac1_rvg clus2 Verifying replication status Verify that replication is properly functioning.
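The verification command is not reproduced on this page. One way to check replication status with VVR is the vradmin repstatus command, shown here as a hedged example with the names used in this chapter:

# vradmin -g oradatadg repstatus rac1_rvg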
334 Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR About modifying the VCS configuration for replication The following resources must be configured or modified for replication: ■ Log owner group ■ RVG group ■ CVMVolDg resource ■ RVGSharedPri resource ■ application database service group For more information on service replication resources: See the Veritas Cluster Server Agents for Veritas Volume Replicator Configuration Guide.
Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR For a detailed description of the CVMVolDg agent in this guide: See “ CVMVolDg agent” on page 429. RVGSharedPri resource Add the RVGSharedPri resource to the existing application database service group. The CVMVolDg resource must be removed from the existing application database service group.
336 Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR 4 Use vi or another text editor to edit the main.cf file. Review the sample configuration file after the SFCFS installation. Add a failover service group using the appropriate values for your cluster and nodes. Include the following resources: ■ RVGLogowner resource.
Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR 5 Add the RVG service group using the appropriate values for your cluster and nodes.
338 Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR

oradata_mnt requires oradata_voldg

The following is an example of an application database service group configured for replication:

group database_grp (
    SystemList = { galaxy = 0, nebula = 1 }
    ClusterList = { clus1 = 0, clus2 = 1 }
    Parallel = 1
    ClusterFailOverPolicy = Manual
    Authority = 1
    AutoStartList = { galaxy,nebula }
)

CFSMount oradata_mnt (
    MountPoint = "/oradata"
    BlockDevice = "/dev/vx/dsk/oradata
Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR Modifying the VCS Configuration on the Secondary Site The following are highlights of the procedure to modify the existing VCS configuration on the secondary site: ■ Add the log owner and RVG service groups. ■ Add a service group to manage the application database and the supporting resources.
340 Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR

group rlogowner (
    SystemList = { mercury = 0, jupiter = 1 }
    AutoStartList = { mercury, jupiter }
)

IP logowner_ip (
    Device = lan0
    Address = "10.11.9.102"
    NetMask = "255.255.255.0"
)

NIC nic (
    Device = lan0
    NetworkHosts = { "10.10.8.
Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR 6 Add the RVG service group using the appropriate values for your cluster and nodes.
342 Configuring a global cluster using VVR Configuring VCS to replicate the database volume using VVR

group database_grp (
    SystemList = { mercury = 0, jupiter = 1 }
    ClusterList = { clus2 = 0, clus1 = 1 }
    Parallel = 1
    OnlineRetryInterval = 300
    ClusterFailOverPolicy = Manual
    Authority = 1
    AutoStartList = { mercury, jupiter }
)

RVGSharedPri ora_vvr_shpri (
    RvgResourceName = racdata_rvg
    OnlineRetryLimit = 0
)

CFSMount oradata_mnt (
    MountPoint = "/oradata"
    BlockDevice = "/dev/vx/dsk/oradatadg/racdb_vol"
    Criti
Configuring a global cluster using VVR Using VCS commands on SFCFS global clusters 10 Stop and restart VCS. # hastop -all -force Wait for port h to stop on all nodes, and then restart VCS with the new configuration on all primary nodes: # hastart 11 Verify that VCS brings all resources online. On one node, enter the following command: # hagrp -display The application, RVG, and CVM groups are online on both nodes of the primary site. The RVGLogOwner group is online on one node of the cluster.
344 Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters ■ Migration of the role of the primary site to the remote site ■ Takeover of the primary site role by the secondary site About migration and takeover of the primary site role Migration is a planned transfer of the role of primary replication host from one cluster to a remote cluster. This transfer enables the application on the remote cluster to actively use the replicated data.
Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters To migrate the role of primary site to the remote site 1 From the primary site, use the following command to take the Oracle service group offline on all nodes. # hagrp -offline database_grp -any Wait for VCS to take all Oracle service groups offline on the primary site. 2 Verify that the RLINK between the primary and secondary is up to date.
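A hedged example of checking the RLINK, using the vxrlink status command with the RLINK name that appears in the vradmin printrvg output earlier in this chapter (substitute the disk group and RLINK names for your site):

# vxrlink -g oradatadg status rlk_clus1_1_rac1_rvg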
346 Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters To migrate the role of new primary site back to the original primary site 1 Make sure that all CRS resources are online, and switch back the group database_grp to the original primary site. Issue the following command on the remote site: # hagrp -offline database_grp -any 2 Verify that the RLINK between the primary and secondary is up to date.
Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters

■ Disconnect
■ Replica

Disaster
When the cluster on the primary site is inaccessible and appears dead, the administrator declares the failure type as "disaster." For example, fire may destroy a data center, including the primary site and all data in the volumes. After making this declaration, the administrator can bring the service group online on the secondary site, which now has the role of "primary" site.
348 Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters Replica In the rare case where the current primary site becomes inaccessible while data is resynchronized from that site to the original primary site using the fast fail back method, the administrator at the original primary site may resort to using a data snapshot (if it exists) taken before the start of the fast fail back operation. In this case, the failure type is designated as "replica".
Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters To resynchronize after an outage 1 On the original primary site, create a snapshot of the RVG before resynchronizing it in case the current primary site fails during the resynchronization. Assuming the disk group is data_disk_group and the RVG is rac1_rvg, type: # vxrvg -g data_disk_group -F snapshot rac1_rvg See the Veritas Volume Replicator Administrator’s Guide for details on RVG snapshots. 2 Resynchronize the RVG.
350 Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters Troubleshooting CVM and VVR components of SFCFS The following topic is useful for troubleshooting the VVR component of SFCFS. Updating the rlink If the rlink is not up to date, use the hares -action command with the resync action token to synchronize the RVG.
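For example, where ora_vvr_shpri is the RVGSharedPri resource from the sample service group earlier in this chapter and galaxy is a placeholder system name:

# hares -action ora_vvr_shpri resync -sys galaxy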
Configuring a global cluster using VVR Using VVR commands on SFCFS global clusters RVGPrimary agent The RVGPrimary agent attempts to migrate or take over a Secondary to a Primary following an application failover. The agent has no actions associated with the offline and monitor routines.
Section 8 Uninstallation of Storage Foundation Cluster File System ■ Chapter 22. Uninstalling Storage Foundation Cluster File System
Chapter 22 Uninstalling Storage Foundation Cluster File System This chapter includes the following topics: ■ Shutting down cluster operations ■ Disabling the agents on a system ■ Removing the Replicated Data Set ■ Uninstalling SFCFS with the Veritas Web-based installer ■ Uninstalling SFCFS depots using the script-based installer ■ Uninstalling Storage Foundation Cluster File System ■ Removing license files (Optional) ■ Removing the CP server configuration using the removal script ■ Removing the Storage Foundation for Databases (SFDB) repository after removing the product
356 Uninstalling Storage Foundation Cluster File System Disabling the agents on a system To take all service groups offline and shutdown VCS ◆ Use the hastop command as follows: # /opt/VRTSvcs/bin/hastop -all Warning: Do not use the -force option when executing hastop. This will leave all service groups online and shut down VCS, causing undesired results during uninstallation of the packages. Disabling the agents on a system This section explains how to disable a VCS agent for VVR on a system.
Uninstalling Storage Foundation Cluster File System Removing the Replicated Data Set 3 Stop the agent on the system by entering: # haagent -stop agent_name -sys system_name When you get the message Please look for messages in the log file, check the file /var/VRTSvcs/log/engine_A.log for a message confirming that each agent has stopped. You can also use the ps command to confirm that the agent is stopped. 4 Remove the system from the SystemList of the service group.
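A hedged example of step 4, following the hagrp syntax used elsewhere in this guide (the group and system names are placeholders):

# hagrp -modify service_group SystemList -delete system_name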
358 Uninstalling Storage Foundation Cluster File System Uninstalling SFCFS with the Veritas Web-based installer 3 Remove the Secondary from the RDS by issuing the following command on any host in the RDS: # vradmin -g diskgroup delsec local_rvgname sec_hostname The argument local_rvgname is the name of the RVG on the local host and represents its RDS. The argument sec_hostname is the name of the Secondary host as displayed in the output of the vradmin printrvg command.
Uninstalling Storage Foundation Cluster File System Uninstalling SFCFS with the Veritas Web-based installer To uninstall SFCFS 1 Perform the required steps to save any data that you wish to preserve. For example, take back-ups of configuration files. 2 In an HA configuration, stop VCS processes on either the local system or all systems. To stop VCS processes on the local system: # hastop -local To stop VCS processes on all systems: # hastop -all 3 Start the Web-based installer.
360 Uninstalling Storage Foundation Cluster File System Uninstalling SFCFS depots using the script-based installer Uninstalling SFCFS depots using the script-based installer Use the following procedure to remove SFCFS products. Not all depots may be installed on your system depending on the choices that you made when you installed the software. See “About configuring secure shell or remote shell communication modes before installing products” on page 405.
Uninstalling Storage Foundation Cluster File System Uninstalling Storage Foundation Cluster File System 8 The uninstall script prompts for the system name. Enter one or more system names, separated by a space, from which to uninstall SFCFS, for example, host1: Enter the system names separated by spaces from which to uninstall Storage Foundation: host1 9 The uninstall script prompts you to select Storage Foundation Cluster File System or Storage Foundation Cluster File System High Availability.
362 Uninstalling Storage Foundation Cluster File System Removing license files (Optional) 5 Enter the system names to uninstall SFCFS. Enter the system names separated by spaces on which to uninstall SFCFS: system01 system02 6 Enter y to stop the SFCFS process: Do you want to stop SFCFS processes now? [y, n, q] (y) 7 After the uninstall completes, the installer displays the location of the log and summary files. If required, view the files to confirm the status of the removal.
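The license removal procedure itself, which begins in this section, is not reproduced in this extract. As a hedged sketch under the assumption that the keys are stored in the standard Veritas license directory (verify the keys with vxlicrep before deleting anything, and keep a backup):

# vxlicrep
# cd /etc/vx/licenses/lic
# ls -a
Remove only the key files that are no longer required.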
Uninstalling Storage Foundation Cluster File System Removing the CP server configuration using the removal script 363 A configuration utility that is part of the VRTScps package is used to remove the CP server configuration.
364 Uninstalling Storage Foundation Cluster File System Removing the CP server configuration using the removal script 4 A warning appears and prompts you to confirm the action to unconfigure the Coordination Point Server. Enter "y" to proceed. WARNING: Unconfiguring Coordination Point Server stops the vxcpserv process. VCS clusters using this server for coordination purpose will have one less coordination point.
Uninstalling Storage Foundation Cluster File System Removing the Storage Foundation for Databases (SFDB) repository after removing the product 6 365 You are then prompted to delete the CP server database. Enter "y" to delete the database. For example: Do you want to delete the CP Server database? (y/n) (Default:n) : 7 Enter "y" at the prompt to confirm the deletion of the CP server database. Warning: This database won't be available if CP server is reconfigured on the cluster.
366 Uninstalling Storage Foundation Cluster File System Removing the Storage Foundation for Databases (SFDB) repository after removing the product To remove the SFDB repository 1 Change directories to the location of the local lookup information for the Oracle SID. For example: # cd /var/vx/vxdba/$ORACLE_SID 2 Identify the SFDB repository file and any associated links: For example: # ls -al lrwxrwxrwx 1 oracle oinstall /ora_data1/TEST/.sfdb_rept 26 Jul 21 13:58 .
Section 9 Installation reference ■ Appendix A. Installation scripts ■ Appendix B. Response files ■ Appendix C. Configuring I/O fencing using a response file ■ Appendix D. Configuring the secure shell or the remote shell for communications ■ Appendix E. Storage Foundation Cluster File System components ■ Appendix F. High availability agent information ■ Appendix G. Troubleshooting information ■ Appendix H. Troubleshooting cluster installation ■ Appendix I.
Appendix A Installation scripts This appendix includes the following topics: ■ About installation scripts ■ Installation script options About installation scripts Veritas Storage Foundation and High Availability Solutions 5.1 SP1 provides several installation scripts. An alternative to the installer script is to use a product-specific installation script.
370 Installation scripts Installation script options To use the installation script, enter the script name at the prompt. For example, to install Veritas Storage Foundation, type ./installsf at the prompt. Installation script options Table A-1 shows command line options for the installation script. For an initial install or upgrade, options are not usually required. The installation script options apply to all Veritas Storage Foundation product scripts, except where otherwise noted.
Installation scripts Installation script options Table A-1 Available command line options (continued) Command Line Option Function -copyinstallscripts Use this option when you manually install products and want to use the installation scripts that are stored on the system to perform product configuration, uninstallation, and licensing tasks without the product media. Use this option to copy the installation scripts to an alternate rootpath when you use it with the -rootpath option.
372 Installation scripts Installation script options Table A-1 Available command line options (continued) Command Line Option Function –hostfile full_path_to_file Specifies the location of a file that contains a list of hostnames on which to install. –ignorepatchreqs The -ignorepatchreqs option is used to allow installation or upgrading even if the prerequisite depots or patches are missed on the system. –install The -install option is used to install products on systems.
Installation scripts Installation script options Table A-1 Available command line options (continued) Command Line Option Function –patchpath patch_path Designates the path of a directory that contains all patches to install. The directory is typically an NFS-mounted location and must be accessible by all specified installation systems. –pkginfo Displays a list of depots and the order of installation in a human-readable format. This option only applies to the individual product installation scripts.
374 Installation scripts Installation script options Table A-1 Available command line options (continued) Command Line Option Function –requirements The -requirements option displays required OS version, required patches, file system space, and other system requirements in order to install the product. –responsefile response_file Automates installation and configuration by using system and configuration information stored in a specified file instead of prompting for information.
Installation scripts Installation script options Table A-1 Available command line options (continued) Command Line Option Function –stop Stops the daemons and processes for the specified product. –tmppath tmp_path Specifies a directory other than /var/tmp as the working directory for the installation scripts. This destination is where initial logging is performed and where depots are copied on remote systems before installation.
Appendix B Response files This appendix includes the following topics: ■ About response files ■ Installing SFCFS using response files ■ Configuring SFCFS using response files ■ Upgrading SFCFS using response files ■ Uninstalling SFCFS using response files ■ Syntax in the response file ■ Response file variables to install, upgrade, or uninstall Storage Foundation Cluster File System ■ Response file variables to configure Storage Foundation Cluster File System ■ Sample response file for SFCFS install ■ Sample response file for SFCFS configure
378 Response files Installing SFCFS using response files You can generate a response file using the makeresponsefile option. See “Installation script options” on page 370. Installing SFCFS using response files Typically, you can use the response file that the installer generates after you perform SFCFS installation on one cluster to install SFCFS on other clusters. You can also create a response file using the -makeresponsefile option of the installer.
Response files Upgrading SFCFS using response files 379 To configure SFCFS using response files 1 Make sure the SFCFS depots are installed on the systems where you want to configure SFCFS. 2 Copy the response file to one of the cluster systems where you want to configure SFCFS. 3 Edit the values of the response file variables as necessary. To configure optional features, you must define appropriate values for all the response file variables that are related to the optional feature.
380 Response files Uninstalling SFCFS using response files 5 Mount the product disk, and navigate to the folder that contains the installation program. 6 Start the upgrade from the system to which you copied the response file. For example: # ./installer -responsefile /tmp/response_file # ./installsfcfs -responsefile /tmp/response_file Where /tmp/response_file is the response file’s full path name.
Response files Response file variables to install, upgrade, or uninstall Storage Foundation Cluster File System $CFG{List_variable}=["value", "value", "value"]; Response file variables to install, upgrade, or uninstall Storage Foundation Cluster File System Table B-1 lists the response file variables that you can define to configure SFCFS. Table B-1 Response file variables specific to installing, upgrading, or uninstalling SFCFS Variable Description CFG{opt}{install} Installs SFCFS depots.
382 Response files Response file variables to install, upgrade, or uninstall Storage Foundation Cluster File System Table B-1 Response file variables specific to installing, upgrading, or uninstalling SFCFS (continued) Variable Description CFG{opt}{keyfile} Defines the location of an ssh keyfile that is used to communicate with all remote systems. List or scalar: scalar Optional or required: optional CFG{at_rootdomain} Defines the name of the system where the root broker is installed.
Response files Response file variables to configure Storage Foundation Cluster File System Table B-1 Response file variables specific to installing, upgrading, or uninstalling SFCFS (continued) Variable Description CFG{donotinstall} {depot} Instructs the installation to not install the optional depots in the list. List or scalar: list Optional or required: optional CFG{donotremove} {depot} Instructs the uninstallation to not remove the optional depots in the list.
384 Response files Response file variables to configure Storage Foundation Cluster File System Table B-2 Response file variables specific to configuring Storage Foundation Cluster File System Variable List or Scalar Description CFG{opt}{configure} Scalar Performs the configuration if the depots are already installed. (Required) CFG{accepteula} Scalar Specifies whether you agree with EULA.pdf on the media. (Required) CFG{systems} List List of systems on which the product is to be configured.
Response files Response file variables to configure Storage Foundation Cluster File System Table B-2 Response file variables specific to configuring Storage Foundation Cluster File System (continued) Variable List or Scalar Description $CFG{uploadlogs} Scalar Defines Boolean value 0 or 1. The value 1 indicates that the installation logs are uploaded to the Symantec Web site. The value 0 indicates that the installation logs are not uploaded to the Symantec Web site.
386 Response files Response file variables to configure Storage Foundation Cluster File System Table B-3 Response file variables specific to configuring a basic Storage Foundation Cluster File System cluster (continued) Variable List or Scalar Description $CFG{fencingenabled} Scalar In a Storage Foundation Cluster File System configuration, defines if fencing is enabled. Valid values are 0 or 1.
Response files Response file variables to configure Storage Foundation Cluster File System Table B-5 lists the response file variables that specify the required information to configure LLT over UDP. Table B-5 Response file variables specific to configuring LLT over UDP Variable List or Scalar Description CFG{lltoverudp}=1 Scalar Indicates whether to configure heartbeat link using LLT over UDP.
388 Response files Response file variables to configure Storage Foundation Cluster File System Table B-5 Response file variables specific to configuring LLT over UDP (continued) Variable List or Scalar Description CFG{vcs_udplink_netmask} Scalar Stores the netmask (prefix for IPv6) that the heartbeat link uses on node1. {} You can have four heartbeat links and for this response file variable can take values 1 to 4 for the respective heartbeat links.
Response files Response file variables to configure Storage Foundation Cluster File System Table B-7 Response file variables specific to configuring Storage Foundation Cluster File System cluster in secure mode Variable List or Scalar Description CFG{at_rootdomain} Scalar Defines the name of the system where the root broker is installed. (Optional) CFG{at_rootbroker} Scalar Defines the root broker's name.
390 Response files Response file variables to configure Storage Foundation Cluster File System Table B-8 Response file variables specific to configuring VCS users Variable List or Scalar Description CFG{vcs_userenpw} List List of encoded passwords for VCS users. The value in the list can be "Administrators Operators Guests." Note: The order of the values for the vcs_userenpw list must match the order of the values in the vcs_username list.
Response files Response file variables to configure Storage Foundation Cluster File System Table B-9 Response file variables specific to configuring VCS notifications using SMTP (continued) Variable List or Scalar Description CFG{vcs_smtprsev} List Defines the minimum severity level of messages (Information, Warning, Error, and SevereError) that listed SMTP recipients are to receive. Note that the ordering of severity levels must match that of the addresses of SMTP recipients.
392 Response files Sample response file for SFCFS install Table B-11 Response file variables specific to configuring Storage Foundation Cluster File System global clusters Variable List or Scalar Description CFG{vcs_gconic} Scalar Defines the NIC for the Virtual IP that the Global Cluster Option uses. You can enter ‘all’ as a system value if the same NIC is used on all systems. {system} (Optional) CFG{vcs_gcovip} Scalar Defines the virtual IP address to that the Global Cluster Option uses.
Response files Sample response file for SFCFS configure 393 Sample response file for SFCFS configure The following example shows a response file for configuring Storage Foundation Cluster File System.
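The sample itself is not reproduced in this extract. The following is a minimal, hypothetical sketch assembled only from variables documented in the tables above; the system names and values are placeholders, and a real response file would also define the product and the cluster name and ID variables described in the configuration tables:

$CFG{accepteula}=1;
$CFG{opt}{configure}=1;
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{fencingenabled}=0;
$CFG{uploadlogs}=0;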
Appendix C Configuring I/O fencing using a response file This appendix includes the following topics: ■ Configuring I/O fencing using response files ■ Response file variables to configure disk-based I/O fencing ■ Sample response file for configuring disk-based I/O fencing ■ Response file variables to configure server-based I/O fencing ■ Sample response file for configuring server-based I/O fencing ■ Response file variables to configure non-SCSI3 server-based I/O fencing ■ Sample response file for configuring non-SCSI3 server-based I/O fencing
396 Configuring I/O fencing using a response file Response file variables to configure disk-based I/O fencing 3 Copy the response file to one of the cluster systems where you want to configure I/O fencing. See “Sample response file for configuring disk-based I/O fencing” on page 397. See “Sample response file for configuring server-based I/O fencing” on page 400. 4 Edit the values of the response file variables as necessary. See “Response file variables to configure disk-based I/O fencing” on page 396.
Configuring I/O fencing using a response file Sample response file for configuring disk-based I/O fencing Table C-1 Response file variables specific to configuring disk-based I/O fencing (continued) Variable List or Scalar Description CFG {vxfen_config _fencing_mechanism} Scalar Specifies the I/O fencing mechanism. This variable is not required if you had configured fencing in disabled mode.
398 Configuring I/O fencing using a response file Response file variables to configure server-based I/O fencing See “Response file variables to configure disk-based I/O fencing” on page 396.
Configuring I/O fencing using a response file Response file variables to configure server-based I/O fencing Disk-based fencing with the disk group to be created means that the disk group does not exist yet, but will be created with the disks mentioned as coordination points. Table C-2 lists the fields in the response file that are relevant for server-based customized I/O fencing.
400 Configuring I/O fencing using a response file Sample response file for configuring server-based I/O fencing Table C-2 CP server response file definitions (continued) Response file field Definition fencing_cpc_diffab This response field indicates whether the CP servers and SFCFS clusters use different root brokers. Entering a "1" indicates that they are using different root brokers. Entering a "0" indicates that they are not using different root brokers.
Configuring I/O fencing using a response file Response file variables to configure non-SCSI3 server-based I/O fencing

$CFG{fencing_cpc_config_cpagent}=0;
$CFG{fencing_cpc_cps}=[ qw(10.200.117.145) ];
$CFG{fencing_cpc_dgname}="vxfencoorddg";
$CFG{fencing_cpc_diffab}=0;
$CFG{fencing_cpc_disks}=[ qw(emc_clariion0_37 emc_clariion0_13) ];
$CFG{fencing_cpc_mechanism}="raw";
$CFG{fencing_cpc_ncps}=3;
$CFG{fencing_cpc_ndisks}=2;
$CFG{fencing_cpc_ports}{"10.200.117.
402 Configuring I/O fencing using a response file Response file variables to configure non-SCSI3 server-based I/O fencing Table C-3 Non-SCSI3 server-based I/O fencing response file definitions (continued) Response file field Definition CFG {fencing_cpc_config_cpagent} Enter '1' or '0' depending upon whether you want to configure the Coordination Point agent using the installer or not. Enter "0" if you do not want to configure the Coordination Point agent using the installer.
Configuring I/O fencing using a response file Sample response file for configuring non-SCSI3 server-based I/O fencing 403 Table C-3 Non-SCSI3 server-based I/O fencing response file definitions (continued) Response file field Definition CFG {fencing_cpc_security} This field indicates whether security is enabled or not. Entering a "1" indicates that security is enabled. Entering a "0" indicates that security has not been enabled.
Appendix D Configuring the secure shell or the remote shell for communications This appendix includes the following topics: ■ About configuring secure shell or remote shell communication modes before installing products ■ Configuring and enabling ssh ■ Enabling remsh About configuring secure shell or remote shell communication modes before installing products Establishing communication between nodes is required to install Veritas software from a remote system, or to install and configure a cluster.
406 Configuring the secure shell or the remote shell for communications Configuring and enabling ssh Configuring and enabling ssh The ssh program enables you to log into and execute commands on a remote system. ssh enables encrypted communications and an authentication process between two untrusted hosts over an insecure network. In this procedure, you first create a DSA key pair. From the key pair, you append the public key from the source system to the authorized_keys file on the target systems.
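The first steps of this procedure are not reproduced in this extract. Generating the DSA key pair on the source system is typically done as follows; this is a hedged example, and you should accept the default file location when prompted:

system1 # ssh-keygen -t dsa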
Configuring the secure shell or the remote shell for communications Configuring and enabling ssh 4 When the program asks you to enter the passphrase, press the Enter key twice. Enter passphrase (empty for no passphrase): Do not enter a passphrase. Press Enter. Enter same passphrase again: Press Enter again. 5 Make sure the /.ssh directory is on all the target installation systems (system2 in this example).
408 Configuring the secure shell or the remote shell for communications Configuring and enabling ssh 3 From the source system (system1), move the public key to a temporary file on the target system (system2). Use the secure file transfer program. In this example, the file name id_dsa.pub in the root directory is the name for the temporary file for the public key.
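A hedged example of the transfer using sftp, where the target file name matches the temporary name described above:

system1 # sftp system2
sftp> put /.ssh/id_dsa.pub /id_dsa.pub
sftp> quit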
Configuring the secure shell or the remote shell for communications Configuring and enabling ssh 8 To begin the ssh session on the target system (system2 in this example), type the following command on system1: system1 # ssh system2 Enter the root password of system2 at the prompt: password: 9 After you log in to system2, enter the following command to append the id_dsa.pub file to the authorized_keys file: system2 # cat /id_dsa.pub >> /.ssh/authorized_keys 10 After the id_dsa.
410 Configuring the secure shell or the remote shell for communications Enabling remsh To verify that you can connect to a target system 1 On the source system (system1), enter the following command: system1 # ssh -l root system2 uname -a where system2 is the name of the target system. 2 The command should execute from the source system (system1) to the target system (system2) without the system requesting a passphrase or password. 3 Repeat this procedure for each target system.
Appendix E Storage Foundation Cluster File System components This appendix includes the following topics: ■ Veritas Storage Foundation Cluster File System installation depots ■ Veritas Cluster Server installation depots ■ Veritas Cluster File System installation depots ■ Veritas Storage Foundation obsolete and reorganized installation depots Veritas Storage Foundation Cluster File System installation depots Table E-1 shows the depot name and contents for each English language depot for Veritas Sto
412 Storage Foundation Cluster File System components Veritas Storage Foundation Cluster File System installation depots Table E-1 Veritas Storage Foundation Cluster File System depots depots Contents Configuration VRTSaslapm Veritas Array Support Library (ASL) Minimum and Array Policy Module(APM) binaries Required for the support and compatibility of various storage arrays.
Storage Foundation Cluster File System components Veritas Cluster Server installation depots Table E-1 Veritas Storage Foundation Cluster File System depots (continued) depots Contents Configuration VRTSodm ODM Driver for VxFS Recommended Veritas Extension for Oracle Disk Manager is a custom storage interface designed specifically for Oracle9i and 10g. Oracle Disk Manager allows Oracle 9i and 10g to improve performance and manage system bandwidth.
414 Storage Foundation Cluster File System components Veritas Cluster File System installation depots Table E-2 VCS installation depots depot Contents Configuration VRTSgab Veritas Cluster Server group membership and atomic broadcast services Minimum VRTSllt Veritas Cluster Server low-latency transport Minimum VRTSamf Veritas Cluster Server Asynchronous Monitoring Framework Minimum VRTSvcs Veritas Cluster Server Minimum VRTSvcsag Veritas Cluster Server Bundled Agents Minimum VRTSvxfen
Storage Foundation Cluster File System components Veritas Storage Foundation obsolete and reorganized installation depots Table E-3 CFS installation depots depot Contents Configuration VRTScavf Veritas Cluster Server Agents for Minimum Storage Foundation Cluster File System VRTSglm Veritas Group Lock Manager for Minimum Storage Foundation Cluster File System VRTSgms Veritas Group Messaging Services for Recommended Storage Foundation Cluster File System Veritas Storage Foundation obsolete and reo
416 Storage Foundation Cluster File System components Veritas Storage Foundation obsolete and reorganized installation depots Table E-4 Veritas Storage Foundation obsolete and reorganized depots (continued) depot Description VRTSweb Obsolete Product depots VRTSacclib Obsolete The following information is for installations, upgrades, and uninstallations using the script- or Web-based installer. For fresh installations VRTSacclib is not installed. ■ For upgrades, VRTSacclib is not uninstalled.
Storage Foundation Cluster File System components Veritas Storage Foundation obsolete and reorganized installation depots Table E-4 Veritas Storage Foundation obsolete and reorganized depots (continued) depot Description VRTSvcsvr Included in VRTSvcs VRTSvdid Obsolete VRTSvmman Included in mainpkg VRTSvmpro Included in VRTSsfmh VRTSvrpro Included in VRTSob VRTSvrw Obsolete VRTSvxmsa Obsolete Documentation All Documentation depots obsolete 417
Appendix F High availability agent information This appendix includes the following topics: ■ About agents ■ Enabling and disabling intelligent resource monitoring ■ CVMCluster agent ■ CVMVxconfigd agent ■ CVMVolDg agent ■ CFSMount agent ■ CFSfsckd agent About agents An agent is defined as a process that starts, stops, and monitors all configured resources of a type, and reports their status to Veritas Cluster Server (VCS). Agents have both entry points and attributes.
420 High availability agent information Enabling and disabling intelligent resource monitoring Attributes are either optional or required, although sometimes the attributes that are optional in one configuration may be required in other configurations. Many optional attributes have predefined or default values, which you should change as required. A variety of internal use only attributes also exist. Do not modify these attributes—modifying them can lead to significant problems for your clusters.
High availability agent information Enabling and disabling intelligent resource monitoring To enable intelligent resource monitoring 1 Make the VCS configuration writable. # haconf -makerw 2 Run the following command to enable intelligent resource monitoring.
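The command for step 2 is not reproduced on this page. A plausible form, assuming the resource type is CVMVxconfigd and using the IMF keys described in Table F-4 later in this appendix (the key values shown are placeholders; verify the exact syntax and values against your VCS documentation):

# hatype -modify CVMVxconfigd IMF -update Mode 3 MonitorFreq 5 RegisterRetryLimit 3
# haconf -dump -makero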
To disable intelligent resource monitoring

1   Make the VCS configuration writable:
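    A minimal sketch of the disable procedure, mirroring the enable procedure above; the resource type name is illustrative, and Mode 0 is assumed to turn intelligent resource monitoring off:

    # haconf -makerw
    # hatype -modify CFSMount IMF -update Mode 0
    # haconf -dump -makero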
CVMCluster agent

The CVMCluster agent controls system membership on the cluster port that is associated with Veritas Volume Manager (VxVM).

The CVMCluster agent performs the following functions:
■ Joins a node to the CVM cluster port.
■ Removes a node from the CVM cluster port.
■ Monitors the node's cluster membership state.

Entry points for CVMCluster agent
Table F-1 describes the entry points used by the CVMCluster agent.
Table F-2    CVMCluster agent attributes (continued)

CVMTransport (string-scalar)
    Specifies the cluster messaging mechanism.
    Default = gab
    Note: Do not change this value.

PortConfigd (integer-scalar)
    The port number that is used by CVM for vxconfigd-level communication.

PortKmsgd (integer-scalar)
    The port number that is used by CVM for kernel-level communication.
CVMCluster agent sample configuration
The following is an example definition for the CVMCluster service group:

CVMCluster cvm_clus (
        Critical = 0
        CVMClustName = clus1
        CVMNodeId = { galaxy = 0, nebula = 1 }
        CVMTransport = gab
        CVMTimeout = 200
        )

CVMVxconfigd agent
The CVMVxconfigd agent starts and monitors the vxconfigd daemon.
Table F-3    CVMVxconfigd entry points (continued)

imf_init
    Initializes the agent to interface with the AMF kernel module. This function runs when the agent starts up.

imf_getnotification
    Gets notification about the vxconfigd process state. This function runs after the agent initializes with the AMF kernel module. This function continuously waits for notification.
Table F-4    CVMVxconfigd agent attribute (continued)

IMF (integer-association)
    This resource-type level attribute determines whether the CVMVxconfigd agent must perform intelligent resource monitoring. You can also override the value of this attribute at resource-level. This attribute includes the following keys:
    ■ Mode: Define this attribute to enable or disable intelligent resource monitoring.
Table F-4    CVMVxconfigd agent attribute (continued)

IMF (integer-association), continued:
    ■ RegisterRetryLimit: This key determines the number of times the agent must retry registration for a resource. If the agent cannot register the resource within the limit that is specified, then intelligent monitoring is disabled until the resource state changes or the value of the Mode key changes. Default: 3.
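CVMVxconfigd agent sample configuration
A minimal sketch in the style of the CVMCluster sample shown earlier; the resource name and the CVMVxconfigdArgs value are illustrative:

CVMVxconfigd cvm_vxconfigd (
        Critical = 0
        CVMVxconfigdArgs = { syslog }
        )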
CVMVolDg agent

The CVMVolDg agent performs the following functions:
■ Imports the shared disk group from the CVM master node
■ Starts the volumes and volume sets in the disk group
■ Monitors the disk group, volumes, and volume sets
■ Optionally, deports the disk group when the dependent applications are taken offline. The agent deports the disk group only if the appropriate attribute is set.

Configure the CVMVolDg agent for each disk group used by an Oracle service group.
Table F-5    CVMVolDg agent entry points (continued)

Monitor
    Determines whether the disk group, the volumes, and the volume sets are online. The agent takes a volume set offline if the file system metadata volume of a volume set is discovered to be offline in a monitor cycle.
Table F-6    CVMVolDg agent attributes (continued)

CVMVolumeIoTest (optional) (string-keylist)
    List of volumes and volume sets that will be periodically polled to test availability. The polling is in the form of 4 KB reads every monitor cycle to a maximum of 10 of the volumes or volume sets in the list. For volume sets, reads are done on a maximum of 10 component volumes in each volume set.
CVMVolDg agent type definition
The CVMVolDg type definition is in the CVMTypes.cf file.
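CVMVolDg agent sample configuration
A sketch in the style of the CVMCluster sample above; the disk group, volume, and system names are illustrative, and the attribute names other than CVMVolumeIoTest (which appears in Table F-6) are assumptions based on the agent's commonly documented attributes:

CVMVolDg cvmvoldg1 (
        Critical = 0
        CVMDiskGroup = testdg
        CVMVolume = { vol1, vol2 }
        CVMVolumeIoTest = { vol1 }
        CVMActivation @galaxy = sw
        CVMActivation @nebula = sw
        )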
CFSMount agent

This agent is IMF-aware and uses the asynchronous monitoring framework (AMF) kernel driver for IMF notification. For more information about the Intelligent Monitoring Framework (IMF) and intelligent resource monitoring, refer to the Veritas Cluster Server Administrator's Guide.

Entry points for CFSMount agent
Table F-7 provides the entry points for the CFSMount agent.
Table F-8    CFSMount Agent attributes (continued)

NodeList (string-keylist)
    List of nodes on which to mount. If NodeList is NULL, the agent uses the service group system list.
Table F-8    CFSMount Agent attributes (continued)

IMF (integer-association)
    Resource-type level attribute that determines whether the CFSMount agent must perform intelligent resource monitoring. You can also override the value of this attribute at resource-level. This attribute includes the following keys:
    ■ Mode: Define this attribute to enable or disable intelligent resource monitoring.
Table F-8    CFSMount Agent attributes (continued)

IMF (integer-association), continued:
    After every (MonitorFreq x OfflineMonitorInterval) number of seconds for offline resources
    ■ RegisterRetryLimit: If you enable intelligent resource monitoring, the agent invokes the oracle_imf_register agent function to register the resource with the AMF kernel driver.
Table F-8    CFSMount Agent attributes (continued)

Primary (string-scalar)
    Information only. Stores the primary node name for a VxFS file system. The value is automatically modified in the configuration file when an unmounted file system is mounted or another node becomes the primary node. The user does not set this attribute and user programs do not rely on this attribute.
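CFSMount agent sample configuration
A minimal sketch of a CFSMount resource, in the style of the CVMCluster sample earlier in this appendix; the mount point and block device are illustrative:

CFSMount testmnt (
        MountPoint = "/mnt1"
        BlockDevice = "/dev/vx/dsk/testdg/vol1"
        )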
To see CFSMount defined in a more extensive example:

CFSfsckd agent

The CFSfsckd agent starts, stops, and monitors the vxfsckd process. The CFSfsckd agent executable is /opt/VRTSvcs/bin/CFSfsckd/CFSfsckdAgent. The type definition is in the /etc/VRTSvcs/conf/config/CFSTypes.cf file. The configuration is added to the main.cf file after running the cfscluster config command.
Attribute definition for CFSfsckd agent
Table F-10 lists user-modifiable attributes of the CFSfsckd Agent resource type.
Table F-10    CFSfsckd Agent attributes

IMF (integer-association)
    Resource-type level attribute that determines whether the CFSfsckd agent must perform intelligent resource monitoring. You can also override the value of this attribute at resource-level. This attribute includes the following keys:
    ■ Mode: Define this attribute to enable or disable intelligent resource monitoring.
Table F-10    CFSfsckd Agent attributes (continued)

IMF (integer-association), continued:
    After every (MonitorFreq x OfflineMonitorInterval) number of seconds for offline resources
    ■ RegisterRetryLimit: If you enable intelligent resource monitoring, the agent invokes the oracle_imf_register agent function to register the resource with the AMF kernel driver.
Appendix G
Troubleshooting information

This appendix includes the following topics:
■ Restarting the installer after a failed connection
■ What to do if you see a licensing reminder
■ Storage Foundation Cluster File System installation issues
■ Storage Foundation Cluster File System problems
■ Upgrading Veritas Storage Foundation for Databases (SFDB) tools from 5.0MP2 to 5.1SP1 (2003131)
WARNING V-365-1-1 This host is not entitled to run Veritas Storage
Foundation/Veritas Cluster Server. As set forth in the End User
License Agreement (EULA) you must complete one of the two options
set forth below. To comply with this condition of the EULA and stop
logging of this message, you have days to either:
- make this host managed by a Management Server (see http://go.symantec.
Failed to setup rsh communication on 10.198.89.241:
'rsh 10.198.89.241 ' failed
Trying to setup ssh communication on 10.198.89.241.
Failed to setup ssh communication on 10.198.89.241:
Login denied
Failed to login to remote system(s) 10.198.89.241.
Please make sure the password(s) are correct and superuser(root)
can login to the remote system(s) with the password(s).
fork() failed: Resource temporarily unavailable

The value of the nkthread tunable parameter may not be large enough. The nkthread tunable requires a minimum value of 600 on all systems in the cluster. To determine the current value of nkthread, enter:

# kctune -q nkthread

If necessary, you can change the value of nkthread using the SAM (System Administration Manager) interface, or by running the kctune command.
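A minimal sketch of a kctune invocation that sets nkthread to the documented minimum; the value you actually need depends on your workload, and some tunables may require a reboot before a new value takes effect:

# kctune nkthread=600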
Storage Foundation Cluster File System problems

If there is a device failure or controller failure to a device, the file system may become disabled cluster-wide. To address the problem, unmount the file system on all the nodes, then run a full fsck; a command sketch follows below. When the file system check completes, mount the file system on all the nodes again.

Unmount failures
The umount command can fail if a reference is being held by an NFS server.
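A minimal sketch of the full file system check mentioned above under Storage Foundation Cluster File System problems, assuming a VxFS file system on an illustrative shared volume:

# fsck -F vxfs -o full -y /dev/vx/rdsk/sharedg/vol1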
■ If mount fails with an error message:
  vxfs mount: device already mounted, ...
  The device is in use by mount, mkfs or fsck on the same node. This error cannot be generated from another node in the cluster.
■ If this error message displays:
  mount: slow
  The node may be in the process of joining the cluster.
Performance issues

Quick I/O
File system performance is adversely affected if a cluster file system is mounted with the qio option enabled, but the file system is not used for Quick I/O files. Because qio is enabled by default, if you do not intend to use a shared file system for Quick I/O, explicitly specify the noqio option when mounting.

High availability issues
This section describes high availability issues.
A similar situation may occur if the values in the /etc/llttab files on all cluster nodes are not correct or identical.

Upgrading Veritas Storage Foundation for Databases (SFDB) tools from 5.0MP2 to 5.1SP1 (2003131)
Appendix H
Troubleshooting cluster installation

This appendix includes the following topics:
■ Installer cannot create UUID for the cluster
■ The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
■ Troubleshooting server-based I/O fencing
■ Troubleshooting server-based fencing on the SFCFS cluster nodes
■ Troubleshooting server-based I/O fencing in mixed mode

Installer cannot create UUID for the cluster
The installer displays the following error message if the installer cannot find
The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails

While running the vxfentsthdw utility, you may see a message that resembles the following:

Issuing SCSI TEST UNIT READY to disk reserved by other node FAILED.
Contact the storage provider to have the hardware configuration fixed.
See "Troubleshooting issues related to the CP server service group" on page 455.
See "Checking the connectivity of CP server" on page 455.
See "Issues during fencing startup on SFCFS cluster nodes set up for server-based fencing" on page 456.
See "Issues during online migration of coordination points" on page 458.
See "Troubleshooting server-based I/O fencing in mixed mode" on page 459.
Troubleshooting server-based fencing on the SFCFS cluster nodes

The file /var/VRTSvcs/log/vxfen/vxfend_[ABC].log contains logs and text files that may be useful in understanding and troubleshooting fencing-related issues on an SFCFS cluster (client cluster) node.
Table H-1    Fencing startup issues on SFCFS cluster (client cluster) nodes (continued)

Issue: Authentication failure
Description and resolution: If you had configured secure communication between the CP server and the SFCFS cluster (client cluster) nodes, authentication failure can occur due to the following causes:
■ Symantec Product Authentication Services (AT) is not properly configured on the CP server a
Table H-1    Fencing startup issues on SFCFS cluster (client cluster) nodes (continued)

Issue: Preexisting split-brain
Description and resolution: Assume the following situations to understand preexisting split-brain in server-based fencing:
■ There are three CP servers acting as coordination points. One of the three CP servers then becomes inaccessible.
■ The coordination points listed in the /etc/vxfenmode file on the different SFCFS cluster nodes are not the same. If different coordination points are listed in the /etc/vxfenmode file on the cluster nodes, then the operation fails due to failure during the coordination point snapshot check.
■ There is no network connectivity from one or more SFCFS cluster nodes to the CP server(s).
Any keys that appear in the command output other than the valid keys used by the cluster nodes are spurious keys.
To troubleshoot server-based I/O fencing configuration in mixed mode

1   Review the current I/O fencing configuration by accessing and viewing the information in the vxfenmode file. Enter the following command on one of the SFCFS cluster nodes:

    # cat /etc/vxfenmode
    vxfen_mode=customized
    vxfen_mechanism=cps
    scsi3_disk_policy=dmp
    security=0
    cps1=[10.140.94.
3   Review the SCSI registration keys for the coordinator disks used in the I/O fencing configuration. The variables disk_7 and disk_8 in the following commands represent the disk names in your setup. Enter the vxfenadm -s command on each of the SFCFS cluster nodes.
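    A minimal sketch of the vxfenadm -s invocations, assuming DMP device paths for the two coordinator disks named above:

    # vxfenadm -s /dev/vx/rdmp/disk_7
    # vxfenadm -s /dev/vx/rdmp/disk_8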
4   Review the CP server information about the cluster nodes. On the CP server, run the cpsadm list nodes command to review a list of nodes in the cluster.

    # cpsadm -s cp_server -a list_nodes

    where cp_server is the virtual IP address or virtual hostname on which the CP server is listening.

5   Review the CP server list membership. On the CP server, run the following command to review the list membership.
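    A sketch of the step 5 command, assuming the cpsadm list_membership action and an illustrative cluster name:

    # cpsadm -s cp_server -a list_membership -c cluster_name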
Appendix I
Sample SFCFS cluster setup diagrams for CP server-based I/O fencing

This appendix includes the following topics:
■ Configuration diagrams for setting up server-based I/O fencing

Configuration diagrams for setting up server-based I/O fencing
The following CP server configuration diagrams can be used as guides when setting up CP server within your configuration:
■ Two unique client clusters that are served by 3 CP servers: See Figure I-1 on page 466.
In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps.

Figure I-1    Two unique client clusters served by 3 CP servers
(Diagram: the client cluster nodes, each with redundant NICs and HBAs on a VLAN private network, point to three CP servers on port 14250; the vxfenmode settings shown in the figure are vxfen_mode=customized, vxfen_mechanism=cps, cps1=[mycps1.company.com]=14250, cps2=[mycps2.company.com]=14250, and cps3=[mycps3.company.com]=14250.)
In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps. The two SCSI-3 disks are part of the disk group vxfencoorddg. The third coordination point is a CP server hosted on an SFHA cluster, with its own shared database and coordinator disks.
Two node campus cluster served by remote CP server and 2 SCSI-3 disks

Figure I-3 displays a configuration where a two-node campus cluster is being served by one remote CP server and 2 local SCSI-3 LUNs (disks).

In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps.
Figure I-3    Two node campus cluster served by remote CP server and 2 SCSI-3 disks
(Diagram: the campus cluster nodes, each with two NICs and two HBAs, are spread across SITE 1 and SITE 2 and connect through Ethernet switches on the LAN and through the SAN; client applications run at each site.)
Multiple client clusters served by highly available CP server and 2 SCSI-3 disks

Figure I-4 displays a configuration where multiple client clusters are being served by one highly available CP server and 2 local SCSI-3 LUNs (disks).

In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps.
Figure I-4    Multiple client clusters served by highly available CP server and 2 SCSI-3 disks
(Diagram: client cluster nodes on VLAN private networks, SCSI-3 coordination disks such as disk1, and a highly available CP server with a virtual IP (VIP), its database under /etc/VRTScps/db, and data LUNs. A callout notes that the coordinator disk group specified in /etc/vxfenmode should have these two disks.)
Appendix J
Reconciling major/minor numbers for NFS shared disks

This appendix includes the following topics:
■ Reconciling major/minor numbers for NFS shared disks

Reconciling major/minor numbers for NFS shared disks
Your configuration may include disks on the shared bus that support NFS. You can configure the NFS file systems that you export on disk partitions or on Veritas Volume Manager volumes. An example disk partition name is /dev/dsk/c1t1d0. An example volume name is /dev/vx/dsk/shareddg/vol3.
Checking major and minor numbers for disk partitions
The following sections describe checking and changing, if necessary, the major and minor numbers for disk partitions used by cluster nodes.

To check major and minor numbers on disk partitions

◆   Use the following command on all nodes exporting an NFS file system. This command displays the major and minor numbers for the block device.
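    A minimal sketch of the command, using the example disk partition name given earlier; the major and minor numbers appear in the output in place of the file size field:

    # ls -lL /dev/dsk/c1t1d0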
3   Attempt to change the major number on System B (now 36) to match that of System A (32). Use the command:

    # haremajor -sd major_number

    For example, on Node B, enter:

    # haremajor -sd 32

4   If the command succeeds, go to step 8.

5   If the command fails, you may see a message resembling:

    Error: Preexisting major number 32
    These are available numbers on this system: 128...
    Check /etc/name_to_major on all systems for available numbers.
3   Type the following command on both nodes to determine the instance numbers that the SCSI driver uses:

    # grep sd /etc/path_to_inst | sort -n -k 2,2

    Output from this command resembles the following on Node A:

    "/sbus@1f,0/QLGC,isp@0,10000/sd@0,0" 0 "sd"
    "/sbus@1f,0/QLGC,isp@0,10000/sd@1,0" 1 "sd"
    "/sbus@1f,0/QLGC,isp@0,10000/sd@2,0" 2 "sd"
    "/sbus@1f,0/QLGC,isp@0,10000/sd@3,0" 3 "sd"
    .
    .
Checking the major and minor number for VxVM volumes
The following sections describe checking and changing, if necessary, the major and minor numbers for the VxVM volumes that cluster systems use.

To check major and minor numbers on VxVM volumes

1   Place the VCS command directory in your path.
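    A minimal sketch of a PATH setting that includes the VCS command directory; the directory list is illustrative:

    # export PATH=$PATH:/usr/sbin:/sbin:/opt/VRTS/bin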
4   Use the following command on each node exporting an NFS file system. The command displays the major numbers for vxio and vxspec that Veritas Volume Manager uses.
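    A minimal sketch of the command, assuming the /etc/name_to_major file referenced elsewhere in this appendix; the major numbers in the sample output are illustrative:

    # grep vx /etc/name_to_major
    vxio 32
    vxspec 33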
6   If you receive this report, use the haremajor command on Node A to change the major number (32/33) to match that of Node B (36/37). For example, enter:

    # haremajor -vx 36 37

    If the command fails again, you receive a report similar to the following:

    Error: Preexisting major number 36
    These are available numbers on this node: 126...
    Check /etc/name_to_major on all systems for available numbers.
Appendix K
Configuring LLT over UDP using IPv6

This appendix includes the following topics:
■ Using the UDP layer of IPv6 for LLT
■ Manually configuring LLT over UDP using IPv6

Using the UDP layer of IPv6 for LLT
Veritas Storage Foundation Cluster File System 5.1 SP1 provides the option of using LLT over the UDP (User Datagram Protocol) layer for clusters using wide-area networks and routers. UDP makes LLT packets routable and thus able to span longer distances more economically.
■ Make sure the IPv6 addresses in the /etc/llttab files are consistent with the IPv6 addresses of the network interfaces.
■ Make sure that each link has a unique not well-known UDP port.
  See "Selecting UDP ports" on page 483.
■ For the links that cross an IP router, disable multicast features and specify the IPv6 address of each link manually in the /etc/llttab file.
The set-addr command in the /etc/llttab file
The set-addr command in the /etc/llttab file is required when the multicast feature of LLT is disabled, such as when LLT must cross IP routers.
See "Sample configuration: links crossing IP routers" on page 485.
Table K-2 describes the fields of the set-addr command.
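As a reference sketch, a set-addr entry follows the format used in the sample configurations later in this appendix; the node ID, link tag, and address below are illustrative:

#format: set-addr node-id link tag-name address
set-addr 1 link1 fe80::21a:64ff:fe92:1a92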
udp        0      0  *.49196      *.*
udp        0      0  *.*          *.*
udp        0      0  *.snmp       *.*
udp        0      0  *.*          *.*
udp        0      0  *.49153      *.*
udp        0      0  *.echo       *.*
udp        0      0  *.discard    *.*
udp        0      0  *.daytime    *.*
udp        0      0  *.chargen    *.*
udp        0      0  *.syslog     *.*

Look in the UDP section of the output; the UDP ports that are listed under Local Address are already in use.
The configuration that the /etc/llttab file for Node 0 represents has directly attached crossover links. It might also have links that are connected through a hub or switch. These links do not cross routers.

LLT uses IPv6 multicast requests for peer node address discovery. So the addresses of peer nodes do not need to be specified in the /etc/llttab file using the set-addr command.
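A minimal sketch of such an /etc/llttab file for Node 0, following the link-command format used in the router example later in this appendix; the cluster ID, UDP ports, and link-local addresses are illustrative:

set-node Node0
set-cluster 1
#configure Links
#link tag-name device node-range link-type udp port MTU IP-address
link link1 /dev/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1a92
link link2 /dev/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1a93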
Figure K-2    A typical configuration of links crossing an IP router
(Diagram: Node0 on site A uses link1 at UDP port 50000 with IP fe80::21a:64ff:fe92:1a92 and link2 at UDP port 50001 with IP fe80::21a:64ff:fe92:1a93. Node1 on site B uses link1 with IP fe80::21a:64ff:fe92:1b46 and link2 with IP fe80::21a:64ff:fe92:1b47. The links pass through routers.)

The configuration that the following /etc/llttab file represents for Node 1 has links crossing IP routers.
link link1 /dev/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1b46
link link2 /dev/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1b47

#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 fe80::21a:64ff:fe92:1a92
set-addr 1 link2 fe80::21a:64ff:fe92:1a93
set-addr 2 link1 fe80::21a:64ff:fe92:1d70
set-addr 2 link2 fe80::21a:64ff:fe92:1d71
set-addr 3 link1 fe80::209:6bff:
Appendix L
Configuring LLT over UDP using IPv4

This appendix includes the following topics:
■ Using the UDP layer for LLT
■ Manually configuring LLT over UDP using IPv4

Using the UDP layer for LLT
Veritas Storage Foundation Cluster File System 5.1 SP1 provides the option of using LLT over the UDP (User Datagram Protocol) layer for clusters using wide-area networks and routers. UDP makes LLT packets routable and thus able to span longer distances more economically.
  If the LLT private links are not on different physical networks, then make sure that the links are on separate subnets. Set the broadcast address in /etc/llttab explicitly depending on the subnet for each link.
  See "Broadcast address in the /etc/llttab file" on page 490.
■ Make sure that each NIC has an IP address that is configured before configuring LLT.
set-cluster 1
link link1 /dev/udp - udp 50000 - 192.168.9.2 192.168.9.255
link link2 /dev/udp - udp 50001 - 192.168.10.2 192.168.10.255

Verify the subnet mask using the ifconfig command to ensure that the two links are on separate subnets.

nebula # ifconfig lan1
lan1: flags=1843
        inet 192.168.9.2 netmask ffffff00 broadcast 192.168.9.255
Table L-1    Field description for link command in /etc/llttab (continued)

IP address
    IP address of the link on the local node.

bcast-address
    ■ For clusters with enabled broadcasts, specify the value of the subnet broadcast address.
    ■ "-" is the default for clusters spanning routers.
Proto  Recv-Q  Send-Q  Local Address   Foreign Address
udp    0       0       *.ntalk         *.*
udp    0       0       *.*             *.*
udp    0       0       *.49193         *.*
udp    0       0       *.49152         *.*
udp    0       0       *.portmap       *.*
udp    0       0       *.*             *.*
udp    0       0       *.135           *.*
udp    0       0       *.2121          *.*
udp    0       0       *.xdmcp         *.*
udp    0       0       *.49196         *.*
udp    0       0       *.*             *.*
udp    0       0       *.snmp          *.*
udp    0       0       *.*             *.*
udp    0       0       *.49153         *.*
udp    0       0       *.echo          *.*
udp    0       0       *.discard       *.*
udp    0       0       *.daytime       *.*
udp    0       0       *.chargen       *.*
udp    0       0       *.syslog        *.*
■ For the second network interface on the node galaxy:
  IP address=192.168.10.1, Broadcast address=192.168.10.255, Netmask=255.255.255.0
■ For the second network interface on the node nebula:
  IP address=192.168.10.2, Broadcast address=192.168.10.255, Netmask=255.255.255.0
Figure L-1    A typical configuration of direct-attached links that use LLT over UDP
(Diagram: Node0 has UDP endpoints on /dev/udp with link1 at UDP port 50000, IP 192.1.2.1, and link2 at UDP port 50001, IP 192.1.3.1. Node1 has /dev/udp endpoints with link1 at IP 192.1.2.2 and link2 at IP 192.1.3.2. Each link connects through a switch.)
#configure Links
#link tag-name device node-range link-type udp port MTU \
 IP-address bcast-address
link link1 /dev/udp - udp 50000 - 192.1.2.2 192.1.2.255
link link2 /dev/udp - udp 50001 - 192.1.3.2 192.1.3.255

Sample configuration: links crossing IP routers
Figure L-2 depicts a typical configuration of links crossing an IP router employing LLT over UDP. The illustration shows two nodes of a four-node cluster.
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3

#disable LLT broadcasts
set-bcasthb 0
set-arp 0

The /etc/llttab file on Node 0 resembles:

set-node Node0
set-cluster 1

link link1 /dev/udp - udp 50000 - 192.1.1.1
link link2 /dev/udp - udp 50001 - 192.1.2.1

#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 192.1.3.1
set-addr 1 link2 192.1.4.1
Index A adding users 123 agents about 419 CFSfsckd 440 CFSMount 433, 440 CVMCluster 423 CVMVolDg 429 CVMVxconfigd 425 disabling 356 of VCS 420 application database replication 331 applications, stopping 201 attributes about agent attributes 419 CFSMount agent 434, 441 CVMCluster agent 423 CVMVolDg agent 423, 431 CVMVxconfigd agent 426 UseFence 157 B block device partitions example file name 473 volumes example file name 473 C cables cross-over Ethernet 279 for SCSI devices 35 CFS mount and unmount failur
500 Index configuring VCS (continued) starting 112 coordinator disks DMP devices 84 for I/O fencing 84 setting up 156 CVM CVMTypes.cf file 424 CVMCluster agent 423 attributes 423 entry points 423 sample configuration 425 type definition 424 CVMTypes.cf definition, CVMCluster agent 424 definition, CVMVolDg agent 433 definition, CVMVxconfigd agent 429 CVMVolDg agent 429 attributes 431 entry points 430 sample configuration 433 type definition 433 CVMVxconfigd agent 425 attributes 426 CVMTypes.
Index J jeopardy 451 K kctune command 448 L license keys adding with vxlicinst 130 replacing demo key 131 licenses information about 130 links private network 263 LLT interconnects 33 verifying 266 lltconfig command 263 llthosts file verifying after installation 263 lltstat command 266 llttab file verifying after installation 263 log files 454 M main.
502 Index primary site (continued) setting up replication objects 325 VCS configuration 335, 339 problems accessing manual pages 450 executing file system commands 450 mounting and unmounting file systems 449 Q Quick I/O performance on CFS 451 R removing the Replicated Data Set 357 removing a node from a cluster 298 remsh 113 configuration 34 Replicated Data Set removing the 357 replication automatic synchronization 331 configuring on both sites 317 full synchronization with Checkpoint 332 modifying VCS
Index Symantec Product Authentication Service 77, 119 system state attribute value 270 T troubleshooting accessing manual pages 450 executing file system commands 450 mounting and unmounting file systems 449 U upgrading clustered environment 143 upgrading VVR planning 197 preparing 201 V VCS command directory path variable 266 configuration, for database volume replication 333 configuring service groups 317 VCS configuration for replication 334 VCS Global cluster option.