HP StorageWorks RAID Manager XP User’s Guide XP48 XP128 XP512 XP1024 XP10000 XP12000 ninth edition (November 2005) part number: T1610-96004 This guide describes HP StorageWorks RAID Manager XP (RM) and provides installation and configuration procedures, RM command usage, and troubleshooting instructions.
© Copyright 2003-2005 by Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

About this guide 9
  Intended audience 9
  Disk arrays 9
  Related documentation 9
  HP technical support 10
  HP storage website 10
  HP sales and authorized resellers 11
  Document conventions 11
  Revision history 12
  Warranty statement 13
  HP StorageWorks LUN Security XP Extension disclaimer

1 Description 17
  RAID Manager features and environment
  Continuous Access (CA) 19
  Business Copy (BC) 20
  Pairs and pair management 21
  RAID Manager instances 23
  RAID Manager command device 25
  Manually switching command devices

2 Installation and configuration
  Installing RAID Manager on OpenVMS systems 37
  Configuring the services and hosts files 39
  Directory locations 39
  Services file 40
  Hosts file 40
  Setting up the RM instance configuration file 41
  RM instance configuration files 41
  Creating an instance configuration file 42
  RM instance configuration file parameters 43
  HORCM_MON section 44
  HORCM_CMD section 46
  HORCM_DEV section 49
  HORCM_LDEV section 52
  HORCM_INST section 53
  Starting the instances 54
  Environment variables for BC 54
  Environment variables for CA 56

3 Using RAID Manager
  RM protection 80
  Protection facility specification 81
  Permission command 82
  Protection facility support 82
  Command device configuration 83
  Commands controlled by RM protection 86
  Permitting operations on protected volumes 87
  Environment variables 91
  Identifying a command device using protection mode 92
  Using RAID Manager on a Windows 2000/2003 system with “user” system privileges 93
  Windows System Administrator 93
  RAID Manager user 98
  Sample BAT file 100
  LUN Security Extension 102
  Guarding options 102
  Comma

4 RAID Manager command reference
  raidscan 202
  Command Options for Windows NT/2000/2003 214
  drivescan 215
  env 217
  findcmddev 218
  mount 220
  portscan 223
  setenv 225
  sleep 226
  sync 227
  umount 231
  usetenv 233
  Data Integrity Check Commands 235
  raidvchkset 236
  raidvchkdsp 243
  raidvchkscan 250

5 Troubleshooting RAID Manager 263
  Error reporting 264
  Operational notes 265
  Error codes 268
  Command return values 270
  Command errors 273

A Configuration file examples 279
  Configuration definition for cascading volumes 280
  Correspondence between a c
  Two BC mirror configuration 319
  Three-host BC configuration 321
  Device group configuration 323

B
C HA Failover and failback 325
  Using RAID Manager in HA environments 326
  HA control script state transitions 326
  Failback after SVOL-SMPL takeover 330
  PVOL-PSUE takeover 335
  S-VOL data consistency function 343
  Takeover-switch function 346
  Swap-takeover function 348
  SVOL-takeover function 350
  PVOL-takeover function 352
  Recovery procedures of HA system configuration
  Regression and recovery of CA 356
  CA recovery
HP StorageWorks Disk Array XP RAID Manager: User’s Guide
About this guide This guide describes HP StorageWorks RAID Manager XP (RM) and provides installation and configuration procedures, RM command usage, and troubleshooting instructions. It also has configuration file examples and information about High Availability failover and failback, Fibre Channel addressing, and standard input (STDIN) file formats.
HP technical support In North America, call technical support at 1-800-633-3600, available 24 hours a day, 7 days a week. Outside North America, call technical support at the location nearest you. The HP web site lists telephone numbers for worldwide technical support at: http://www.hp.com/support. From this web site, select your country.
HP sales and authorized resellers To reach HP sales or find a local authorized reseller of HP products, call 1-800-282-6672 or visit the HP How To Buy web site: http://welcome.hp.com/country/us/en/howtobuy.html You can also find HP sales and resellers at http://www.hp.com. Click Contact HP.
Document conventions
Convention: Blue text (Figure 1)
Element: Represents a cross-reference. In the online version of this guide, the reference is linked to the target.
Revision history
September 1999: OPEN-8 emulation added.
January 2000: Content extensively revised and reorganized.
September 2000: Content extensively revised.
February 2001: Added support of MPE/iX. Content significantly enhanced.
March 2001: Added mkconf command. Content enhanced.
November 2003: Added Oracle Data Validation. Added OpenVMS. Content significantly enhanced.
July 2004: General edit of content, layout, and language. General update to reflect recent changes.
Warranty statement HP warrants that for a period of ninety calendar days from the date of purchase, as evidenced by a copy of the invoice, the media on which the Software is furnished (if any) will be free of defects in materials and workmanship under normal use. DISCLAIMER. EXCEPT FOR THE FOREGOING AND TO THE EXTENT ALLOWED BY LOCAL LAW, THIS SOFTWARE IS PROVIDED TO YOU “AS IS” WITHOUT WARRANTIES OF ANY KIND, WHETHER ORAL OR WRITTEN, EXPRESS OR IMPLIED.
LIMITATION OF LIABILITY.
HP StorageWorks LUN Security XP Extension disclaimer HP StorageWorks LUN Security XP Extension provides the ability to place logical volumes into secure states. In these secure states, data on the volumes cannot be modified until the retention time, specified when the volume is placed in the secure state, has elapsed.
1 Description HP StorageWorks RAID Manager XP (RM) enables you to perform operations with HP StorageWorks Continuous Access XP (CA) and HP StorageWorks Business Copy XP (BC) by issuing commands from a host to the disk array. The RM software interfaces with the host system software and host high availability (HA) software, as well as with the BC and CA software on the disk array.
RAID Manager features and environment RAID Manager lets you issue Business Copy (BC) and Continuous Access (CA) commands from a host. These commands can be issued from the command line or built into a script (for example, a ksh or perl script, or an MS-DOS batch file). By using scripts containing RM commands, you can execute a large number of BC and CA commands in a short period of time. In MPE/iX, you can create POSIX command scripts.
Continuous Access (CA) CA copies data from a local HP XP disk array to one or more remote HP XP disk arrays. You can use CA for data duplication, migration, and offsite backup. RM displays CA volume or group information and allows you to perform CA operations through either the command line, a script (UNIX), or a batch file (Windows). CA has a number of features that ensure reliable transfers in asynchronous mode, including journaling and protection against link failure.
CA-Journal: CA-Journal is supported on XP10000/XP12000 arrays. CA-Journal works in principle the same way as CA-Async, but instead of buffering write I/Os in the more expensive and limited XP array cache (the side file), CA-Journal writes data to special XP LUNs called journal pools. Journal pools can consist of up to 16 physical LDEVs of any size, and can therefore buffer much larger amounts of data.
SnapShot employs two techniques: • creating or mapping a virtual volume (V-VOL) • copy-on-write to a SnapShot pool volume (pool-VOL) identified by a pool ID. SnapShot does not require any new RM commands; it uses current BC commands with new arguments. Note: SnapShot is used in UNIX and Windows environments only. SnapShot does not work in MPE/iX and OpenVMS environments. The following figure illustrates the basic concept.
The relationship between a P-VOL and an S-VOL is called a pair. You can use RM’s paircreate command to establish pairs. Once a pair is established, updates to the P-VOL are automatically and continuously copied to the S-VOL. There are other commands to manage pairs. You can temporarily suspend copy operations, create a SnapShot pair, resync the pair, and delete the pair relationship.
RAID Manager instances Each execution of RM is known as an RM instance. Instances are local or remote and can run on the same host or different hosts. Two RM instances are typically required to manage BC or CA pairs. Local instance The RM instance currently being used, that is, the instance to which commands are issued. Local instances link to remote instances by using UDP socket services.
of data are administered by different hosts, guarding against host and disk failure. This is the configuration used by high availability (HA) software (such as HP MetroCluster) in conjunction with RAID Manager’s horctakeover command (see page 114) allowing for both failover and failback.
RAID Manager command device You must designate a special volume on the disk array as the RAID Manager command device. The command device accepts BC or CA control operations. These are seen as in-band SCSI read and write commands, and are executed by the disk array. The volume designated as the command device is used only by RM and is blocked from other user access. The command device can be any OPEN-x device that the host can access. An RM command device uses a minimum of 16 MB of space.
Manually switching command devices To avoid having commands terminate abnormally during a failure, RM has a command device alternating function, which allows you to switch command devices. When RM receives an error notification from the operating system, RM automatically switches to the alternate command device. You can also switch command devices manually by issuing an RM horcctl command. See “horcctl” (page 109).
2 Installation and configuration This chapter describes how to install and configure RAID Manager for UNIX, Windows, MPE/iX, and OpenVMS systems.
Disk array and host requirements RM requires an activated installation of BC or CA on the disk array.
• Plan the mapping of the CA disk volume pairs. Determine which volumes to access. • Map the paths to be used for each host. Using RAID Manager with Business Copy • Have your HP representative configure the disk array for BC functions. • Install the BC license key on the disk array. • Designate one or more RM command devices using Command View XP, LUN Configuration Manager XP, Remote Web Console XP, or Command View XP Advanced Edition.
Installation and configuration outline RM installation and configuration consists of the following tasks. Task details appear in the subsequent sections. • Installing RAID Manager Install the RM software on the hosts. • Configuring the services and hosts files Add a service name/number to the host services file (for example, /etc/services) for each RM instance. Configure the hosts file. • Setting up the RM instance configuration file Configure paths to one or more RM command devices for each host.
Installing RAID Manager on UNIX systems Follow the steps specific to your UNIX system to install RM. Note: Before performing the installation (upgrade), shut down all active RM instances that are running on the primary host and any secondary hosts it is communicating with. 1. Place the CD-ROM in the CD-ROM drive. 2. Identify the CD-ROM device file to be substituted in the mount commands below (for example, /dev/dsk/c1t1d0). 3. Log in as root. su root 4.
7. From the /opt directory, use cpio to unpack the appropriate archive. Create the HORCM directory if it does not already exist.
   cd /opt
   mkdir HORCM
   Choose the next command according to your OS:
   cat /cdrom/LINUX/rmxp* | cpio -idum
   cat /cdrom/AIX/rmxp* | cpio -idum
   cat /cdrom/DIGITAL/rmxp* | cpio -idum
   cat /cdrom/HP_UX/rmxp* | cpio -idum
   cat /cdrom/SOLARIS/rmxp* | cpio -idum
8. Change the directory to /opt/HORCM and verify the contents.
   cd /opt/HORCM
   ls
   Example
   etc horcmuninstall.
Installing RAID Manager on Windows systems 1. Boot the Windows server and log in with administrator access. 2. Insert the RAID Manager CD in the CD-ROM drive. 3. Under the Start menu, select Run. 4. When the Run window opens, enter D:\WIN_NT\setup.exe (where D is the letter of your CD-ROM drive) in the Open dialog box and click OK. 5. The installation wizard opens. Follow the on-screen instructions to install the RM software.
Installing RAID Manager on MPE/iX systems Note: Before performing the installation (upgrade), shut down all active RM instances that are running on the primary host and any secondary hosts it is communicating with. 1. Update your system with MPE/iX 6.5 or greater, along with that OS version’s latest Power Patch. 2. Install the MPE/iX RAID Manager Patch ID XPMMX65. 3. Verify that at least one logical volume on the disk array is configured to function as a command device.
6. Once the above installation completes successfully, create the device files:
   Shell/iX> mknod /dev/ldev99 c 31 99    ← LDEV device
   Shell/iX> mknod /dev/ldev100 c 31 100  ← LDEV device
   Shell/iX> mknod /dev/cmddev c 31 102   ← Command device
   The 31 in the above example is called the major number. The 99, 100, and 102 are called minor numbers. For RAID Manager, always specify 31 as the major number. The minor number should correspond to the LDEV numbers as configured in sysgen.
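Because the minor number must track the LDEV number, the mknod commands lend themselves to a small script. The sketch below only prints the commands (a dry run), so it can be reviewed before anything is created; the LDEV numbers 99–101 are hypothetical examples, and major number 31 is the fixed RAID Manager value described above.

```shell
# Dry run: print the mknod command for each LDEV device file.
# Major number 31 is fixed for RAID Manager; the minor number must match
# the LDEV number configured in sysgen. LDEV numbers here are examples.
cmds=$(for minor in 99 100 101; do
  printf 'mknod /dev/ldev%s c 31 %s\n' "$minor" "$minor"
done)
echo "$cmds"
```

Remove the dry-run indirection (run mknod directly) once the list of LDEV numbers is confirmed against the sysgen configuration.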
DEVICE_FILE   UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PROD_ID
/dev/cmddev    0    S   CL1-D    1    0   35393     22  OPEN-3-CM
/dev/ldev407   0    S   CL1-E    8    0   35393    263  OPEN-3
/dev/ldev408   0    S   CL1-E    9    0   35393    264  OPEN-3
/dev/ldev409   0    S   CL1-E   10    0   35393    265  OPEN-3
/dev/ldev410   0    S   CL1-E   11    0   35393    266  OPEN-3
/dev/ldev411   0    S   CL1-E   12    0   35393    267  OPEN-3
/dev/ldev412   0    S   CL1-E   13    0   35393    268  OPEN-3
11. Now fill in the HORCM_DEV and HORCM_INST sections in your /etc/horcm#.conf files.
Installing RAID Manager on OpenVMS systems Installation prerequisites • A user account for RAID Manager must have the same privileges as “SYSTEM” (that is, it must be able to use the SCSI class driver and Mailbox driver directly). Some OpenVMS system administrators may not allow RAID Manager to run from the system account. In this case, create another account on the system, such as “RMadmin” that has the same privileges as “SYSTEM.
Installation
Install RAID Manager by using the file HP-AXPVMS-RMXP-V0117-3-1.PCSI.
1. Insert and mount the installation media.
2. Execute the following command:
   $ PRODUCT INSTALL RMXP /source=Device:[PROGRAM.RM.OVMS]/LOG
   _$ /destination=SYS$POSIX_ROOT:[000000]
   where Device:[PROGRAM.RM.OVMS] is the location of the file HP-AXPVMS-RMXP-V0117-3-1.PCSI.
3. Confirm the installation:
   $ raidqry -h
   Model : Raid-Manager-XP/OpenVMS Ver&Rev: 01.17.
Configuring the services and hosts files After installing, configuring RM requires editing the services and hosts files on the hosts that run RM instances. Directory locations UNIX The services and hosts files are contained in this directory: /etc Windows NT/2000/2003 The services and hosts files are contained in this directory: %systemroot%\system32\drivers\etc MPE/iX The services and hosts files are contained in the MPE group directory: SERVICES.NET.SYS HOSTS.NET.
Services file
To configure the services file:
1. Edit the services file on each system.
2. Add a udp service entry for each RM instance that runs on the host and each RM instance referenced in the configuration file. The service number selected must be unique within the services file and in the range 1024 to 65535.
Example
horcm0   11000/udp   #RaidManager instance 0
horcm1   11001/udp   #RaidManager instance 1
To configure the services file in MPE/iX:
1. Add a service entry for each RM instance in the SERVICES.
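Adding these entries can be scripted. The sketch below is illustrative only: it works on a scratch copy created with mktemp rather than on /etc/services itself, and the instance names and port numbers are the example values above, not mandated ones. It skips any entry whose service name is already present.

```shell
# Add one udp service entry per RM instance to a scratch services file,
# skipping service names that already exist (idempotent).
svc=$(mktemp)
for entry in 'horcm0  11000/udp  #RaidManager instance 0' \
             'horcm1  11001/udp  #RaidManager instance 1'; do
  name=${entry%% *}                       # service name = first field
  grep -q "^$name[[:space:]]" "$svc" || printf '%s\n' "$entry" >> "$svc"
done
cat "$svc"
```

Running the loop a second time adds nothing, which makes it safe to include in a repeatable host-setup script.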
Setting up the RM instance configuration file Each BC and CA pair has a primary volume (P-VOL), the volume that contains the data to be copied, and a secondary volume (S-VOL), the volume that receives the data from the primary volume. Each of these volumes is linked to at least one instance of RM for the purpose of pair creation, suspension, and deletion. Each instance of RM can manage multiple volumes (on up to four arrays) and manage either P-VOLs or S-VOLs.
MPE/iX An example horcm.conf file can be found in the /HORCM/etc directory. See Appendix E, Porting notice for MPE/iX (page 367). OpenVMS See Appendix F, Porting notice for OpenVMS (page 377). Creating an instance configuration file When you create an RM configuration file, follow this naming convention, where instance is the instance number: horcminstance.conf Example horcm0.conf
RM instance configuration file parameters The configuration file contains all parameters and values for an RM instance.
HORCM_MON section
Description
The HORCM_MON section describes the host name or IP address, the port number, and the paired volume error monitoring interval of the local host.
Syntax
HORCM_MON
{ host_name | IP_address } { service_name | service_number } poll_value timeout_value
host_name Name of the host on which this RM instance runs.
IP_address IP address of the host on which this RM instance runs.
Examples
HORCM_MON
blue horcm1 1000 3000
The RM instance runs on host blue with service name horcm1, a poll value of 10 seconds, and a timeout value of 30 seconds.
HORCM_MON
NONE horcm1 1000 3000
The host name NONE indicates that two or more network cards are installed in the server, or that several networks (subnets) are configured, and that RM listens on all networks. The service name is horcm1, with a poll value of 10 seconds and a timeout value of 30 seconds.
HORCM_CMD section Description The HORCM_CMD section defines the RM command devices RM uses to communicate with the disk array. An RM command writes command data to the special disk array command device; the disk array then reads this data and carries out the appropriate actions. Multiple command devices are defined in this section of the configuration file to provide alternate command devices and paths in the event of failure. It is recommended that each host have a unique command device.
This HP-UX example shows multiple disk arrays connected to the host. One RM instance can control multiple disk arrays. To enable this feature, the different command devices have to be specified on different lines. RM uses unit IDs to control multiple disk arrays. A device group can span multiple disk arrays (sync-CA only). The unit ID must be appended for every volume device name in the HORCM_DEV section, as shown in the following figure.
Windows NT/2000/2003
HORCM_CMD
\\.\PHYSICALDRIVE3
This example shows the path to a shared command device in Windows.
HORCM_CMD
\\.\Volume{GUID}
This example shows the use of a Volume GUID for the command device in Windows.
MPE/iX See Appendix E, Porting notice for MPE/iX (page 367).
OpenVMS See Appendix F, Porting notice for OpenVMS (page 377).
HORCM_DEV section Description Syntax The HORCM_DEV section describes the physical volumes corresponding to the paired volume names. Each volume listed in HORCM_DEV is defined on a separate line. HORCM_DEV device_group device_name port target_ID LUN [ mirror_unit ] device_group Each device group contains one or more volumes. This parameter gives you the capability to act on a group of volumes with one RM command. The device group can be any user-defined name up to 31 characters in length.
If mirror_unit is omitted, the value h0 is assumed. Mirror unit values h1, h2, and h3 are valid only for CA-Journal operations.
Example HORCM_DEV group1 g1–d1 CL1–A 12 1 0 This example shows a volume defined in device group1 known as device g1–d1. It is accessible through disk array unit 0 and I/O port CL1-A. The SCSI target ID is 12, the LUN is 1, and the BC mirror unit number is 0. You can use RM to control multiple disk arrays with one RM instance by specifying the unit ID appended to the port. This example refers to the example in the HORCM_CMD section (page 46).
HORCM_LDEV section Description The HORCM_LDEV section specifies stable LDEV numbers and serial numbers of the physical volumes that correspond to the paired logical volume names. Each group name is unique and typically has a name fitting its use (for example, database data, redo log file, UNIX file). The group and paired logical volume names described in this item must also be known to the remote server.
HORCM_INST section Description Syntax Example The HORCM_INST section defines how RM groups link to remote RM instances. HORCM_INST device_group { host_name | IP_address } { service_name | service_number } device_group Defined in the HORCM_DEV section. Each group defined in HORCM_DEV must be represented in the HORCM_INST section only once for every remote RM instance. host_name Host name of the host on which the remote instance runs. The remote instance can run on the same host as the local instance.
Starting the instances After setting up the RM instance configuration files, you can start the instances. HP-UX Run this shell command on each host that runs an RM instance: /usr/bin/horcmstart.sh [ instance_number ] [ instance_number ] . . . If you do not specify an instance number, the command uses the value stored in the HORCM_INST environment variable. The default value is 0.
as in the following environment variable examples, where n is the value of the RM instance.
UNIX
For UNIX ksh, use the export command:
export HORCC_MRCF=1
export HORCMINST=n
For UNIX csh, use the setenv command:
setenv HORCC_MRCF 1
setenv HORCMINST n
Windows NT/2000/2003
For Windows NT/2000/2003, use the set command:
set HORCC_MRCF=1
set HORCMINST=n
MPE/iX
For MPE/iX, use the setenv command:
setenv HORCC_MRCF 1
setenv HORCMINST n
OpenVMS
For OpenVMS, set the environment variable using symbol.
Environment variables for CA To issue CA commands, the HORCC_MRCF environment variable must be removed and the HORCMINST environment variable must be set. Setting HORCC_MRCF to a null value is not sufficient; the variable must be unset.
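For example, switching a ksh session from BC mode to CA mode looks like the following sketch (instance number 0 is an example value). The key point is that HORCC_MRCF is removed with unset, not merely assigned an empty value.

```shell
# BC mode was selected earlier in the session:
export HORCC_MRCF=1
export HORCMINST=0

# Switch to CA mode: HORCC_MRCF must be unset, not set to "".
unset HORCC_MRCF
export HORCMINST=0
```

After the unset, CA commands issued from this session are directed at instance 0.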
Paired volume configuration Users describe, in a configuration definition file, the connections between the physical volumes used by the servers and the paired logical (named) volumes, along with the names of the remote servers connected to those volumes. See the figure below.
[Figure: configuration definition files on HOSTA and HOSTB. Each file maps device groups (G1, G2) and paired volume names (Oradb1, Oradb2, Oradb3) to physical volumes by port, target ID, and LUN (for example, P1,T1,L1), and names the remote host (HOSTB, HOSTC) connected to each group.]
3 Using RAID Manager This chapter discusses pair commands, scripts, definitions, log and user-created files, variables, protection, and LUN security for RAID Manager (RM).
RAID Manager pair commands To create and manage CA and BC pairs with RM, use the following commands: paircreate Establishes a primary to secondary pair relationship between volumes. See “paircreate” (page 134). pairdisplay Displays the state of volumes. See “pairdisplay” (page 145). pairsplit Suspends or deletes a paired volume. See “pairsplit” (page 173). pairresync Restores a volume from a PSUE/PSUS/SSWS state to a PAIR state. See “pairresync” (page 165).
RAID Manager commands in scripts An RM script is a list of instructions contained in a host file to automate a series of CA and BC operations. The host reads the script file and carries out each command as if it were typed in individually. Using RM host scripting, you can execute a large number of CA and BC commands in rapid sequence.
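As an illustration only, the sequence of a script-driven split-mirror backup can be sketched as below. The group name dbgroup, the instance number, and the dry-run wrapper are hypothetical; each step only echoes the RM command it would issue, so the ordering can be reviewed without touching an array. The real commands are documented individually later in this guide.

```shell
# Dry-run sketch of a scripted BC split-mirror backup sequence.
# run() echoes each RM command instead of executing it.
run() { echo "would run: $*"; }

export HORCMINST=0      # RM instance number (example value)
export HORCC_MRCF=1     # select BC commands for this session

run paircreate -g dbgroup -vl              # establish the pair
run pairevtwait -g dbgroup -s pair -t 3600 # wait for PAIR status
run pairsplit -g dbgroup                   # split; S-VOL holds a point-in-time copy
# ... back up the S-VOL contents here ...
run pairresync -g dbgroup                  # rejoin the pair afterwards
```

In a production script, run() would execute the commands and check each return value before proceeding to the next step.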
Paired CA volume status definitions Each pair of CA volumes consists of a primary volume (P-VOL) and secondary volume (S-VOL). Each pair has six possible paired statuses. The major CA pair statuses are: • SMPL • PAIR • COPY • PSUS • PSUE • PFUS The P-VOL controls the status for the pair, which is reflected in the status of the S-VOL. When you issue a CA command, the status usually changes.
If one of the volumes making up an aggregated LUSE volume is in PSUE status, the LUSE volume is reported in PDUB (dubious) status.
Paired BC volume status definitions Each pair of BC volumes consists of a primary volume (P-VOL) and secondary volume (S-VOL). Each volume maintains its own pair status. The major BC pair statuses are: • SMPL • PAIR • COPY • RCPY • PSUS • PSUE The P-VOL controls the pair state that is typically reflected in the status of the S-VOL. The status can be changed when a RM command is issued. A read or write request from the host is allowed or rejected according to the status, as shown in the following figure.
[Figure: BC pair — the host reads/writes the P-VOL; BC copies asynchronously to the S-VOL, and a restore copy runs in the reverse direction.]
Status  Pairing status                                                   Primary      Secondary
SMPL    Unpaired volume                                                  R/W enabled  R/W enabled
PAIR    Paired/duplicated volumes. Data in the primary and secondary
        volumes are not assured to be identical.                         R/W enabled  R enabled (See Note 2)
COPY    In paired state, but copying to the secondary volume is not
        completed. The P-VOL/S-VOL are not assured to be identical.      R/W enabled
Paired SnapShot volume status definitions Each pair of SnapShot volumes consists of a primary volume (P-VOL) and a secondary volume (S-VOL), which is actually a virtual volume (V-VOL). Each volume maintains its own pair status. The supported volume type is OPEN-V only for the P-VOL, and OPEN-0V for the S-VOL. The major SnapShot pair statuses are: • SMPL • PAIR • COPY • RCPY • PSUS • PSUE The P-VOL controls the pair state that is typically reflected in the status of the S-VOL.
Status       Pairing status                                                  Primary      Secondary
SMPL         Unpaired (SnapShot) volume                                      R/W enabled  R/W disabled (Note 2)
PAIR (PFUL)  The snapshot-available state; the resource is allocated.        R/W enabled  R/W disabled
COPY         The preparing state; allocates the resource for the snapshot.   R/W enabled  R/W disabled
RCPY         The copying state from the snapshot to the primary volume,
             using the restore option.
File types and structure The RM product includes files supplied for the user, log files created internally, and files created by the user. These files are stored on the server's local disk. See the following tables.
Title                            File name, Location          Executable for Command
HORCM (RM)                       /etc/horcmgr                 none
HORCM_CONF                       /HORCM/etc/horcm.conf        none
Takeover                         /usr/bin/horctakeover        horctakeover
Make configuration               /usr/bin/mkconf.
Trace control                    /usr/bin/horcctl             horcctl
Synchronization waiting command  /usr/bin/pairsyncwait        pairsyncwait
HORCM (RM) activation script     /usr/bin/horcmstart.sh       horcmstart.sh
HORCM shutdown                   /usr/bin/horcmshutdown.sh    horcmshutdown.
Title                            File name, Location             Command file
Pair split/suspend               \HORCM\etc\pairsplit.exe        pairsplit
Pair resynchronization           \HORCM\etc\pairresync.exe       pairresync
Event waiting                    \HORCM\etc\pairevtwait.exe      pairevtwait
Error notification               \HORCM\etc\pairmon.exe          pairmon
Volume checking                  \HORCM\etc\pairvolchk.exe       pairvolchk
Pair configuration confirmation  \HORCM\etc\pairdisplay.exe      pairdisplay
RAID scanning                    \HORCM\etc\raidscan.exe         raidscan
RAID activity reporting          \HORCM\etc\raidar.
Connection confirmation          \HORCM\usr\bin\raidqry.exe      raidqry
Oracle validation setting        \HORCM\usr\bin\raidvchkset      raidvchkset
Oracle validation confirmation   \HORCM\usr\bin\raidvchkdsp      raidvchkdsp
Oracle validation confirmation   \HORCM\usr\bin\raidvchkscan     raidvchkscan
Tool                             \HORCM\Tool\chgacl.exe          chgacl
Windows NT/2000/2003 command notes: • \HORCM\etc\ commands are used when issuing commands interactively from the console.
Log files RM and RM commands write internal logs and trace information to help the user: • identify causes of RM failures • keep records of the transition history of pairs. Log file format Log files provided include the startup log file, error log file, trace file, and core file, which are located as shown below. HOST denotes the host name, and PID denotes the process ID within that host.
UNIX systems
  Startup log files
    HORCM startup log: $HORCM_LOG/horcm_HOST.log
    Command log: $HORCC_LOG/horcc_HOST.log
Windows NT/2000/2003 systems
  Startup log files
    HORCM startup log: $HORCM_LOG\horcm_HOST_log.txt
    Command log: $HORCC_LOG\horcc_HOST_log.txt
  Error log file
    HORCM error log: $HORCM_LOG\horcmlog_HOST\horcm_log.txt
  Trace files
    HORCM trace: $HORCM_LOG\horcmlog_HOST\horcm_PID_trc.txt
    Command trace: $HORCM_LOG\horcmlog_HOST\horcc_PID_trc.txt
  Core files
    HORCM core: $HORCM_LOG\core_HOST_PID\core
    Command core: $HORCM_LOG\core_HOST_PID\core
MPE/iX systems
  Startup log files
    HORCM startup log: $HORCM_LOG/horcm_HOST.log
Log directories The log directories for the RM instance specify the command log files using the environment variables: $HORCM_LOG A trace log file directory specified using the environment variable HORCM_LOG. The HORCM (RM) log file, trace file and core file (as well as the command trace file and core file) are stored in this directory. If you do not specify an environment variable, /HORCM/log/curlog becomes the default.
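Redirecting an instance's logs to a dedicated directory is a matter of exporting the variables before starting the instance. A minimal sketch (the /tmp/rmlogs paths and instance number 0 are example values, not defaults):

```shell
# Point RM instance logs at per-instance directories (example paths).
export HORCMINST=0
export HORCM_LOG=/tmp/rmlogs/horcm$HORCMINST
export HORCC_LOG=/tmp/rmlogs/horcc$HORCMINST
mkdir -p "$HORCM_LOG" "$HORCC_LOG"
# horcmstart.sh, run next, would write its log, trace, and core files
# beneath these directories instead of /HORCM/log/curlog.
```

Keeping one directory per instance makes it easier to correlate a command trace with the instance that produced it.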
User-created files When constructing the RM environment, the system administrator should make a copy of the HORCM_CONF file, edit the file for the system environment, and save the file: UNIX /etc/horcm.conf or /etc/horcmn.conf where n is the instance number. Windows NT/2000/2003 \WINNT\horcm.conf or \WINNT\horcmn.conf where n is the instance number. MPE/iX /etc/horcm.conf or /etc/horcmn.conf where n is the instance number. OpenVMS sys$posix_root : [etc]horcmn.conf where n is the instance number.
User-settable environment variables When activating RM or initiating a command, you can specify any of the following environment variables: • RM Environment Variables • RM command Environment Variables • RM instance Environment Variables • environment variable for BC commands RM environment variables $HORCM_CONF Specifies the name of the RM configuration file. Default: /etc/horcm.conf $HORCM_LOG Specifies the name of the RM log directory.
environment variable, data is written to the trace file in nonbuffered mode. If you do not specify it, data is written in buffered mode. The trace mode of RM can be changed in real time by using the horcctl –c –b command. $HORCM_TRCUENV This variable specifies whether to use the trace control parameters (TRCLVL and TRCBUF trace types) as they are when a command is issued. When you specify this environment variable, the latest set trace control parameters are used.
$HORCC_TRCSZ Specifies the size of the command trace file in kilobytes. If you do not specify a size, the default trace size for CA commands is used. This default trace size is the trace size used by CA. The default trace size for CA commands can be changed in real time by using the horcctl –d –s command. $HORCC_TRCLVL Specifies the command trace level (between 0 and 15). If you specify a negative value, the trace mode is canceled.
RM protection The RAID Manager protection facility restricts RM volume control operations to volumes that: • the host is allowed to see, with or without host-based LUN security (Secure LUN XP) • are listed in the RM configuration file. To avoid inconsistency, RM security cannot be controlled within RM itself. RM security is determined by command device definition within the SVP, Remote Console, or via SNMP.
Protection facility specification Only permitted volumes and volumes visible to the host can be listed in the horcm.conf file. A volume must fulfill two requirements to be considered “permitted” by the RM protection facility: • It is host viewable (for example, with the HP supplied Inquiry tool). • It is a volume listed in the horcm.conf file. RM manages volume mirror descriptors (MU# for CA, BC0/BC1/BC2) as a unit.
Permission command To allow initial access to a protected volume, the Permission command must be executed. This command is the –find inst option of raidscan; see “raidscan” (page 202). It is executed by /etc/horcmgr automatically upon RM startup. With security enabled, RM permits operations on a volume only after the Permission command is executed. Operations target volumes listed in the horcm.conf file. The command compares volumes in the horcm.conf file to all host viewable volumes.
MPE/iX Not supported (only SCSI connections). MPE/iX can protect volumes only by using the protection mode of RM. OpenVMS Not supported. If a command device is set to enable protection mode, it is ignored by RM. Command device configuration You can use both protected and unprotected modes in a single array by enabling or disabling the protection facility of each command device. As a minimum configuration, it is possible to have two command devices, one with protection enabled and the other disabled.
[Figure: horcm.conf on HOST1, listing the volumes for Grp1 and Grp3 (Ora1)]
LUN visibility from one host configuration
The following figure shows a one-host protection mode configuration sharing one array. Ora1 and Ora2 control operations are rejected because Grp2 and Grp4 are not visible from HOST1. If HOST1 uses a command device with protection set to OFF at creation time, then the Ora1 and Ora2 volume pairs can be controlled. CM* represents a command device with protection ON.
[Figure: Horcm0.conf and Horcm1.conf on HOST1, listing the volumes for Grp1 and Grp3 (Ora1)]
Commands controlled by RM protection The following commands are controlled by RM protection: • horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairvolchk, pairevtwait, pairsyncwait When these commands are issued to non-permitted volumes, RM rejects the request with an error code of EX_ENPERM. • pairdisplay The pairdisplay command has no RM protection restrictions. Using this command, you can confirm whether volumes are permitted or not.
• raidscan –find inst RM recognizes permitted volumes as a result of executing raidscan –find inst (the Permission command). This command issues a SCSI inquiry to the specified device file to get the array Ser# and volume LDEV# from the XP array. Then, it cross checks volumes in the horcm.conf file against host viewable volumes and stores the result within the RM instance. The following example shows the relationship between device files and the horcm.conf file.
Naming the $HORCMPERM file UNIX systems The $HORCMPERM variable is set by default to either /etc/horcmperm.conf or /etc/horcmperm*.conf (where * is the RM instance number). Example (HP-UX) 'cat $HORCMPERM | /HORCM/usr/bin/raidscan -find inst' # The following is an example to show permitted # Volume groups.
Windows NT/2000/2003 systems The $HORCMPERM variable is set by default to either \WINNT\horcmperm.conf or \WINNT\horcmperm*.conf (where * is the instance number). 'type $HORCMPERM | x:\HORCM\etc\raidscan.exe -find inst' # The following is an example to permit DB Volumes. # Note: a numerical value is interpreted as Harddisk#.
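Given the defaults above, resolving the permission file name for an instance can be sketched as follows. The helper is hypothetical; only the default path patterns and the HORCMPERM override come from the text.

```python
def horcmperm_path(inst=None, windows=False, env=None):
    """Sketch: default $HORCMPERM file for an RM instance.
    An explicit HORCMPERM environment setting wins; otherwise the
    platform default, with the instance number appended, is used."""
    if env and "HORCMPERM" in env:
        return env["HORCMPERM"]
    base = "\\WINNT\\horcmperm" if windows else "/etc/horcmperm"
    return base + ("" if inst is None else str(inst)) + ".conf"

print(horcmperm_path())                 # /etc/horcmperm.conf
print(horcmperm_path(0))                # /etc/horcmperm0.conf
print(horcmperm_path(2, windows=True))  # \WINNT\horcmperm2.conf
```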
If no $HORCMPERM file exists, then the following commands can be manually executed to permit the use of all volumes the host is currently allowed to see (LUN security products may or may not be in place).
Important This registration process has a cost: it is executed automatically upon /etc/horcmgr startup, without checking for protection mode, in order to validate the –fd option. Depending on how many devices a host has, permitted volume registration slows horcmstart.sh (RM startup), although the RM daemon runs as usual. If you want RM to start up faster in non-protection mode, set $HORCMPERM to a zero-byte (SIZE 0) dummy file or set HORCMPERM=MGRNOINST.
(HP-UX, Linux, Solaris, AIX, Tru64 UNIX, Digital UNIX, DYNIX/ptx, MPE/iX)
'cat $HORCMPERM | /HORCM/usr/bin/raidscan -find inst'
(Windows NT/2000/2003)
'type $HORCMPERM | x:\HORCM\etc\raidscan.exe -find inst'
• If no RM permission file exists, then /etc/horcmgr executes this built-in command to permit all volumes owned by the host.
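The startup fast path described in the Important note can be sketched as a decision helper. The function is hypothetical; the two fast-path triggers (a zero-byte $HORCMPERM dummy file, or HORCMPERM=MGRNOINST) come from the text.

```python
def should_register(horcmperm, file_size):
    """Sketch of the startup decision: skip permitted-volume
    registration when $HORCMPERM is MGRNOINST or names a zero-byte
    dummy file. Helper and argument names are illustrative."""
    if horcmperm == "MGRNOINST":
        return False          # registration explicitly disabled
    if file_size == 0:
        return False          # SIZE 0 dummy file: fast startup
    return True

print(should_register("MGRNOINST", None))           # False
print(should_register("/etc/horcmperm.conf", 0))    # False
print(should_register("/etc/horcmperm.conf", 120))  # True
```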
Using RAID Manager on a Windows 2000/2003 system with “user” system privileges By default, RAID Manager requires Windows system administrator privileges to execute RM commands. This is because RAID Manager needs to open the command device directly as a physical drive. This section describes how to use chgacl.exe to run RAID Manager commands without Administrator system privileges.
Example To add a user name to one or more physical drives: 1. Enter: chgacl /A: … Example 1 chgacl /A:RMadmin \\.\PHYSICALDRIVE10 Example 2 chgacl /A:RMadmin \\.\PHYSICALDRIVE10 \\.\PHYSICALDRIVE9 Allowing a user to use the “-x mount/umount” option If the user needs to use the “-x mount/umount” option of RM commands (for example, raidscan -x mount Z: \vol2), add the user name to the volume access control list. By default, chgacl.exe grants read, write and execute permissions.
Example To add a user name to one or more volumes: 1. Enter: chgacl /A: … Example chgacl /A:RMadmin \\.\Volume{7dd3ba6b-2f98-11d7-a48a-806d6172696f} You can also use the \\?\Volume{GUID} format used by Windows commands such as mountvol. Allowing a user to use the “-x portscan” option If the user needs to use the “-x portscan” option of RM commands (for example, raidscan -x portscan port0,20), add the user name to the SCSI port access list.
Example To add a user name to one or more SCSI ports: 1. Enter: chgacl /A: … Example 1 chgacl /A:RMadmin Scsi0 Example 2 chgacl /A:RMadmin Scsi0 Scsi1 Scsi2 Allowing different levels of access to a Device Object chgacl.exe allows you to set a combination of read, write, execute or “all” access rights to a Device Object. If no permission parameter is given, chgacl grants “all” access to the Device Object.
Deleting a user name from the access control list of the Device Object Caution: The first two commands below may delete the user’s privileges to access the system drive (C:\). To delete a user name from all physical drives: 1. Enter: chgacl /D: Phys To delete a user name from all volumes: 1. Enter: chgacl /D: Volume To delete a user name from one or more Device Objects: 1. Enter: chgacl /D: …
You can redirect the output of the batch file by adding redirection in the batch file. Alternately, you can specify redirection in the Scheduled Task item’s Run field in advanced properties (for example, C:\HORCM\add_RM_user.bat > C:\HORCM\logs\add_RM_user.log). Note: If you change the Windows system administrator’s password, this scheduled task will not execute. You will need to modify the task by entering the new password.
Example 2 Starting two instances:
Restrictions
Restriction 1. A user without system administrator privilege is not allowed to use the Windows mountvol command (although some current Windows 2000 revisions allow a user to mountvol a directory to a volume). Therefore, a user cannot execute the “directory mount” option of RM commands using the mountvol command.
Restriction 2. An administrator started HORCM instance 5. User A with “user” privileges will not be able to use any RAID Manager commands with HORCM instance 5. This is because even if user A has been added to the access control list for the devices, user A’s RM commands cannot communicate with a HORCM instance that was started by another user with different privileges. RM version 01.15.02 and later allow the user to connect to HORCM by setting the “HORCM_EVERYCLI” environment variable.
c:\horcm\tool\chgacl /A:RMadmin \\.\Volume{7dd3ba6b-2f98-11d7-a48a-806d6172696f}
rem (3) Allow a user to use the "-x portscan" option of RM commands
rem (3a) Add a user name to the access list of ALL SCSI ports
rem usage: chgacl /A: Scsi
c:\horcm\tool\chgacl /A:RMadmin Scsi
rem (3b) Add the user name to the access list of one or more SCSI ports
rem usage: chgacl /A: ...
LUN Security Extension HP StorageWorks LUN Security XP Extension is an optional feature that prevents hosts from writing to protected volumes. It sets a protection attribute for a specified LU, similar to the Oracle data validation feature. Guarding options RAID Manager supports the following guarding options:
• Hiding from inquiry commands. RM conceals the target volumes from SCSI Inquiry commands by responding “unpopulated volume” (0x7F) as the device type.
• “SIZE 0” volume.
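The hiding option above can be sketched as an inquiry-response filter. The device-type code 0x7F (“unpopulated volume”) comes from the text; the helper function and the sample LDEV numbers are illustrative.

```python
# Sketch of the "hiding" guard: a guarded volume answers a SCSI Inquiry
# with peripheral device type 0x7F ("unpopulated volume"), so the host
# does not treat it as a usable disk. LDEV numbers are illustrative.

DISK, HIDDEN = 0x00, 0x7F   # 0x00 = direct-access device, 0x7F = hidden

def inquiry_device_type(ldev, guarded_ldevs):
    """Return the device type a guarded array would report."""
    return HIDDEN if ldev in guarded_ldevs else DISK

guarded = {256, 257}
print(hex(inquiry_device_type(256, guarded)))  # 0x7f - concealed volume
print(hex(inquiry_device_type(18, guarded)))   # 0x0  - normal disk
```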
raidvchkdsp. This command shows the guarding parameters for specified volumes, based on the RM configuration file. (page 243) raidvchkscan. This command shows the guarding parameters for specified volumes, based on the raidscan command. (page 250) Notes and Restrictions LUN Security Extension has the following restrictions.
Protect (read-only) mode. LUN Security Extension volumes must use Basic disk only. License The LUN Security Extension license key must be installed on the disk array. Identifying Open LDEV Guard volumes The inquiry page identifies LUN Security Extension volumes so the user does not use them as normal volumes. Use inqraid -fl with the -CLI option.
HP StorageWorks Disk Array XP RAID Manager: User’s Guide
4 RAID Manager command reference This chapter describes the function and syntax for all RM commands.
General commands
RM Command      Description
horcctl         Changes and displays the RM internal trace and control parameters.
horcmshutdown   Stops RM.
horcmstart      A shell script that starts RAID Manager.
horctakeover    (CA sync/async only) The host executing horctakeover takes ownership of a pair.
inqraid         Displays array information.
mkconf          Makes a configuration file.
paircreate      Creates a pair.
paircurchk      (CA sync/async only) Checks the consistency of the data on the secondary volume.
Windows NT/2000/2003 commands
Command      Description                                                        Page
drivescan    Displays the relationship between the hard disk number and physical drive.  215
env          Displays an environment variable.                                  217
findcmddev   Searches for the command device.                                   218
mount        Mounts a specified device.                                         220
portscan     Displays the physical device on a designated port.                 223
setenv       Sets an environment variable.                                      225
sleep        Suspends execution.
Data integrity check commands
Command        Description                                                                          Page
raidvchkset    Sets the parameters for validation checking on the specified volumes.                236
raidvchkdsp    Displays the parameters for validation checking on the specified volumes, based on the RM configuration file.  243
raidvchkscan   Displays the parameters for validation checking on the specified volumes, based on the raidscan command.       250
horcctl Change and display RM internal trace and control parameters Description The horcctl command is used for maintenance (except for the –S, –D, –C, –ND, –NC, and –g arguments) and troubleshooting. When it is issued, the internal trace control parameters of the RM manager and RM commands are changed and displayed. If the arguments –l level, –b m, –s size(KB), or –t type are not specified, the current trace control parameters are displayed.
Level 4 is the default setting and must not be changed unless directed by an HP service representative. Setting a trace level other than 4 can impact problem resolution if a program failure occurs. Levels 0 to 3 are for troubleshooting. When a change option to the trace control parameter is specified, a warning message is displayed, and the command enters interactive mode. –b m Sets the trace mode. m is y (buffered mode) or n (synchronous mode).
array, check the RM command device name before using this argument. –C Changes and displays the RM command device being used by the RM. If the command device is blocked due to the online maintenance (microprogram replacement) of the disk array, check the RM command device name before using this argument. By using this argument again after completion of the online maintenance (microprogram replacement), the previous command device is reinstated.
horcmshutdown Stop RM instances Description Syntax The horcmshutdown command is an executable for stopping RM instances. horcmshutdown.sh [ inst. . . ] horcmshutdown.exe [ inst. . . ] Argument inst Indicates an instance number corresponding to the RM instance to be shut down. When omitted, the command uses the value stored in the HORCMINST environment variable.
horcmstart Start RAID Manager instance Description Syntax The horcmstart command is an executable that starts RM. If RM instance numbers are specified, this executable sets environment variables (HORCM_CONF, HORCM_LOG, HORCM_LOGS) and starts the RM instances. HP-UX: horcmstart.sh [ instance . . . ] Windows NT/2000/2003: horcmstart.exe [ instance . . . ] MPE/iX: MPE/iX POSIX cannot launch a daemon process from a POSIX shell.
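How horcmstart derives per-instance settings can be sketched as below. The text only states that HORCM_CONF, HORCM_LOG, and HORCM_LOGS are set from the instance number; the exact file and directory patterns used here are assumptions for illustration.

```python
def instance_env(inst=None):
    """Sketch: per-instance environment variables set by horcmstart.
    The path layout is an assumption for illustration; only the three
    variable names come from the guide."""
    n = "" if inst is None else str(inst)
    return {
        "HORCM_CONF": f"/etc/horcm{n}.conf",
        "HORCM_LOG": f"/HORCM/log{n}/curlog",
        "HORCM_LOGS": f"/HORCM/log{n}/tmplog",
    }

print(instance_env(9)["HORCM_CONF"])  # /etc/horcm9.conf
print(instance_env()["HORCM_CONF"])   # /etc/horcm.conf
```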
horctakeover Take ownership of a pair Description CA only The horctakeover meta command (which contains many sub-commands) is used in conjunction with HA software, such as MC/Service Guard, and CA. It selects and executes one of four actions, depending on the state of the paired volumes: nop-takeover, swap-takeover, SVOL-takeover, or PVOL-takeover. See “Takeover-switch function” on page 346 for actions taken by horctakeover.
–d[g] seq# LDEV# [ MU# ] Searches the RM instance configuration file (local instance) for a volume that matches the specified sequence number (seq#) and LDEV. If a volume is found, a command is executed on the paired logical volume (–d) or group (–dg). If the volume is contained in two groups, the command is executed on the first volume encountered. If MU# is not specified, it defaults to 0. seq# is the array serial number. seq# LDEV# can be specified in hexadecimal (by addition of 0x) or decimal.
Returned Values –t timeout (Asynchronous paired volumes only) Specifies the maximum time in seconds to wait for a resynchronization of P-VOL to S-VOL delta data. If the timeout occurs, EX_EWSTOT is returned. This option is required for an asynchronous paired volume; it has no effect for synchronous paired volumes. –z Makes this command enter interactive mode. –zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode.
Error Codes The table below lists specific error codes for the horctakeover command.
inqraid Display array information Description HP-UX, Linux, Solaris, AIX, and MPE/iX only The inqraid command displays the relationship between a host device special file and an actual physical drive in the disk array. Syntax inqraid { –h | –quit | –inqdump | –f[x][p][l][g][w][h][c] | –find[c] | special_file | –CLI [W|WP|WN] | –sort[–CM][–CLIB] | –inst | –gvinf | –svinf | –fv | –fp | –fg | –gplba } Arguments –h Displays Help/Usage. –quit Terminates interactive mode and exits the command.
-fh Specifies CA/CA Journal for the bitmap pages when used with -sort -CLIB.
-fc Used to calculate the bitmap page of cylinder size for HORC.
# ls /dev/rdsk/* | inqraid -CLI
DEVICE_FILE PORT SERIAL LDEV CTG C../B/..
c1t2d10s2 CL2-D 62500
c1t2d11s2 CL2-D 62500
–find[c]
-fw
The Seq# and LDEV# are provided via the SCSI Inquiry command. This option requires the HORCMINST variable to be defined. special_file Specifies a device special file name as an argument to the command. If no argument is specified, the command waits for input from STDIN. For STDIN file specification information, see Appendix D, “STDIN file formats” . –CLI Specifies structured output for Command Line Interface parsing. The column data is aligned in each row.
STDINs or special files are specified as follows: • HP-UX: /dev/rdsk/*, Solaris: /dev/rdsk/*s2 or c*s2, • Linux: /dev/sd... or /dev/rd... ,/dev/raw/raw*. • zLinux: /dev/sd... or /dev/dasd… or /dev/rd... ,/dev/raw/raw*. • MPE/iX: /dev/...
Normally, this option is used to save the LUN signature and volume layout information after it has been created (and before a paircreate). –svinf[=PTN] (Windows NT/2000/2003 only) Uses SCSI Inquiry to get the Serial# and LDEV# created by –gvinf of the RAID for the target device, and sets the signature and volume layout information in file VOLssss_llll.ini to the target device. This option will complete correctly even if the Harddisk# is changed by the operating system.
SSID Displays the Sub System ID of an LDEV in the disk array. CTGID Displays the CT group ID when the LDEV has been specified as an async-CA P-VOL or S-VOL. CHNO (Linux only) Displays the Linux channel number of the device adapter. TID (Linux only) Displays the target ID of the hard disk connected to the device adapter port. See Appendix C, “Fibre Channel addressing” . LUN (Linux only) Displays the logical unit number of the hard disk that connects on the device adapter port.
R:Group Displays the physical position of an LDEV as determined by LDEV mapping in the disk array.
LDEV Mapping: R: / Group
• RAID group: RAID level (1 = RAID1, 5 = RAID5, 6 = RAID6) / RAID group number - sub number
• SnapShot S-VOL: S (SNAPS) / pool ID number
• Unmapped: U (UNMAP) / 00000
• External LUN: E (External) / group number
PRODUCT_ID Displays the product ID field in the STD inquiry page. PWWN Displays the port WWN. NWWN Displays the Node WWN.
Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# horc1 dev00(L) (CL2-J , 0, 0-0)61456 192..S-VOL SSUS,----193 ->/dev/rdsk/c23t0d0 Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# horc1 dev10(L) (CL2-J , 2, 3-0)61456 209..S-VOL SSUS,----206 ->/dev/rdsk/c23t2d3 M M - Examples using the –findc option: HP-UX # echo /dev/rdsk/c23t0d0 /dev/rdsk/c23t2d3 | .
#ioscan -fun | grep rdsk | .
D:\HORCM\etc>inqraid $Phys -gvinf -CLI
\\.\PhysicalDrive0: # Harddisk0 -> [VOL61459_448_DA7C0D91] [OPEN-3 ]
\\.\PhysicalDrive1: # Harddisk1 -> [VOL61459_449_DA7C0D92] [OPEN-3 ]
\\.\PhysicalDrive2: # Harddisk2 -> [VOL61459_450_DA7C0D93] [OPEN-3 ]
An example using the –svinf=PTN follows. This example writes signature/volume information to LUNs identified by “Harddisk” in the output of the pairdisplay command.
DEVICE_FILE                                             PORT  SERIAL LDEV CTG H/M/12 SSID R:Group PRODUCT_ID
Volume{cec25efe-d3b8-11d4-aead-00c00d003b1e}\Vol3\Dsk0  CL2-D 62496  256  -                       OPEN-3-CVS-CM
An example using the –fp option: # ls /dev/rdsk/c57t4* | .
Solaris # ls /dev/rdsk/* | ./inqraid /dev/rdsk/c0t2d1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3 CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL] RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3 /dev/rdsk/c0t4d0 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM ] ] MPE/iX shell/iX>ls /dev/* | .
Tru64
# ls /dev/rdisk/dsk* | ./inqraid
/dev/rdisk/dsk10c -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP] [OPEN-3 ]
CA = P-VOL BC[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdisk/dsk11c -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP] [OPEN-3-CM]
DYNIX/ptx
# dumpconf -d | grep sd | .
mkconf Make a configuration file Description The mkconf command is used to make a configuration file from a special file (raw device file) provided via STDIN. It executes the following steps: 1. Make a configuration file containing only the HORCM_CMD section by executing inqraid –sort –CM –CLI. 2. Start a RM instance without a HORCM_DEV and HORCM_INST section, which is just enough to execute the raidscan command for the next step. 3.
Example –m MU# Specifies the mirror descriptor for BC/SnapShot volumes. CA volumes do not specify a mirror descriptor. –i inst# Specifies the instance number for RM. –s service Specifies the service name to be used in the newly created configuration file. If not specified, 52323 will be used as a default. –a Used to add a new volume group within the newly created configuration file.
HORCM Shutdown inst 9 !!!
Please check '/tmp/test/horcm9.conf', '/tmp/test/log9/curlog/horcm_*.log', and modify 'ip_address & service'.
# ls
horcm9.conf log9
# vi *.conf
Configuration file:
# Created by mkconf.
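The steps mkconf performs can be sketched as assembling the configuration sections in order. The section names (HORCM_MON, HORCM_CMD, HORCM_DEV, HORCM_INST) and the default service 52323 come from the guide; the field layout, the poll/timeout values, and the sample device entries are illustrative assumptions.

```python
def make_horcm_conf(ip, service, cmd_devs, devs, remote_ip):
    """Sketch: assemble a minimal horcm.conf text. mkconf builds
    HORCM_CMD first, then appends HORCM_DEV/HORCM_INST entries after
    scanning the volumes with raidscan. Field values are illustrative."""
    lines = ["HORCM_MON", f"{ip} {service} 3000 1000", ""]      # poll/timeout assumed
    lines += ["HORCM_CMD"] + cmd_devs + [""]
    lines += ["HORCM_DEV"] + [" ".join(d) for d in devs] + [""]
    groups = sorted({d[0] for d in devs})                        # one INST line per group
    lines += ["HORCM_INST"] + [f"{g} {remote_ip} {service}" for g in groups]
    return "\n".join(lines)

conf = make_horcm_conf(
    "127.0.0.1", "52323",                    # 52323 is mkconf's default service
    ["/dev/rdsk/c23t0d1"],                   # command device (illustrative)
    [("VG00", "oradb1", "CL2-J", "0", "1")], # group, pair vol, port, TID, LU
    "127.0.0.1",
)
print(conf)
```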
paircreate Create a pair relationship Description The paircreate command establishes a primary to secondary pair relationship between volumes. This command generates a new paired volume from SMPL volumes. The default action pairs a logical group of volumes as defined in the RM instance configuration file. HP-UX Caution Before issuing this command, ensure that the secondary volume is not mounted on any HP-UX system.
(HP-UX, Linux, Solaris, MPE/iX, AIX, and Windows NT/2000/2003 only) Searches the RM configuration file (local instance) for a volume that matches the specified raw device. If a volume is found, the command is executed on the paired volume (–d) or group (–dg). This option is effective without specification of the –g group option. If the specified raw_device is listed in multiple device groups, this applies to the first one encountered.
Maximum number:
XP256   16 (0-15)
XP512   64 (0-63)
XP1024  128 (0-127)
XP10000 256 (0-255)
XP12000 256 (0-255)
The CTGID option forces creation of paired volumes for a given CTGID group. –g group Specifies the group to be paired; the group name is specified in the HORCM_DEV section of the RM instance configuration file. The command executes for the entire group unless the –d pair_vol argument is specified. –h Displays Help/Usage, and version information.
RAID CA Volumes Default Bitmap table Others Don't care Cylinder If there is not enough shared memory to maintain track level information, error EX_CMDRJE is returned. dif (BC only) Use at paircreate to cause the S-VOL bitmap table (used to create a differential backup) to designate all tracks changed since paircreate.
–q Terminates interactive mode and exits this command. –split (BC/SnapShot only) Splits the paired volume after completing the pairing process. -split works differently based on the microcode version:
• XP256 microcode 52-46-xx or over
• XP512 microcode 01-10-00/xx or over
• XP1024/XP128, XP10000, XP12000
This option will return immediately with the PVOL_PSUS and SVOL_COPY state changes. The SVOL state will be changed to SVOL_SSUS after all data is copied.
Returned Values –zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode. –jp ID (CA-Journal only) Specify a journal group ID for a P-VOL. –js ID (CA-Journal only) Specify a journal group ID for an S-VOL. –pid (SnapShot only) Identify the SnapShot pool with a pool ID. LDEVs in a group that has a PID belong to the specified SnapShot pool. If a specific PID is not given, the LDEVs will be designated with the default pool ID (0).
Create a BC group pair out of the group that contains the seq# 35611 and LDEV 35. Use the volumes defined by the local instance as the P-VOLs: paircreate –d 35611 35 –vl In this example, all volumes that are part of the group that contains this LDEV are put into the PAIR state. Because MU# was not specified, it defaulted to 0.
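The consistency-group limits listed under the CTGID argument can be captured in a small lookup, useful for validating a CTGID before issuing paircreate. The table values come from the guide; the helper itself is illustrative.

```python
# Maximum number of consistency groups per array model, from the CTGID
# table above; valid IDs run from 0 to max-1.
CTG_MAX = {"XP256": 16, "XP512": 64, "XP1024": 128,
           "XP10000": 256, "XP12000": 256}

def ctgid_valid(model, ctgid):
    """Check a CTGID against the model's documented range."""
    return 0 <= ctgid < CTG_MAX[model]

print(ctgid_valid("XP512", 63))   # True  - last valid ID (0-63)
print(ctgid_valid("XP512", 64))   # False - out of range
```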
Error Codes The table lists specific error codes for the paircreate command.
paircurchk Check S-VOL data consistency Description CA only The paircurchk command displays pairing status in order to allow the operator to verify the completion of pair generation or pair resynchronization. This command is also used to confirm the paired volume connection path (physical link of paired volume to the host). The granularity of the reported data is based on the volume or group.
This option is effective without specification of the –g group option. If the specified LDEV is listed in multiple device groups, this applies to the first one encountered. seq # LDEV # can be specified in hexadecimal (by the addition of 0x) or decimal. –g group Specifies a group name in the RM instance configuration file. The command executes for the entire group unless the –d pair_vol argument is specified. –h Displays Help/Usage and version information.
Output Fields Group The group name (dev_group) described in the configuration definition file. Pair vol The paired volume name (dev_name) within a group described in the configuration definition file. Port targ# lun# The port number, target ID, and LUN described in the configuration definition file. LDEV# The LDEV number. Volstat The attribute of a volume. Status The status of the paired volume. Fence The fence level of the paired volume. To be The data consistency of the secondary volume.
pairdisplay Confirm pair configuration Description The pairdisplay command displays the pairing status of a volume or group of volumes. This command is also used to confirm the configuration of paired volumes. Volumes are defined in the HORCM_DEV section of the RM instance configuration files.
configuration file (local instance) for a volume that matches the specified raw_device. If a volume is found, the command is executed on the paired volume (–d) or group (–dg). If the volume is contained in two groups, this command executes for the first volume encountered only. If MU# is not specified, it defaults to 0. –d[g] seq# LDEV# [ MU# ] Searches the RM instance configuration file (local instance) for a volume that matches the specified sequence number (seq#) and LDEV.
PFUS) and confirms SSWS state as an indication of SVOL_SSUS-takeover. This option is also used to display the copy operation progress, the Side File percentage or the BITMAP percentage for asynchronous pair volumes. –fd displays the relationship between the Device_File and the paired volumes, based on the group (as defined in the local instance configuration definition file).
JID. The journal group ID for the P-VOL or S-VOL. If the volume is not a CA-Journal volume, “-” will be displayed. AP. The number of active paths to the P-VOL. If this is not known, “-” will be displayed. CM. Copy mode. “N” is for non-SnapShot. “S” is for SnapShot. “C” is for cruising copy. EM. Displays the external connection mode. H = a mapped E-LUN hidden from the host.
The –m mode option cannot be specified. –l Displays the paired volume status of the local host (which issues this command). –m mode Displays the status of mirror descriptors for specified pair logical volumes and volume pair status. The cascading volume mode option can be designated as cas or all. The cas option displays only MU#0 (plus used MU#s). The all option displays all MU#s whether used or not. The mode option displays all cascading mirrors (MU#1-4).
Example
# pairdisplay -g VG01 -v jnl
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK) Seq#  Nnm LDEV#
001 0  2   PJNN 4  21   43216fde 30    512345    62500 2   265
002 0  2   SJNN 4  95   3459fd43 52000 512345    62538 3   270
# pairdisplay -g VG01 -v jnlt
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK)
001 1  2   PJNN 4  21   43216fde 30    512345
002 1  2   SJNN 4  95   3459fd43 52000 512345
# pairdisplay -g VG01 -v jnl -FCA 1
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK)
003 1  2   PJNN 4  21   43216fde 30    512345
Output Fields
P-LDEV# Displays the LDEV# of a primary pair partner.
M = “W”
P-VOL and PSUS state: indicates that the S-VOL is suspending with R/W enabled.
S-VOL and SSUS state: indicates that the S-VOL has been altered since entering the SSUS state.
M = "-"
P-VOL and PSUS state: indicates that the S-VOL is suspending with Read only.
S-VOL and SSUS state: indicates that the S-VOL has NOT been altered since entering the SSUS state.
M = "N"
COPY/RCPY/PAIR/PSUE state: indicates that the volume is Read-disabled.
The following is an arithmetic expression using the High Water Mark (HWM) as 100% of a side file space: HWM (%) = 30 / Side File space (30 to 70) * 100
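The arithmetic expression above can be written directly as a function; the 30 to 70 range for the side file space setting is taken from the expression itself.

```python
def high_water_mark(sidefile_space):
    """HWM as a percentage of the side file space, per the expression
    above: HWM(%) = 30 / sidefile_space * 100, where the side file
    space setting ranges from 30 to 70 (%)."""
    if not 30 <= sidefile_space <= 70:
        raise ValueError("side file space must be 30-70")
    return 30 / sidefile_space * 100

print(high_water_mark(30))  # 100.0 - smallest side file: HWM is the whole space
print(high_water_mark(60))  # 50.0
```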
Examples (BC Only) # pairdisplay –g oradb Group Pair Vol(L/R) (Port#,TID,LU-M), Seq#, LDEV#...P/S, Status, Seq#, P-LDEV# M oradb oradb1(L) (CL1-A, 1, 1-0) 30053 18 ...P-VOL PAIR 30053 19 oradb oradb1(R) (CL1-D, 1, 1-0) 30053 19 ...
The following example uses –m cas. This option displays the cascaded volumes at either end of the designated CA pair that are assigned either BC bitmaps (LU0-0) or CA bitmaps (LU0). # pairdisplay -g oradb –m cas Group PairVol(L/R) (Port#,TID,LU-M), oradb oradev1(L) (CL1-D , 3, 0-0) oradb oradev1(L) (CL1-D , 3, 0) oradb1 oradev11(R) (CL1-D , 3, 2-0) oradb2 oradev21(R) (CL1-D , 3, 2-1) oradb oradev1(R) (CL1-D , 3, 2) Seq#, 30052 30052 30053 30053 30053 LDEV#.P/S, 266...SMPL 266...P-VOL 268...P-VOL 268...
# pairdisplay -g URA -CLI -fd -m all Group PairVol L/R Device_File M Seq# LDEV# MURA MURA_001 L c1t2d7s2 0 62500 263 - L c1t2d7s2 1 62500 263 - L c1t2d7s2 2 62500 263 URA URA_001 L c1t2d7s2 - 62500 263 - L c1t2d7s2 h1 62500 263 URA URA_001 R c1t2d8s2 0 62500 264 - R c1t2d8s2 1 62500 264 - R c1t2d8s2 2 62500 264 URA URA_001 R c1t2d8s2 - 62500 264 - R c1t2d8s2 h1 62500 264 P/S Status Seq# P-LDEV# M S-VOL PAIR 262 SMPL - SMPL - SMPL - SMPL - SMPL - SMPL - SMPL - SMPL - SMPL - - 155
pairevtwait Wait for event completion Description The pairevtwait command waits for completion of the paircreate and pairresync commands. It also checks the status of those commands. It waits (sleeps from the viewpoint of the process) until the paired volume status becomes identical to a specified status. When the desired status has been achieved, or the timeout period has elapsed, the command exits with the appropriate return code. CA Operation The figure below shows the usage of the –FCA option.
BC Operation The figure below shows the usage of the –FBC option. In the example, the command tests the status of the intermediate S-VOL/P-VOL (MU#1) through a specified pair group in a CA environment.
[Figure: CA environment; pairevtwait -g ora -s psus -t 10 -FBC 1 issued against Ora(CA) (PVOL, Seq#30052) and its cascaded BC groups Oradb1(BC) and Oradb2(BC) (SVOLs, Seq#30053)]
Syntax pairevtwait –h pairevtwait { –g group | –d pair_vol | –d[g] raw_device [ MU# ] | –d[g] seq# LDEV# [ MU# ] | –FCA [ MU# ] | –FBC [ MU# ] | –h | –s status .
If the volume is contained in two groups, the command is executed on the first volume encountered. If MU# is not specified, it defaults to 0. –d[g] seq# LDEV# [ MU# ] Searches the RM instance configuration file (local instance) for a volume that matches the specified sequence # and LDEV. If a volume is found, the command is executed on the paired logical volume (–d) or group (–dg). This option is effective without specification of the –g group option.
The command executes for the entire group unless the –d pair_vol argument is specified. –h Displays Help/Usage and version information. –l When this command cannot use a remote host because it is down, this option allows execution of this command by a local host only. The target volume of a local host must be SMPL or P-VOL. BC/SnapShot volumes can be specified from the S-VOL. –nomsg Used to suppress messages when this command is executed from a user program.
the interval is specified as greater than 1999999, a warning message is displayed. Returned Values –z Makes this command enter interactive mode. –zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode. This command sets one of the following returned values in exit(), which allows you to check the execution results.
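A calling script typically branches on the exit() value. The sketch below uses placeholder numeric codes, since the actual values are defined in the error-code table; only the pattern (0 for normal completion, a distinct code such as EX_EWSTOT for a timeout) reflects the text.

```python
# Sketch of how a script might act on pairevtwait's exit() value.
# The numeric codes here are invented stand-ins; consult the guide's
# error-code table for the real values (e.g. EX_EWSTOT on timeout).

NORMAL, TIMEOUT = 0, 234   # 234 is a made-up stand-in for EX_EWSTOT

def handle_exit(code):
    if code == NORMAL:
        return "pair reached the requested status"
    if code == TIMEOUT:
        return "timed out waiting; retry or investigate"
    return f"failed with error code {code}"

print(handle_exit(0))
print(handle_exit(234))
print(handle_exit(5))
```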
Error Codes The table lists specific error codes for the pairevtwait command.
pairmon Report pair transition status Description The pairmon command is sent to the RM (daemon) to report the transition of pairing status. When an error or status transition is detected, this command outputs an error message. Events exist on the pair state transfer queue for RM. Resetting an event correlates to the deletion of one or all events from the pair state transfer queue. If the command does not reset, the pair state transfer queue is maintained.
[Table: combinations of the –D, –nowait, –resevt, and –allsnd options (continued); some combinations are invalid]
–resevt When RM does not have an event, this option reports “no event” immediately. If multiple events exist, then it reports one event and resets all events.
–allsnd When RM does not have an event, this option reports “no event” immediately. If multiple events exist, then it reports all events and resets them.
Syntax pairmon { –D | –allsnd | –q | –resevt | –nowait | –s status . . .
Output Fields –h Displays Help/Usage and version information. –zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode. Group The group name (dev_group) defined in the configuration definition file. Pair vol The paired volume name (dev_name) within the group, defined in the configuration definition file. Port targ# lun# The port number, TargetID, and LUN defined in the configuration definition file. LDEV# The LDEV number.
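The queue semantics described above can be sketched with a simple deque. The helper is illustrative; it models the case where one event is reported and the reset deletes the queued events, per the description of –resevt.

```python
from collections import deque

# Sketch of the pair-state transfer queue: reporting may reset
# (delete) the queued events, depending on the reset option.
# Event strings are illustrative.

def report(queue, resevt=False):
    """Report the oldest event; with resevt, reset the queue."""
    if not queue:
        return ["no event"]
    event = queue[0]
    if resevt:
        queue.clear()   # per the text, a reset deletes one or all events
    return [event]

q = deque(["oradb1 PAIR->PSUE", "oradb2 COPY->PAIR"])
print(report(q, resevt=True))   # first event reported, queue reset
print(report(q))                # ['no event']
```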
pairresync Resynchronize a pair Description The pairresync command resumes updating the secondary volume based on the primary volume to reestablish pairing. If no data has been written in the secondary volume, differential P-VOL data is copied. If data has been written in the secondary volume, differential data from the P-VOL is copied to the S-VOL. The changes on the SVOL are overwritten. The –swap option updates the PVOL based on the SVOL so that the PVOL becomes the SVOL and the SVOL becomes the PVOL.
CA Operation The following figure shows the usage of the –FCA option. In the example, the command pairresync -g oradb1 -FCA resynchronizes a CA pair by specifying the name of a cascaded BC group (the CA group Ora spans Seq#30052 and Seq#30053; the BC groups Oradb1 and Oradb2 are cascaded from it). BC Operation The following figure shows the usage of the –FBC option. In the example, the command resynchronizes a BC pair (MU#1) by specifying the MU# and the CA group to which it is cascaded.
Syntax pairresync { –nomsg | –g group | –d pair_vol | –d[g] raw_device [ MU# ] | –d[g] seq# LDEV# [ MU# ] | –c size | –FCA [MU#] | –FBC | –h | –l | –q | –restore | –swap[s|p] | –z | –zx }
Arguments
–c size Specifies the number of tracks (1 to 15) copied in parallel. If omitted, the default is the value used at the time of paircreate.
–d pair_vol Specifies a paired volume name written in the configuration definition file. The command executes only for the specified paired volume.
–FCA [MU#] Used to resync a CA P-VOL that is also a BC P-VOL. If the –l option is specified, this option resynchronizes a cascading CA volume at the local host (near site). If the –l option is not specified, this option resynchronizes a cascading CA volume at the remote host (far site). The target CA volume must be a P-VOL, and the –swap[s | p] option cannot be specified. The MU# specifies the cascading mirror descriptor for CA-Journal.
–restore (BC/SnapShot only) (Optional) Copies differential data from the secondary volume to the primary volume. (The S-VOL must not be mounted on any host while this command is executing.) If the –restore option is not specified, the P-VOL is copied to the S-VOL. If the –restore option is used, the P-VOL must not be host-mounted while the command is executing. If the target volume is currently under maintenance, the copy cannot execute and is rejected to avoid trouble.
–swap[s|p] (CA only) The –swaps option is executed from the S-VOL side when no host on the P-VOL side is available to assist. A remote host must be connected to the S-VOL. It is typically executed in PSUS (SSWS) state (after a horctakeover) to facilitate fast failback without requiring a full copy. Unlike –swaps, –swapp requires the cooperation of hosts at both sides. It is the equivalent of –swaps, executed from the original P-VOL side.
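The failover/failback ordering implied by –swaps can be sketched as follows. This is illustrative only: the two functions are hypothetical stubs standing in for the real RM commands, so the control flow can be shown without a configured CA pair; the group name oradb is an assumption.

```shell
#!/bin/sh
# Hypothetical stubs standing in for the real RM commands, so the
# failover/failback order can be shown without a disk array attached.
horctakeover() { echo "takeover: $*"; }
pairresync()   { echo "resync: $*"; }

# 1. After a failure at the P-VOL site, the S-VOL side takes over;
#    the S-VOL typically ends up in PSUS (SSWS) state.
step1=$(horctakeover -g oradb)
# 2. Fast failback: -swaps, run from the (former) S-VOL side, copies
#    only the differential data and swaps the P-VOL/S-VOL
#    personalities, avoiding a full copy.
step2=$(pairresync -g oradb -swaps)

echo "$step1"
echo "$step2"
```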
–z Makes this command enter interactive mode.
–zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode.
Returned Values This command sets one of the following returned values in exit(), which allows you to check the execution results. The command returns 0 upon normal termination. A nonzero return indicates abnormal termination. For the error cause and details, see the execution logs.
Output Fields
Group The group name (dev_group) described in the configuration definition file.
M=W (Valid for PSUS state only) In the P-VOL case, this designates “suspended” with S-VOL R/W enabled. In the S-VOL case, this designates that the S-VOL can accept writes.
M=N (Valid for COPY/RCPY/PAIR/PSUE state) A listed volume means that reading is disabled.
Example This example shows a pairresync on group VG01. pairdisplay shows two volumes in the COPY state. The copy% value indicates how much of the P-VOL is in sync with the S-VOL.
pairsplit Split a pair Description The pairsplit command is used to change the status of a paired volume. This command puts the pair into either PSUS or SMPL state. For a status change from PAIR to PSUS or PSUS to SMPL: before these state changes are made, all changes made to the P-VOL, up to the point when the command was issued, are written to the S-VOL. If possible, the host system should flush any host-resident buffer cache before executing this command.
MPE/iX Before you execute this command, any unwritten data remaining in the host's buffers must be flushed for synchronization. On MPE/iX systems, this is done with VSCLOSE of the volume set. CA Operation The following figure shows the usage of the –FCA option. In the example, the command splits (to PSUS) the CA pair by specifying the name of the BC group to which it is cascaded.
(Figure: CA environment. The command pairsplit -g ora -FBC 1 operates across the CA group Ora (P-VOL on Seq#30052, S/P-VOL on Seq#30053) and the cascaded BC groups Oradb1 and Oradb2.)
Syntax pairsplit { –c size | –nomsg | –g group | –d pair_vol | –d[g] raw_device [ MU# ] | –d[g] seq# LDEV# [ MU# ] | –E | –FBC [MU#] | –FCA [MU#] | –l | –r[w] | –P | –R[S][B] | –S }
Arguments
–c size (BC only) Copies differential data retained in the primary volume into the secondary volume, then enables reading and writing from and to the secondary volume (after completion of
If the specified raw_device is listed in multiple device groups, this applies to the first one encountered. –d[g] seq# LDEV# [ MU# ] Searches the RM instance configuration file (local instance) for a volume that matches the specified sequence # and LDEV. If a volume is found, the command is executed on the paired logical volume (–d) or group (–dg). This option is effective without specification of the –g group option.
then this option splits a cascading BC volume on a remote host (far site). The target BC volume must be a P-VOL, and the –E option cannot be specified. –g group Specifies which group to split. The group names are defined in the HORCM_DEV section of the RM instance configuration file. The command executes for the entire group unless the –d pair_vol argument is specified. –h Displays Help/Usage and version information.
–r[w] The –r option allows read-only access to the secondary volume; –r is the default. The –rw option enables reading and writing from and to the secondary volume.
–S (Optional) Used to bring the primary and secondary volumes into SMPL mode, in which pairing is not maintained. Data consistency is maintained only if the devices are in a suspend status (PSUS). If the devices are in a pair status (PAIR), data on the secondary volume will not be consistent and is not usable.
Returned Values
pairsyncwait Synchronization waiting command Description The pairsyncwait command is used to confirm that a mandatory write (and all writes before it) has been stored in the DFW (write) cache area of the RCU. The command gets the latest P-VOL async-CA sequence # of the main control unit (MCU) side file and the sequence # of the most recently received write at the RCU DFW (with the correct CTGID, group or raw_device) and compares them at regular intervals.
The command is executed for the specified group unless the –d pair_vol option is specified. –d pair_vol Used to specify a logical (named) volume that is defined in the configuration definition file. When this option is specified, the command is executed for the specified paired logical volumes. –d[g] raw_device [ MU# ] (HP-UX, Linux, Solaris, Windows NT/2000/2003, AIX, and MPE/iX only) Searches the RM configuration file (local instance) for a volume that matches the specified raw device.
–m marker Used to specify the Q-marker, the async-CA sequence # of the main control unit (MCU) P-VOL. If RM gets the Q-marker from the –nowait option, then it can confirm the completion of asynchronous transfer to that point by using pairsyncwait with that Q-marker. If a Q-marker is not specified, RM uses the latest sequence # at the time pairsyncwait is executed. It is also possible to wait for completion from the S-VOL side.
If you do not specify –nowait and the display status is “TIMEOUT”, QM-Cnt shows the number of remaining Q-markers at timeout. If the status for the Q-marker is invalid (“BROKEN” or “CHANGED”), QM-Cnt is shown as “-”. To determine the remaining data in the CT group: Remaining data in CT group = Side File capacity * Side File percentage / 100. The side file percentage is the rate shown under the “%” column by the pairdisplay command.
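The remaining-data formula above can be computed directly. A minimal sketch, assuming a sample 256 MB side file; in practice the percentage comes from the “%” column of pairdisplay output:

```shell
#!/bin/sh
# Remaining data in CT group = side file capacity * side file percentage / 100.
# The capacity (in MB) and percentage below are assumed sample values; take
# the real percentage from the "%" column of the pairdisplay command.
sidefile_capacity_mb=256
sidefile_pct=25

remaining_mb=$(awk -v c="$sidefile_capacity_mb" -v p="$sidefile_pct" \
    'BEGIN { printf "%d", c * p / 100 }')
echo "Remaining data in CT group: ${remaining_mb} MB"
```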
Returned Values This command returns one of the following values in exit(), which allows you to check the execution results. When the –nowait option is specified: Normal termination: 0 (the status is NOWAIT). Abnormal termination: other than 0 to 127. (For the error cause and details, see the execution logs.)
Output Fields
Q-Marker The sequence number of the MCU P-VOL at the time the command is received.
Status The status after execution of the command.
Q-Num The number of processes in the queue waiting for synchronization within the CTGID of the unit.
QM-Cnt The number of remaining I/Os in the sidefile. CA-Async sends a token called a “dummy record set” at regular intervals; therefore, QM-Cnt always shows “2” or “3,” even if the host is doing no writing.
Examples
Error Codes The table below lists specific error codes for the pairsyncwait command.
pairvolchk Check volume attribute Description The pairvolchk command reports the attributes of a volume from the perspective of the local or remote host. This command can be applied to each paired logical volume or each group. This is the most important command used by high availability (HA) failover software to determine when a failover or failback is appropriate.
(Figure: CA environment. The command pairvolchk -g ora -c -s -FBC 1 operates across the CA group Ora (P-VOL on Seq#30052, S/P-VOL on Seq#30053) and the cascaded BC groups Oradb1 and Oradb2.)
Syntax pairvolchk { –h | –q | –z | –g group | –d pair_vol | –d[g] raw_device [ MU# ] | –FCA [MU#] | –FBC [MU#] | –d[g] seq# LDEV# [ MU# ] | –c | –s[s] | –nomsg }
Arguments
–c Checks the conformability of the paired volumes of the local and remote hosts and reports the volume attribute of the remote host.
–d[g] seq# LDEV# [ MU# ] This option searches the RM instance configuration file (local instance) for a volume that matches the specified sequence number (seq#) and LDEV. If a volume is found, the command is executed on the paired logical volume (–d) or group (–dg). This option is effective without specification of the –g group option. If the specified LDEV is listed in multiple device groups, this applies to the first group encountered.
–nomsg Suppresses messages displayed when this command is executed. It is used when executing the command from a user program. If used, this argument must be specified at the beginning of the command arguments.
–q Terminates interactive mode and exits this command.
–s[s] Used to acquire the fine-granularity volume status (for example, PVOL_PSUS) of a volume. If not specified, the generic volume status (for example, P-VOL) is reported. See the status table on page 194.
216: EX_EXTCTG
214: EX_ENQCTG
The table below shows the error messages associated with the above error codes.
Error Code Error Message
EX_ENORMT No remote host alive to accept commands, or the remote RAID Manager might be blocked (sleeping) while performing I/O.
When the –s[s] argument is specified:
Normal termination:
11: The status is SMPL
For CA/sync and BC volumes:
22: The status is PVOL_COPY or PVOL_RCPY
23: The status is PVOL_PAIR
24: The status is PVOL_PSUS
25: The status is PVOL_PSUE
26: The status is PVOL_PDUB (CA and LUSE volumes only)
29: The status is PVOL_INCSTG (inconsistent status in group; not returned)
32: The status is SVOL_COPY or SVOL_RCPY
33: The status is SVOL_PAIR
34: The status is SVOL_PSUS
35: The status is SVOL_PSUE
36: The status is SVOL_PDUB (CA and LUSE volumes only)
For CA/async and CA Journal volumes:
42: The status is PVOL_COPY
43: The status is PVOL_PAIR
44: The status is PVOL_PSUS
45: The status is PVOL_PSUE
46: The status is PVOL_PDUB (CA and LUSE volumes only)
47: The status is PVOL_PFUL
48: The status is PVOL_PFUS
52: The status is SVOL_COPY or SVOL_RCPY
53: The status is SVOL_PAIR
54: The status is SVOL_PSUS
55: The status is SVOL_PSUE
56: The status is SVOL_PDUB (CA and LUSE volumes only)
57: The status is SVOL_PFUL
58: The status is SVOL_PFUS
For SnapShot volumes:
29: The status is PVOL_INCSTG (inconsistent status in group; not returned)
32: The status is SVOL_COPY or SVOL_RCPY
33: The status is SVOL_PAIR
34: The status is SVOL_PSUS
35: The status is SVOL_PSUE
36: The status is SVOL_PDUB (CA and LUSE volumes only)
37: The status is SVOL_PFUL (PAIR closing; Full status of the SnapShot pool)
38: The status is SVOL_PFUS (PSUS closing; Full status of the SnapShot pool)
39: The status is SVOL_INCSTG
Abnormal termination: other than 0 to 127 (for the error cause and details, see the execution logs):
236: EX_ENQVOL
237: EX_CMDIOE
235: EX_EVOLCE (when the –c argument is specified)
242: EX_ENORMT (when the –c argument is specified)
216: EX_EXTCTG
214: EX_ENQCTG
When a volume group contains volumes in different states, one state takes precedence and is reported for the group, as shown in the following table.
Explanation of Terms
1 Status is TRUE.
0 Status is FALSE.
x Status is TRUE or FALSE (don’t care).
COPY* Status is either COPY or RCPY.
PFUL Since the PFUL state refers to the High Water Mark of the Side File in PAIR state, the PFUL state is displayed as PAIR by all commands except pairvolchk and the –fc option of the pairdisplay command.
Error Codes
“MINAP” shows the minimum number of active paths to the specified group on the P-VOL. If the array firmware does not support tracking the number of active paths, then “MINAP” is not displayed (as below). “LDEV = BLOCKED” indicates failure to link to an E-LUN by CA.
BC: # pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.[status = PAIR ]
BC with CT Group: # pairvolchk -g oradb
pairvolchk : Volstat is P-VOL.
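A script that drives failover logic can branch on the –s[s] return codes listed earlier. The sketch below is illustrative only: pairvolchk_stub is a hypothetical stand-in for a real "pairvolchk -g oradb -ss" call, and the mapping covers only the CA/sync and BC codes shown above.

```shell
#!/bin/sh
# Maps a pairvolchk -ss return code (CA/sync and BC volumes, per the list
# above) to a state name. pairvolchk_stub is a hypothetical stand-in for
# "pairvolchk -g oradb -ss"; substitute the real command in practice.
pairvolchk_stub() { return 23; }   # pretend the volume is PVOL_PAIR

pairvolchk_stub
rc=$?
case $rc in
    11) state="SMPL" ;;
    22) state="PVOL_COPY or PVOL_RCPY" ;;
    23) state="PVOL_PAIR" ;;
    24) state="PVOL_PSUS" ;;
    25) state="PVOL_PSUE" ;;
    33) state="SVOL_PAIR" ;;
    34) state="SVOL_PSUS" ;;
    *)  state="other state or error; check the execution logs" ;;
esac
echo "$state"
```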
raidar Report LDEV activity Description The raidar command reports the I/O activity of a port, target, or LUN over a specified time interval. Reporting can be terminated early with CNTL-C. This command can be used regardless of the RM instance configuration definitions. The I/O activity of an S-VOL that is part of an active CA pair (a pair in the COPY or PAIR state) includes the internal I/O used to maintain the pair as well as user I/O. For BC, only host I/Os are reported on the P-VOL.
For the XP12000, the expanded ports CL3-A up to CL3-R, or CLG-A up to CLG-R can be selected. Port specification is not case sensitive (CL1-A= cl1-a= CL1-a= cl1-A). lun Specifies a LUN of a specified SCSI/Fibre Channel target. targ Specifies a SCSI/Fibre Channel target ID of a specified port. mun (BC/SnapShot only) Specifies the duplicated mirroring descriptor (MU#) for the identical LU under BC/SnapShot in a range of 0 to 2/63.
Output Fields –zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode. IOPS Displays the IO (reads/writes) per second. HIT(%) Displays the cache hit rate for reads. W(%) Displays ratio for writes. IOCNT Displays the number of reads/writes.
raidqry Confirm disk array connection to host Description The raidqry command displays the configuration of the connected host and disk array.
Syntax raidqry –h
raidqry { –l | –q | –r group | –f | –z | –zx }
Arguments
–f Displays the floatable IP address for the host name (ip_address) described in a configuration definition file.
–h Displays Help/Usage and version information.
–l Displays the configuration of the local host connected to the disk array.
Output Fields
Floatable Host When using the –f option, this item displays the first 30 characters of the host name (ip_address) described in the configuration definition file. The –f option interprets the host name as utilizing a floatable IP for the host.
HORCM_ver When the –l option is specified, this shows the version of the CA of the local host. When the –r option is specified, this item shows the version of the CA on the remote host for the specified group.
Display Example
raidscan Display port status Description The raidscan command displays, for a given port, the target ID, the LDEV mapped to each LUN, and the status of the LDEV, regardless of the configuration definition file. Syntax
–f[d] Displays the Device_File that was registered to the RM Group in the output, based on the LDEV (as defined in the local instance configuration definition file). If this option is specified, the –f[f] and –f[g] options are invalid.
–f[e] Displays the serial number and LDEV number of the external LUNs mapped to the LDEV. If no external LUN is mapped to the LDEV on the specified port, then this option does nothing. If this option is specified, the -f[f][g][d] options are not allowed.
This option can be used with the –fx option to display the LDEV numbers in hexadecimal format. –find [op] [MU#] Used to execute the specified [op] using a raw device file provided by STDIN. See next entries. –find conf [MU#] [–g name] Used to display the port, target ID, and LUN in the horcm.conf file by using a special raw device file provided via STDIN. If the target ID and LUN are unknown for the target device file, then you will have to start RM without a description for HORCM_DEV and HORCM_INST.
If the logical drive corresponding to a –g name is open by an application, then only the system buffer of the logical drive is flushed. This option allows the system buffer to be flushed before a pairsplit without unmounting the P-VOL (open state). –find verify [MU#] Used to verify the relationship between a Group in the configuration definition file and a Device_File registered to the LDEV map tables (based on the raw device file name provided via STDIN).
If the –pi strings option is also specified, then this option does not get its “strings” via STDIN. The strings specified in the –pi option will, instead, be used as input. –l lun Specifies a LUN for a specified SCSI/Fibre Channel target. Specifying a LUN without designating the target ID is not allowed. If this option is not specified, the command applies to all LUNs. If this option is specified, the –t option must also be used.
array and scans the port of the disk array (which corresponds with the unit ID) and searches for the unit ID from Seq#. If this option is specified, then the –s Seq# option is invalid. –pi strings Used to explicitly specify a character string rather than receiving it from STDIN. If this option is specified, then the –find option does not get its strings via STDIN; the strings specified in the –pi option are used as input instead. The specified character string must be limited to 255 characters.
–z Makes this command enter interactive mode.
–zx (Not for use with MPE/iX or OpenVMS) Prevents using RM in interactive mode.
–m MU# Displays the cascading mirror descriptor. If you specify the –CLI option, raidscan will not display the cascading mirrors (MU1-4). –m all displays all cascading mirror descriptors.
Output Fields
Port# The port name on the disk array.
ALPA/C Arbitrated loop physical address of the port on the disk array.
Examples PairVol The paired volume name (dev_name) within the group defined in the configuration definition file. M The MU# defined in the configuration definition file. For CA, the MU# is shown as –. For BC, the MU# is shown as 0, 1, or 2. Device_File The Device_File that is registered to the LDEV map tables within RM. UID The unit ID for multiple array configurations. If UID is displayed as –, a command device (HORCM-CMD) has not been found. S/F Shows whether a port is SCSI or Fibre Channel.
A raidscan on a Fibre Channel port displays ALPA data for the port instead of a target ID number.
# raidscan –p CL2-P
PORT# /ALPA/C,TID#,LU#.Num(LDEV#..)..P/S, Status,LDEV#,P-Seq#,P-LDEV#
CL2-P / ef/0, 0, 0-1.0(58).........P-VOL PSUS 58, 35641 61
CL2-P / ef/0, 0, 1-1.0(59).........P-VOL PSUS 59, 35641 62
CL2-P / ef/0, 0, 2...0(61).........S-VOL SSUS 61, ----- 58
CL2-P / ef/0, 0, 3...0(62).........S-VOL SSUS 62, ----- 59
The following example uses the –find option.
# ERROR [INVALID MUN (2 < 1)] /dev/rdsk/c24t0d3 SER = 61456 LDEV = 195 [ OPEN-3 ]
• It mixes different RAID types:
# ERROR [MIXING RAID TYPE] /dev/rdsk/c24t0d3 SER = 61456 LDEV = 195 [ OPEN-3 ]
The following example flushes the system buffer associated with the ORB group through $Volume, using either echo $Volume | raidscan -find sync -g ORB or raidscan -pi $Volume -find sync -g ORB.
# ioscan -fun | grep rdsk | raidscan -find verify
DEVICE_FILE       Group   PairVol  PORT   TARG  LUN  M  SERIAL  LDEV
/dev/rdsk/c0t3d0  oradb   oradev1  CL1-D  3     0    0  35013   17
/dev/rdsk/c0t3d1  oradb   oradev2  CL1-D  3     1    0  35013   18
/dev/rdsk/c0t3d2  -       -        -      -     -    0  35013   19
The following example uses the –find verify and –fd options.
# raidscan -p cl1-r PORT#/ALPA/C,TID#,LU#..Num(LDEV#...) P/S, Status,Fence, LDEV#, P-Seq# P-LDEV# CL1-R/ ce/15, 15, 7..5(100,101...) P-VOL PAIR NEVER 100, 5678 200 CL1-R/ ce/15, 15, 6..5(200,201...) SMPL ----------------- # raidscan -p cl1-r -f PORT#/ALPA/C,TID#,LU#..Num(LDEV#...) P/S, Status,Fence, LDEV#, Vol.Type CL1-R/ ce/15, 15, 7..5(100,101...) P-VOL PAIR NEVER 100, OPEN-3 CL1-R/ ce/15, 15, 6..5(200,201...
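raidscan output like the examples above can be post-processed with standard text tools. A minimal sketch, using the sample CL2-P lines from earlier in this section fed from a here-document (since no disk array is attached here), to count P-VOL entries in PSUS state:

```shell
#!/bin/sh
# Counts P-VOL entries in PSUS state from raidscan output. The input is
# the sample output shown earlier, supplied via a here-document instead
# of a live "raidscan -p CL2-P", so the filter can run without an array.
count=$(grep -c 'P-VOL PSUS' <<'EOF'
CL2-P / ef/0, 0, 0-1.0(58).........P-VOL PSUS 58, 35641 61
CL2-P / ef/0, 0, 1-1.0(59).........P-VOL PSUS 59, 35641 62
CL2-P / ef/0, 0, 2...0(61).........S-VOL SSUS 61, ----- 58
CL2-P / ef/0, 0, 3...0(62).........S-VOL SSUS 62, ----- 59
EOF
)
echo "P-VOLs in PSUS on CL2-P: $count"
```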
Command Options for Windows NT/2000/2003 RM provides the following commands specific to Windows NT/2000/2003. These commands are built into the RM commands and are executed by using the –x option with any general RM command. For instance, enter: raidscan –x Any general command (not just raidscan) can be used; the –x option overrides the normal operation of the RM command.
drivescan Display disk drive and connection information Description Syntax Arguments Output Fields Windows NT/2000/2003 only The drivescan command displays the relationship between hard disk numbers on Windows NT/2000/2003 and the actual physical drives. RM_command –x drivescan stringx,y RM_command Any general RM command. string Any alphabetic character string; provided for readability. x ,y Specifies a range of disk drive numbers. harddiskn The hard disk number.
Example This example shows drivescan executed from the raidscan command, and displays the connection of the actual physical drive for disk drive number 0 to 10. raidscan –x drivescan harddisk0,10 Harddisk 0..Port[ 1] PhId[ 0] TId[ 0] Lun[ 0] [HITACHI] [DK328H-43WS] Harddisk 1..Port[ 2] PhId[ 4] TId[ 29] Lun[ 0] [HITACHI] [OPEN-3] Port[CL1-J] Ser#[ 30053] LDEV#[ 9(0x009)] HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL] Harddisk 2..
env Display environment variable Description Syntax Argument Example Windows NT/2000/2003 only The env command displays an environment variable within a RAID Manager command. RM_command –x env RM_command Any general RM command. This example displays the current value of the HORCC_MRCF environment variable.
findcmddev Search for a command device Description Windows NT/2000/2003 only The findcmddev command searches to see if a command device exists within the range of the specified disk drive numbers. When the command device exists, the command displays the command device in the format described in the RM configuration definition file. This command searches for a command device as a physical drive, a logical drive, and a Volume{GUID} for Windows 2000/2003.
Example This example executes findcmddev, searching device numbers 0 to 20.
raidscan -x findcmddev hdisk0, 20
cmddev of Ser# 62496 = \\.\PhysicalDrive0
cmddev of Ser# 62496 = \\.\E:
cmddev of Ser# 62496 = \\.
mount Mount and display a device Description Windows NT/2000/2003 only The mount command allocates the specified logical drive letter to the specified partition on the disk drive (hard disk). If no arguments are specified, this option displays a list of mounted devices.
Syntax
RM_command –x mount
Windows NT: RM_command –x mount D: hdisk# [partition# ] . . .
Windows 2000/2003: RM_command –x mount D: volume#
RM_command –x mount D: [\directory] volume#
Arguments RM_command Any general RM command.
RAID Manager supports the mount command specifying the device object name (such as “\Device\Harddiskvolume X”). However, Windows 2003 will change the device number for the device object name when it recovers from a failure of the PhysicalDrive. So, the mount command specifying the device object name may fail due to this change. To overcome this, specify a Volume{GUID} as well as the device object name. If a Volume{GUID} is specified, it will be converted to a device object name during execution.
Port PathID Targ Lun The port number, path ID, target ID, and LUN on the device adapter mounted to the logical drive. For information on Fibre Channel connection on the port, see Appendix, “Fibre Channel addressing”.
Examples Windows NT This Windows NT example executes mount from the pairsplit command option, mounting the F:\ drive to partition 1 on disk drive 2, and mounting the G:\ drive to partition 1 on disk drive 1. Then a list of mounted devices is displayed.
portscan Display devices on designated ports Description Windows NT/2000/2003 only The portscan command displays the physical devices that are connected to the designated ports.
Syntax RM_command –x portscan stringx,y (for example, –x portscan port0,20)
Arguments
RM_command Any general RM command.
string Any alphabetic character string; provided for readability.
x,y Specifies a range of port numbers.
Output Fields
Port The port number on the Windows NT/2000/2003 device adapter.
Example This example executes portscan from the raidscan command option, and displays the connection of the physical device from port number 0 to 20. raidscan –x portscan port0,20 PORT[ 0] IID [ 7] SCSI Devices PhId[ 0] TId[ 3] Lun[ 0] [MATSHIT] [CD-ROM CR-508 ] ...Claimed PhId[ 0] TId[ 4] Lun[ 0] [HP ] [C1537A ] ...Claimed PORT[ 1] IID [ 7] SCSI Devices PhId[ 0] TId[ 0] Lun[ 0] [HITACHI] [DK328H-43WS ] ...Claimed PORT[ 2] IID [ 7] SCSI Devices PhId[ 0] TId[ 5] Lun[ 0] [HITACHI ] [OPEN-3 ] ...
setenv Set environment variable Description Syntax Arguments Restrictions Windows NT/2000/2003 only The setenv command sets an environment variable within a RAID Manager command. RM_command –x setenv variable value RM_command Any general RM command. variable Specifies the environment variable to be set or deleted. value Specifies the value or character string of the environment variable to be set. Set environment variable prior to starting RM, unless you are using interactive mode.
sleep Suspend execution Description Windows NT/2000/2003 only The sleep command suspends execution for a specified period of time. Syntax RM_command –x sleep time Arguments RM_command Any general RM command. time Specifies the sleep time in seconds.
sync Write data to drives Description Windows NT/2000/2003 only The sync command writes unwritten data remaining on the Windows NT/2000/2003 system to the logical and physical drives. If the logical drives designated as objects of the sync command are not open to any applications, then sync flushes the system buffer to the drive and performs a dismount.
If the specified logical drive has directory mount volumes, then SYNC is executed for all of the volumes on the logical drive. [\directory|\directory pattern] (Windows 2000/2003 only) Specifies the directory mount point on the logical drive. If directory is specified, then SYNC is executed for the specified directory mounted volume only. If a directory pattern is specified then SYNC is executed for the directory mounted volumes identified by directory pattern.
The following example executes SYNC for specified directory mounted volume. pairsplit -x sync D:\hd1 [SYNC] D:\hd1 HarddiskVolume8 The following example executes SYNC for the directory mounted volumes identified by the directory pattern “D:\h”. pairsplit -x sync D:\h [SYNC] D:\hd1 HarddiskVolume8 [SYNC] D:\hd2 HarddiskVolume9 The following example executes SYNC for all of the volumes on the logical drives with directory mount volumes.
The following example flushes the system buffer before the pairsplit without unmounting the P-VOL (open state), and provides a warning.
pairsplit -x sync C:
WARNING: Only flushed to [\\.\C:] drive due to be opening.
umount Unmount a device Description Syntax Windows NT/2000/2003 only The umount command unmounts a logical drive and deletes the drive letter. Before deleting the drive letter, the command automatically executes the sync command for the specified logical drive (flushes unwritten buffer data to the disk). RM_command –x umount D: Windows 2000/2003 RM_command –x umount D: [\directory] Arguments Restriction Output Fields RM_command Any general RM command.
For information on Fibre Channel connection on the port, see Appendix, “Fibre Channel addressing”.
Examples Windows 2000/2003 This Windows 2000/2003 example shows the specification of a directory mount point on the logical drive.
pairsplit -x umount D:\hd1
D:\hd1 <-> HarddiskVolume8
pairsplit -x umount D:\hd2
D:\hd2 <-> HarddiskVolume9
This example executes umount from the pairsplit command option, after unmounting the F:\ drive and G:\ drive.
usetenv Delete environment variable Description Syntax Arguments Restrictions Example Windows NT/2000/2003 only The usetenv command deletes an environment variable within a RAID Manager command. RM_command –x usetenv variable RM_command Any general RM command. variable Specifies the environment variable to be deleted. Changing an environment variable after an execution error of a RAID Manager command is invalid.
HP StorageWorks Disk Array XP RAID Manager: User’s Guide
Data Integrity Check Commands To set and verify the validation check parameters for Data Integrity Check, RM provides the following commands.
raidvchkset Integrity checking command Description Data Integrity Check only The raidvchkset command sets the parameters for integrity checking on the specified volumes. It can also be used to turn off all integrity checking, by omitting the type, once any retention time that was set in [rtime] for the original (or later extended) integrity check has elapsed. The unit for protection checking is a group in the RAID Manager configuration file.
–d pair_vol Specifies a paired logical volume name from the configuration definition file. The command is executed only for the specified paired logical volume. –d[g] raw_device [MU#] Searches the RM configuration file (local instance) for a volume that matches the specified raw device. If a volume is found, the command is executed on the paired volume (–d) or group (–dg). This option is effective without specification of the –g group option.
Valid values for type: redo8 Sets the parameter for validation checking as Oracle redo log files (including archive logs) prior to Oracle9i. This option sets bsize to 1 (512 bytes) for Solaris or 2 (1024 bytes) for HP-UX. data8 Sets the parameter for validation checking as Oracle data files prior to Oracle9i. redo9 Sets the parameter for validation checking as Oracle redo log files for Oracle9iR2 or later. This option sets bsize to 1 (512 bytes) for Solaris or 2 (1024 bytes) for HP-UX.
If this option is not specified, then a region for a target volume is set as all blocks (SLBA=0; ELBA=0). -vg [type][rtime] Specifies the following guard types to the target volumes for HP StorageWorks LUN Security XP Extension. If [type] is not specified, then this option disables all guarding. If no guard type has been specified, then the volume will be unguarded (read and write operations from the host as well as use as an S-VOL will be allowed).
wtd Disables the target volumes from writing. The volumes cannot be used as an S-VOL or written by a host. svd Disables the target volumes so they cannot become an S-VOL. Read and Write operations from hosts are still allowed. [rtime] Specifies the data retention time, in days. If [rtime] is not specified, then the data retention time never expires. Disk array microcode versions 21-06-xx and 21-07-xx ignore this option and always set the retention time to never expire.
This example disables all writing to volumes in the oralog group:
raidvchkset –g oralog –vg wtd
This example disables writing and sets a retention time of 365 days:
raidvchkset –g oralog –vg wtd 365
This example disables guarding for the oralog group:
raidvchkset –g oralog –vg
Flags The command sets the following four flags for each of the guarding types:
Type  INQ  RCAP  READ  WRITE
inv   1    1     1     1
sz0   0    1     1     1
rwd   0    0     1     1
wtd   0    0     0     1
raidvchkdsp Integrity checking confirmation command Description Data Integrity Check only The raidvchkdsp command displays the parameters for protection checking of the specified volumes. The unit of checking for the protection is based on the RM configuration file group. A nonpermitted volume is shown without LDEV# information (LDEV# information is - ).
–d[g] seq# LDEV# [MU#] Searches the RM instance configuration file (local instance) for a volume that matches the specified sequence # and LDEV#. If a volume is found, the command is executed on the paired logical volume (–d) or group (–dg). This option is effective without specification of the –g group option. If the volume is contained in two groups, the command is executed on the first volume encountered. If MU# is not specified, it defaults to 0.
-fe Displays the serial numbers and LDEV numbers of the external LUNs mapped to the LDEV for the target volume. This option appends the information as extra columns at the end of the output and ignores the 80-column format. Example:
# raidvchkdsp -g horc0 -v gflag -fe
Group ... TID LU Seq#  LDEV# GI-C-R-W-S PI-C-R-W-S R-Time EM E-Seq# E-LDEV#
horc0 ... 0   20 63528 65    E E E E E  E E E E E  0      -  -      -
horc0 ... 0   20 63528 66    E E E E E  E E E E E  0      -  -      -
EM displays the external connection mode.
BR-W-E-E displays the flags for checking the data block size:
R = Read → E = Enable, D = Disable
W = Write → E = Enable, D = Disable
E = Endian format → L = Little, B = Big
E = Not rejected on validation error → W = Write, R = Read
MR-W-B displays the flags for checking block header information:
R = Read → E = Enable, D = Disable
W = Write → E = Enable, D = Disable
B = Block #0 → E = Enable, D = Disable
BR-W-B displays the flags for checking data block number information.
BNM displays whether this validation is disabled or enabled. If BNM is 0, then this validation is disabled.
–v gflag Displays the flags for guarding the target volumes. Example:
raidvchkdsp -g vg01 -fd -v gflag
Group PairVol Device_File Seq# LDEV# GI-C-R-W-S PI-C-R-W-S R-Time
vg01  oradb1  c4t0d2      2332 2     E E D D E  E E D D E  365
vg01  oradb2  c4t0d3      2332 3     E E D D E  E E D D E  -
GI-C-R-W-S displays the protection flags for the target volume. The flags are “E” for enabled and “D” for disabled.
Example:
raidvchkdsp -g vg01 -v pool
Group PairVol Port# TID LU Seq#  LDEV# Bsize Available Capacity
Vg01  oradb1  CL2-D 2   7  62500 167   2048  100000    1000000000
Vg01  oradb2  CL2-D 2   10 62500 170   2048  100000    1000000000
Bsize: Displays the data block size of the pool, in units of blocks (512 bytes).
Available(Bsize): Displays the available capacity for the volume data in the SnapShot pool, in units of Bsize.
Capacity(Bsize): Displays the total capacity of the SnapShot pool, in units of Bsize.
Examples
# raidvchkdsp -g vg01 -fd -v cflag
Group PairVol Device_File Seq# LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S
vg01  oradb1  Unknown     2332 -     - - - -  - - -  - - -  - - - -
vg01  oradb2  c4t0d3      2332 3     D E B R  D D D  D E E  D E D D
# raidvchkdsp -g vg01 -fd -v offset
Group PairVol Device_File Seq# LDEV# Bsize STLBA ENLBA  BNM
vg01  oradb1  c4t0d2      2332 2     1024  1     102400 9
vg01  oradb2  c4t0d3      2332 3     1024  1     102400 9
# raidvchkdsp -g vg01 -fd -v cflag
Group PairVol Device_File Seq# L
vg01  oradb1  c4t0d2
vg01  oradb2  c4t0d3
raidvchkscan Integrity checking confirmation command
Description Data Integrity Check only
The raidvchkscan command displays the parameters for protection checking of the specified volumes. The unit of checking for protection is based on the raidscan command.
Syntax
raidvchkscan { –h | –q | –z[x] | –p port [hgrp] | –pd[g] raw_device | –s seq# | –t target | –l LUN | –fx | –v operation }
Arguments
–h Displays Help/Usage and version information.
–q Terminates interactive mode and exits this command.
–pd[g] raw_device This option must always be specified if the –find or –p port option is not specified. If this option is specified, the –s seq# option is invalid. –pdg displays the LUNs in host view by locating a host group (XP128 and XP1024 arrays). –s seq# Specifies the serial number of the disk array in multiple disk array connections when you cannot specify the unit ID that is contained in the –p port option or the –v jnl option.
–v cflag Displays all flags for checking regarding data block validation for target volumes.
–v offset Displays the range setting for data block size of Oracle I/O and a region on a target volume for validation checking.
Available(Bsize): Displays the available capacity for the volume data in the SnapShot pool, in units of Bsize. Capacity(Bsize): Displays the total capacity of the SnapShot pool, in units of Bsize.
Note: This command is rejected with EX_ERPERM by the connectivity check between RAID Manager and the disk array.
Capacity(MB): Displays the total capacity in the SnapShot pool. Seq#: Displays the serial number of the RAID. Num: Displays the number of LDEVs configured for the SnapShot pool. LDEV#: Displays the number of the first LDEV configured for the SnapShot pool. –v errcnt Displays the statistical information about errors on the target volumes. Statistical information is cleared when the individual flag for integrity checking is disabled.
# raidvchkscan -p CL1-A -v gflag
PORT#       /ALPA/C TID# LU# Seq# Num LDEV# GI-C-R-W-S PI-C-R-W-S R-Time
CL1-A / ef/ 0       0    0   2332 1   0     E E D D E  E E D D E  365
CL1-A / ef/ 0       0    1   2332 1   1     E E D D E  E E D D E  -
CL1-A / ef/ 0       0    2   2332 1   2     E E D D E  E E E E E  0
GI-C-R-W-S displays the protection flags for the target volume. The flags are “E” for enabled and “D” for disabled.
I. Inquiry command
C. Read Capacity command
R. Read command
W. Write command
S.
Example
# raidvchkscan -v jnl 0
JID MU CTG JNLS AP U(%) Q-Marker  Q-CNT D-SZ(BLK) Seq#  Num LDEV#
001 0  1   PJNN 4  21   43216fde  30    512345    62500 2   265
002 1  2   PJNF 4  95   3459fd43  52000 512345    62500 3   270
003 0  3   PJSN 4  0    1234f432  78    512345    62500 1   275
004 0  4   PJSF 4  45   345678ef  66    512345    62500 1   276
005 0  5   PJSE 0  0    -         -     512345    62500 1   277
006 -  -   SMPL -  -    -         -     512345    62500 1   278
007 0  6   SMPL 4  5    -         -     512345    62500 1   278
JID displays the journal group ID. MU displays the mirror descriptions on CA-Journal.
AP displays whether all data was passed completely. If AP is 1, all data was passed; if not, some data was not passed from the S-JNL (S-VOL). U(%) displays the usage rate of the journal data. Q-Marker displays the sequence number of the journal group ID, called the Q-marker. In the case of pair status PJNL, Q-Marker shows the latest sequence number on the PJNL volume. In the case of pair status SJNL, Q-Marker shows the latest sequence number in the cache (DFW). Q-CNT displays the number of remaining Q-Markers in each journal volume.
The table below shows the meanings of JNLS status when combined with other information.
–v jnlt Displays three timer values for the journal volume.
DOW = “Data Overflow Watch” timer (in seconds) for the journal group.
PBW = “Path Blockade Watch” timer (in seconds) for the journal group.
• Check whether data validation is disabled for LVM configuration changes.
• Check whether data validation is not used, based on the file system.
• Check whether the redo log and data files are separated among the volumes.
5 Troubleshooting RAID Manager This chapter lists RM errors and describes the problem, typical cause, and solution for each.
Error reporting If you have a problem with RM, first make sure that the problem is not caused by the host or the connection to the disk array. The tables in this chapter provide detailed troubleshooting information: “Operational notes” on page 265 “Error codes” on page 268 “Command return values” on page 270 “Command errors” on page 273 If a failure occurs in CA or BC volumes, find the failure in the paired volumes, recover the volumes, and continue operation in the original system.
Operational notes
Error: Coexistence of Logical Volume Manager (LVM) mirror and CA
Solution: When the LVM mirror and CA volumes are used together, the LVM mirror handles write errors by switching LVM P-VOL volumes. Thus, the fence level of mirrored P-VOLs used by the LVM must be set to data. One instance of LVM must not be allowed to see both the P-VOL and S-VOL of the same BC or CA pair; otherwise an LVM error occurs, because two volumes would contain the same LVM volume group ID.
Error: horctakeover (swap-takeover)
Solution: When executing horctakeover on a standby server manually, I/O activity on the servers (for the pertinent CA volumes) must be stopped.
Error: Host machines that can own the opposite sides of a CA pair
Solution: Host machines must be running the same operating system and the same architecture.
Error: New RM installations
Solution: After a new host system has been constructed, an RM failure to start can occur due to an improper environmental setting or an inaccurate configuration definition file.
Error: SCSI alternate path restrictions
Solution: If the primary and secondary volumes are on the same server, alternate pathing (for example, pvlink) cannot be used from the primary volume to the secondary volume. Use of SCSI alternate pathing to a volume pair is limited to one side of the pair. The hidden S-VOL option can avoid undesirable alternate pathing.
Error codes
HORCM_001
Problem: The RM log file cannot be opened.
Cause: The file cannot be created in the RM directory.
Solution: Create space on the root disk.
HORCM_002
Problem: The RM trace file cannot be opened.
Cause: The file cannot be created in the RM directory.
Solution: Create space on the root disk.
HORCM_003
Problem: The RM daemon could not produce enough processes to complete the request.
Cause: The RM daemon attempted to create more processes than the maximum allowable number.
HORCM_009
Problem: CA/RM connection to RM failed.
Cause: System devices are improperly connected, or an error exists in the RM configuration file $HORCM_CONF.
Solution: See the RM startup log to identify the cause of the error.
HORCM_101
Problem: CA/RM and RM communication failed.
Cause: A system I/O error occurred or an error exists in the RM configuration file $HORCM_CONF.
Solution: See the RM startup log to identify the cause of the error.
HORCM_102
Problem: The volume is suspended.
Cause: The pairing status was suspended.
Command return values
For error descriptions, see “Error codes” on page 268.
Return Value  Command Error  Error Message
211  EX_ERPERM  RAID permission denied.
212  EX_ENQSIZ  Unmatched pairing volume size.
213  EX_ENPERM  LDEV permission denied.
214  EX_ENQCTG  Unmatched CTGID.
215  EX_ENXCTG  No such CT group (Open Systems volume).
216  EX_ENTCTG  Extended CT group across disk arrays.
217  EX_ENOCTG  Not enough CT groups in the disk array.
231  EX_ESTMON  RM monitoring has stopped.
232  EX_EWSLTO  Local host timeout error.
233  EX_EWSTOT  Timeout error.
234  EX_EWSUSE  Pairsplit –E.
235  EX_EVOLCE  Pair volume combination error.
236  EX_ENQVOL  Group volume matching error occurred.
237  EX_CMDIOE  Command I/O error.
238  EX_UNWCOD  Unknown function code.
239  EX_ENOGRP  Specified group is not defined.
240  EX_INVCMD  Invalid disk array command.
241  EX_INVMOD  Invalid disk array command mode.
255  EX_COMERR  Cannot communicate with RM.
256  EX_ENOSUP  SVOL denied due to disabling.
257  EX_EPRORT  Mode changes denied due to retention time.
Command errors
EX_ATTDBG
Problem: This command failed to communicate with RM, or a log directory file could not be created.
Action: Verify that RM is functioning properly.
EX_ATTHOR
Problem: Connection could not be made with RM.
Action: Verify that RM has started and that the correct HORCMINST value has been defined.
EX_CMDIOE
Problem: The request to the command device either failed or was rejected.
EX_ENLDEV
Problem: A device defined in the configuration file does not have an assigned LUN, port, or target ID.
Action: Verify that the configuration file is correct and that all devices are defined correctly.
EX_ENOCTG
Problem: Not enough CT groups. Could not register because 15 CTs (XP256), 63 CTs (XP512), 127 CTs (XP1024), or 255 CTs (XP12000) are already in use.
EX_ENPERM
Problem: A device mentioned in the configuration file does not have permission for a pair operation.
Action: Use the pairdisplay or raidscan –find verify command to confirm that a pair operation is permitted for the device.
EX_ENQCTG
Problem: The CT group in a group does not match the CTGID number.
Action: Confirm the CTGID by using the pairvolchk command.
EX_ESTMON
Problem: RM monitoring is prohibited.
Action: Verify the poll value defined in the configuration file.
EX_EVOLCE
Problem: The chosen primary and secondary volumes cannot be paired.
Action: Confirm the status of each volume using the pairdisplay command.
EX_EWSLTO
Problem: The command timed out because the remote host did not respond.
Action: Verify that the remote server is functioning properly.
EX_EWSTOT
Problem: The command has timed out.
Action: Change the timeout value and re-issue the command.
EX_INVRCD
Problem: Incorrect return code.
Action: Call the HP support center.
EX_INVSTP
Problem: The target volume is not accessible because of an invalid volume status.
Action: Verify the volume status using the pairdisplay command.
EX_INVVOL
Problem: The target volume is not accessible because of an invalid volume status.
Action: Verify the volume status using the pairdisplay command.
EX_OPTINV
Problem: Disk array error.
Action: Call the HP support center.
A Configuration file examples This appendix presents examples of RM configuration files.
Configuration definition for cascading volumes
RAID Manager is capable of keeping track of up to four MU pair associations per LDEV (one for CA, three for BC). The following figure shows this configuration.
(Figure: one LDEV with a CA pair in group Oradb and BC pairs at MU#0, MU#1, and MU#2 in groups Oradb1, Oradb2, and Oradb3.)
Correspondence between a configuration file and mirror descriptors
The following table shows how MU usage can indicate that a pair is CA, BC, or either. Leaving MU# blank means “0, and usable for either a CA or BC pair.”
No MU designations in configuration file (oradev1 serves as the mirror descriptor for both CA MU#0 and BC MU#0):
HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-D   2          1
MU designations for BC MU#0 and MU#1:
HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-D   2          1
Oradb1       oradev11    CL1-D   2          1     0
Oradb2       oradev21    CL1-D   2          1     1
MU designations for BC MU#0, MU#1, and MU#2:
HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb        oradev1     CL1-D   2          1
Oradb1       oradev11    CL1-D   2          1     0
Oradb2       oradev21    CL1-D   2          1     1
Oradb3       oradev31    CL1-D   2          1     2
Instance 0, in this case, describes the root (and all leaf) volumes (as if the normal diagram had been folded over from right to left). Instance 1 describes the intermediate S-VOL/P-VOLs.
The instance 1 configuration file in the preceding figure specifies that: • Three BC pairs are recognized. • The BC pairs are intermediate S-VOL/P-VOLs, because the TID/LUN combinations are all the same. Connecting CA and BC You can use three configuration files to describe a CA/BC cascaded configuration, as shown in the following figure.
(Figure: CA/BC cascaded connection. HOST1 runs HORCMINST for CA, owning the CA P-VOL (Oradb, T3L0, LDEV 266). HOST2 runs HORCMINST for CA/BC, owning the intermediate S/P-VOL (T3L2, LDEV 268), and HORCMINST0 for BC, owning the BC S-VOLs Oradb1 (MU#0, T3L4, LDEV 270) and Oradb2 (MU#1, T3L6, LDEV 272). HORCC_MRCF=1 is set in the BC environments. Configuration fragments shown in the figure:)
HORCM_DEV
#group   dev_name   port#   TID   LU   MU
Oradb    oradev1    CL1-D   3     0
HORCM_DEV
#group   dev_name
Oradb    oradev1
Oradb1   oradev11
Oradb2   oradev21
HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm
Oradb        HST2         horcm0
HORCM_INST
CA configuration (remote CA, two hosts)
(Figure: HOSTA (IP address HST1) and HOSTB (IP address HST2) are connected by a LAN. Each host runs RM with its own configuration file and connects to its disk array through SCSI ports C0 and C1, which are linked by CA.)
Configuration file for HOSTA (/etc/horcm.conf) on page 285
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000
HORCM_CMD
#dev_name
/dev/xxx     (See “Note” on page 286)
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2
HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm
Configuration file for HOSTB (/etc/horcm.
The following shows an example of the (raw) control device file format that must be used. HOSTx = HOSTA, HOSTB, etc... • HP-UX HORCM_CMD for HOSTx ... /dev/rdsk/c0t0d1 • Solaris HORCM_CMD for HOSTx ... /dev/rdsk/c0t0d1s2 • AIX HORCM_CMD for HOSTx ... /dev/rhdiskNN Where NN is the device number assigned automatically by AIX. • Digital UNIX HORCM_CMD for HOSTx ... /dev/rrzbNNc Where NN is device number (BUS number × 8 + target ID) defined by Digital UNIX. • DYNIX/ptx HORCM_CMD for HOSTx ...
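As a quick reference aid, the per-platform path shapes listed above can be captured in a small helper. This is only a sketch: the concrete device names are illustrative placeholders, since the real command device is site-specific and must be discovered on each host.

```shell
# Map a platform name to the shape of its HORCM_CMD raw-device path.
# The concrete device names (c0t0d1, rhdisk10, and so on) are placeholders;
# substitute the actual command device found on your host.
cmd_dev_example() {
  case "$1" in
    HP-UX)        echo "/dev/rdsk/c0t0d1" ;;
    Solaris)      echo "/dev/rdsk/c0t0d1s2" ;;   # whole-disk slice s2
    AIX)          echo "/dev/rhdisk10" ;;        # NN assigned by AIX
    DigitalUNIX)  echo "/dev/rrzb17c" ;;         # NN = bus x 8 + target ID
    *)            echo "unsupported" ; return 1 ;;
  esac
}

cmd_dev_example Solaris
```

The function only encodes the naming pattern; it does not verify that the device exists or is a valid command device.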
CA (remote CA, two hosts) command examples Commands from HOSTA in the figure on page 285 The following examples employ CA commands from HOSTA. • Designate a group name (Oradb) and a local host P-VOL: # paircreate -g Oradb -f never -vl This command begins a pair coupling between the volumes designated as Oradb in the configuration definition file and begins copying the two pairs (in the example configuration).
Commands from HOSTB in the figure on page 285 The following examples employ CA commands from HOSTB. • Designate a group name and a remote host P-VOL: # paircreate -g Oradb -f never -vr This command begins a pair coupling between the volumes designated as Oradb in the configuration definition file and begins copying the two pairs (in the example configuration).
CA configuration (local loopback, two hosts)
(Figure: HOSTA (IP address HST1) and HOSTB (IP address HST2) are connected by a LAN. Both hosts run RM with their own configuration files and connect through SCSI ports to the same disk array, where the CA pair loops back locally.)
Configuration file for HOSTA on page 290 (/etc/horcm.conf)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000
HORCM_CMD
#dev_name
/dev/xxx     (See “Note” on page 286)
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2
HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm
Configuration file for HOSTB on page 290 (/etc/horcm.
CA (local loopback, two hosts) command examples Commands from HOSTA in the figure on page 290 The following examples employ RM commands from HOSTA. • Designate a group name (Oradb) and a local host P-VOL: # paircreate -g Oradb -f never -vl This command begins a pair coupling between the volumes designated as Oradb in the configuration definition file and begins copying the two pairs (in the example configuration).
Commands from HOSTB in the figure on page 290 The following examples employ RM commands from HOSTB. • Designate a group name and a remote host P-VOL: # paircreate -g Oradb -f never -vr This command begins a pair coupling between the volumes designated as Oradb in the configuration definition file and begins copying the two pairs (in the example configuration).
CA configuration (two RM instances, one host)
(Figure: HOSTA (IP address HST1) runs two RM instances, HORCMINST0 and HORCMINST1, each with its own configuration file, connected through SCSI ports to the disk array.)
Configuration file for HOSTA, Instance 0 shown on page 294 (/etc/horcm0.conf)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm0    1000         3000
HORCM_CMD
#dev_name
/dev/xxx     (See “Note” on page 286)
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2
HORCM_INST
#dev_group   ip_address   service
Oradb        HST1         horcm1
Configuration file for HOSTA, Instance 1 shown on page 294 (/etc/horcm1.
CA (two RM instances, one host) command examples Commands from HOSTA, Instance 0 in the figure on page 294 The following examples employ RM commands from HOSTA, Instance 0. • Set the instance number. (If C shell) # setenv HORCMINST 0 (Windows NT/2000/2003) set HORCMINST=0 • Designate a group name (Oradb) and a local instance P-VOL: # paircreate -g Oradb -f never -vl This command begins a pair coupling between the two pairs of volumes designated as Oradb in the configuration definition file.
Commands from HOSTA, Instance 1 in the figure on page 294 The following examples employ RM commands from HOSTA, Instance 1. • Set the instance number. (If C shell) # setenv HORCMINST 1 (Windows NT/2000/2003) set HORCMINST=1 • Designate a group name and a remote instance P-VOL: # paircreate -g Oradb -f never -vr This command begins a pair coupling between the two pairs of volumes designated as Oradb in the configuration definition file.
BC configuration
(Figure: HOSTA (HST1), HOSTB (HST2), HOSTC (HST3), and HOSTD (HST4) are connected by a LAN. Each host runs RM with its own configuration file and accesses the disk array through SCSI ports; the device files (/dev/rdsk/c1t0d1, /dev/rdsk/c1t2d1, /dev/rdsk/c1t2d2, and so on) identify the P-VOLs and the BC S-VOLs for groups Oradb, Oradb1, and Oradb2.)
Configuration file for HOSTA shown on page 298 (/etc/horcm.
Configuration file for HOSTC shown on page 298 (/etc/horcm.conf)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST3          horcm     1000         3000
HORCM_CMD
#dev_name
/dev/xxx     (See “Note” on page 286)
HORCM_DEV
#dev_group   dev_name    port#   TargetID   LU#   MU#
Oradb1       oradev1-1   CL2-C   2          1
Oradb1       oradev1-2   CL2-C   2          2
HORCM_INST
#dev_group   ip_address   service
Oradb1       HST1         horcm
Configuration file for HOSTD shown on page 298 (/etc/horcm.
BC command examples Commands from HOSTA shown on page 298 (group Oradb) • Set the HORCC_MRCF environment variable. (If C shell) # setenv HORCC_MRCF 1 (Windows NT/2000/2003) set HORCC_MRCF=1 • Designate a group name (Oradb) and a local host P-VOL: # paircreate -g Oradb -vl This command begins a pair coupling between the two pairs of volumes designated as Oradb in the configuration definition file.
Commands from HOSTB shown on page 298 (group Oradb) • Set the HORCC_MRCF environment variable. (If C shell) # setenv HORCC_MRCF 1 (Windows NT/2000/2003) set HORCC_MRCF=1 • Designate a group name and a remote host P-VOL: # paircreate -g Oradb -vr This command begins a pair coupling between the two pairs of volumes designated as Oradb in the configuration definition file.
Commands from HOSTA shown on page 298 (group Oradb1) • Set the HORCC_MRCF environment variable. (If C shell) # setenv HORCC_MRCF 1 (Windows NT/2000/2003) set HORCC_MRCF=1 • Designate a group name (Oradb1) and a local host P-VOL: # paircreate -g Oradb1 -vl This command begins a pair coupling between the two pairs of volumes designated as Oradb1 in the configuration definition file.
Commands from HOSTC shown on page 298 (group Oradb1) • Set the HORCC_MRCF environment variable. (If C shell) # setenv HORCC_MRCF 1 (Windows NT/2000/2003) set HORCC_MRCF=1 • Designate a group name and a remote host P-VOL: # paircreate -g Oradb1 -vr This command begins a pair coupling between the two pairs of volumes designated as Oradb1 in the configuration definition file.
Commands from HOSTA shown on page 298 (group Oradb2) • Set the HORCC_MRCF environment variable. (If C shell) # setenv HORCC_MRCF 1 (Windows NT/2000/2003) set HORCC_MRCF=1 • Designate a group name (Oradb2) and a local host P-VOL: # paircreate -g Oradb2 -vl This command begins a pair coupling between the two pairs of volumes designated as Oradb2 in the configuration definition file.
Commands from HOSTD shown on page 298 (group Oradb2) • Set the HORCC_MRCF environment variable. (If C shell) # setenv HORCC_MRCF 1 (Windows NT/2000/2003) set HORCC_MRCF=1 • Designate a group name and a remote host P-VOL: # paircreate -g Oradb2 -vr This command begins a pair coupling between the two pairs of volumes designated as Oradb2 in the configuration definition file.
Configuration for a BC cascaded connection
(Figure: HOSTA (IP address HST1) runs two RM instances, HORCMINST0 and HORCMINST1, each with its own configuration file, connected through SCSI ports to the disk array holding the cascaded BC volumes.)
Configuration file for HOSTA shown on page 307 (/etc/horcm0.
BC cascaded connection command examples Commands from HOSTA, Instance 0 shown on page 307 The following examples employ RM commands from HOSTA, Instance 0. • When the command execution environment is not set, set the instance number. (If C shell) # setenv HORCMINST 0 (Windows NT/2000/2003) set HORCMINST=0 • Set the HORCC_MRCF environment variable.
Commands from HOSTA, Instance 1 shown on page 307 The following examples employ RM commands from HOSTA, Instance 1. • Set the instance number. (If C shell) # setenv HORCMINST 1 (Windows NT/2000/2003) set HORCMINST=1 • Set the HORCC_MRCF environment variable.
Configuration for a CA/BC cascaded connection
(Figure: HOSTA (IP address HST1) runs HORCMINST; HOSTB (IP address HST2) runs HORCMINST and HORCMINST0, each instance with its own configuration file. HOSTA accesses device files /dev/rdsk/c0t1d1, /dev/rdsk/c0t1d2, and /dev/rdsk/c0t0d1; HOSTB accesses /dev/rdsk/c1t2d1, /dev/rdsk/c1t2d2, /dev/rdsk/c1t3d1, /dev/rdsk/c1t3d2, and /dev/rdsk/c1t0d1 through SCSI ports on the CA-linked disk arrays.)
Configuration file for HOSTA shown on page 311 (/etc/horcm.conf)
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000
HORCM_CMD
#dev_name
/dev/xxx     (See “Note” on page 286)
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-A   1          1
Oradb        oradev2    CL1-A   1          2
HORCM_INST
#dev_group   ip_address   service
Oradb        HST2         horcm
Oradb        HST2         horcm0
Configuration file for HOSTB shown on page 311 (/etc/horcm.
Configuration file for HOSTB shown on page 311 (/etc/horcm0.
CA/BC cascaded connection command examples Commands from HOSTA and HOSTB shown on page 311 The following examples employ RM commands from HOSTA and HOSTB. • Set the HORCC_MRCF environment variable.
Commands from HOSTB shown on page 311 The following examples employ RM commands from HOSTB. • Set the HORCC_MRCF environment variable.
• Designate a group name and confirm BC pair states from HOSTB: # pairdisplay -g oradb1 –m cas Group PairVol(L/R) (Port#,TID,LU-M), oradb1 oradev11(L) (CL1-D, 2, 1-0) oradb2 oradev21(L) (CL1-D, 2, 1-1) oradb oradev1(L) (CL1-D, 2, 1) oradb1 oradev11(L) (CL1-D, 3, 1-0) oradb1 oradev12(L) (CL1-D, 2, 2-0) oradb2 oradev22(L) (CL1-D, 2, 2-1) oradb oradev2(R) (CL1-D, 2, 2) oradb1 oradev12(R) (CL1-D, 3, 2-0) Seq#, LDEV#..P/S, Status, Seq#, 30053 268..P-VOL PAIR 30053 30053 268..SMPL -------30053 268..
Two-host BC configuration These two RM configuration files illustrate how to configure a two-host BC. Each host will run one instance of RM. File 1 # This is the RaidManager Configuration file for host blue. # It will manage the PVOLs in the Business Copy pairing.
The RM configuration files show one RM group defined. The group, Group1, contains two disks. The comments note that system blue defines the P-VOLs and system yellow defines the S-VOLs. However, the P-VOL/S-VOL relationship is actually set when the paircreate command is issued.
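Because the configuration files themselves are symmetric, the pairing direction is chosen only by which host issues paircreate and with which flag. A minimal sketch (the helper function is invented for illustration; Group1 is the group name from the sample files):

```shell
# Build the paircreate invocation for the side you run it from:
# -vl makes the issuing host's volumes the P-VOLs, -vr the remote host's.
build_paircreate() {            # $1 = group name, $2 = local | remote
  side="-vl"
  [ "$2" = "remote" ] && side="-vr"
  echo "paircreate -g $1 $side"
}

build_paircreate Group1 local   # issued from blue: blue's disks become P-VOLs
```

Issuing the resulting command from host blue with -vl (or from host yellow with -vr) makes blue's disks the P-VOLs.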
Two BC mirror configuration These two RM configuration files illustrate how to configure two BC mirrors of the same P-VOLs. File 1 # This is the Raid Manager Configuration file for host blue. # It will manage the PVOLs in the Business Copy pairing.
A one-host configuration differs from a two-host configuration as follows:
• The host names for the local and remote instances are the same.
• The poll value under the HORCM_MON section for the S-VOL configuration file is –1.
When creating more than one BC of the same P-VOL, the mirror unit column in the HORCM_DEV section must be filled in for the P-VOL configuration. Do not fill it in for the S-VOL configuration. If the mirror unit column is not filled in, the default value is 0.
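The one-host differences above can be seen in a minimal S-VOL instance file fragment. This is only a sketch: the host name, service name, group, and path are illustrative, not taken from the original samples.

```shell
# Sketch of a one-host S-VOL instance file fragment: the same host name
# appears on both sides, and poll is set to -1 for the S-VOL instance.
cat > /tmp/horcm1.conf <<'EOF'
HORCM_MON
#ip_address  service  poll(10ms)  timeout(10ms)
blue         horcm1   -1          3000

HORCM_INST
#dev_group   ip_address   service
Group1       blue         horcm0
EOF

# Show the poll value of the S-VOL instance (third field of the HST line).
awk '/^blue/ {print $3}' /tmp/horcm1.conf
```

A real file would also need HORCM_CMD and HORCM_DEV sections; the fragment only illustrates the two points listed above.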
Three-host BC configuration These three RM configuration files illustrate how to configure a three-host BC. Each host will run one instance of RM. File 1 # This is the Raid Manager configuration file for host blue. #It will manage the PVOLs in the Business Copy pairing.
File 3 # This is the Raid Manager Configuration file for host green. # It will manage the SVOLs in the Business Copy pairing.
Device group configuration This RM configuration file shows how to configure two device groups that belong to different unit IDs (disk arrays). File 1 HORCM_MON #ip_address HST1 service horcm HORCM_CMD #unitID 0... (seq#30014) #dev_name dev_name /dev/rdsk/c0t0d0 #unitID 1...
B HA Failover and failback This appendix covers high availability (HA) failover and failback sequences.
Using RAID Manager in HA environments When using HA software (such as MC/ServiceGuard or Cluster Extension XP), application packages can be transferred to the takeover host node at any time. If the application package transfer operation is performed in an environment where CA is used, you may need to switch the CA secondary volumes to primary volumes. The horctakeover command provides this function.
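An HA package-start hook might wrap horctakeover along these lines. This is only a sketch under stated assumptions: the function name, the -t 300 timeout, and the log messages are invented for illustration and are not part of any MC/ServiceGuard or Cluster Extension XP package template.

```shell
# Run horctakeover for the package's CA group and branch on the result.
# horctakeover itself decides whether a swap-, SVOL-, or PVOL-takeover
# is appropriate for the current pair state.
package_takeover() {            # $1 = CA group, $2 = RM instance number
  HORCMINST="$2" horctakeover -g "$1" -t 300
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "takeover OK for $1"
  else
    echo "takeover failed for $1: rc=$rc"
    return 1
  fi
}
```

A production script would also map the nonzero return values (for example EX_ENORMT, EX_VOLCUR) to distinct recovery actions, as discussed later in this appendix.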
state No Volume Attributes and Pair Status DC1(DC2) 1 2 3 4 4-1 5 6 8 9 10 11 12 13 14 15 DC2(DC1) SMPL P-VOL SMPL or SVOL-PSUS (SSWS) SMPL P-VOL data or status && S-VOL PSUE or PDUB Other data or status && Unknown PSUE or PDUB Other SMPL P-VOL COPY 23 PAIR/ PFUL 24 PSUS PFUS PSUE PDUB PAIR Horctakeover result STATUS SMPL EX_VOLCRE XXX Nop EX_EVOLCE EX_ENORMT or EX_CMDIOE SMPL EX_EVOLCE SVOL_YYY XXX XXX PVOL_XXX EX_ENORMT or EX_CMDIOE EX_EVOLCE PVOL_XXX XXX XXX PAIR/PFUL PSUS PFUS PSUE dat
Table terms:
XXX: Pair status of the P-VOL returned by the pairvolchk –s or pairvolchk –s –c command.
YYY: Pair status of the S-VOL returned by the pairvolchk –s or pairvolchk –s –c command.
PAIR STATUS: Since the P-VOL controls status, PAIR STATUS is reported as PVOL_XXX (except when the P-VOL's status is Unknown).
XP256 microcode 52-47-xx and later, XP512/48 microcode 10-00-xx and later, XP1024/XP128, XP10000, XP12000: With newer firmware, a horctakeover results in an S-VOL in SSWS state, so that a delta copy is all that is required at failback. This functionality is known as “fast failback” and is accomplished via the –swaps or –swapp option of pairresync.
SVOL-SSUS-takeover or swap-takeover: In the case of a host failure, this function executes a swap-takeover.
Failback after SVOL-SMPL takeover
This failover situation occurs, for instance, when:
• The original P-VOL status is unavailable.
• The S-VOL was changed to SMPL and cannot fail back.
The Host B (DC1) sequence illustrated by the following figures is required to change the SMPL volume to pvol_pair and make it suitable for failback. From the Data Center 1 (DC1) side, the required steps are:
1. pairsplit –S
2. paircreate –vl
3.
• When the DC2 volume becomes a svol_pair, it executes a swap-takeover to become a pvol_pair.
(Figure: Host B issues pairsplit -S, leaving both volumes SMPL (State No. 1); paircreate -vl starts the copy (P-VOL COPY / S-VOL COPY, State No. 15); pairevtwait waits until both sides reach PAIR (State No. 16); horctakeover then swaps the roles (State No. 16).)
• If DC2 attempts a failback while the DC1 volume is still SMPL, it is a State 10 situation.
• If a takeover operation is attempted while both volumes are SMPL (State 1), an EX_VOLCRE error results. If pairvolchk is executed during a volume group split, it would likely return an EX_ENQVOL error, indicating that the statuses of the volumes in the group do not match.
(Figure: after pairsplit -S, both the DC1 and DC2 volumes are SMPL, State No. 1.)
• If a takeover is needed during State 15 (copy), the HA script could either run pairevtwait to wait for PAIR state, or prompt for system administrator intervention.
• An attempt to do a takeover prior to all group volumes reaching PAIR state (svol_copy) results in an SVOL-takeover and an EX_VOLCUR error.
(Figure: Host A holds the S-VOL and Host B the P-VOL in PAIR/COPY state, State No. 16.)
• The HA script should prompt you for a decision before attempting a takeover in SVOL_PSUS (stale data) State 17, because it will result in an SVOL-takeover and an EX_VOLCUR error.
(Figure: Host A holds the P-VOL and Host B the S-VOL, both in PSUS state.)
• The horctakeover command will fail with an EX_ENORMT error in the following nested failure case (State No. 4 → 9). Therefore, the HA Control Script should prompt you for a decision and not change the volume state on the DC1 side.
(Figure: starting from State No. 16 (DC1 S-VOL, DC2 P-VOL), Host B fails (State No. 23 → 4), and then the DC2 site fails (State No. 4 → 9), leaving the DC1 volume SMPL and the P-VOL unreachable.)
PVOL-PSUE takeover
The horctakeover command executes a PVOL-PSUE-takeover when the primary volume cannot report status or refuses writes (for example, data fence).
• When the P-VOL status is PSUE (or PDUB), the horctakeover command returns a PVOL-PSUE-takeover value at exit().
• A PVOL-PSUE-takeover forces the primary volume to the suspend state (PSUE or PDUB → PSUE*, PAIR → PSUS), which permits WRITEs to all primary volumes of the group. The following illustrates how volumes in the same volume group may be of different status.
• Even if connected to the ESCON/FC link, PVOL-PSUE-takeover changes only the active P-VOL/S-VOLs to suspend state.
(Figure: after an S-VOL failure on the Host C side, the P-VOL group on Host A shows mixed PAIR/PSUE status; after horctakeover, the P-VOL group shows PSUS and PSUE* while the S-VOL group on Host B retains PSUE/PAIR.)
The result of the pvol_psue takeover is that PSUE and PSUS status is intermingled within the group.
Recovery after PVOL-PSUE-takeover

The special PSUE* state can be returned to the PAIR state by issuing the pairresync command (after recovery of the ESCON/FC link) instead of the horctakeover command.
SVOL-SSUS-takeover in the case of ESCON/FC link and host failure

An SVOL-takeover executes an SVOL-SSUS-takeover so that S-VOL writing is enabled without going to the SMPL state. An SVOL-SSUS-takeover changes the secondary volume to the suspend (PAIR, PSUE → SSUS) state, which permits WRITE and delta-data maintenance (BITMAP) for all secondary volumes of the group.
The special state combination (PVOL_PSUE and SVOL_PSUS) between the P-VOL and S-VOL may need to be handled by HA control scripts.

Async-CA specific behavior

Before the S-VOL is changed to SSUS, an SVOL-takeover will try to copy non-transmitted data, which remains in the FIFO queue (sidefile) of the P-VOL, to the S-VOL side. In the case of an ESCON/FC link failure, this data synchronization operation may fail. Even so, the SVOL-takeover function performs the forced split to SSUS and enables use of the secondary volume.
(Figure: after running pairresync -swaps on Host B only, the volumes are swapped and a delta copy brings the group from COPY to PAIR.) If the pairresync -swaps command fails because the ESCON/FC link is not yet restored, then the special state (PVOL_PSUE and SVOL_PSUS) is not changed.
Failback without recovery on Host B

The following recovery procedure is necessary if, after host and ESCON/FC link recovery, you stop the application without executing the pairresync -swaps command on Host B and restart the application on Host A. At that point, the pairvolchk command on Host A will return PVOL_PSUE and SVOL_PSUS as the state combination.
SVOL-takeover in the case of a host failure

After SVOL-takeover changes the S-VOL (only) to the suspend (PAIR, PSUE → SSUS) state, the SVOL-takeover will automatically execute the pairresync -swaps command to copy data between the new P-VOL and the new S-VOL. The horctakeover command returns a swap-takeover.

Async-CA specific behavior

Before the S-VOL is changed to SSUS, the SVOL-takeover operation will copy non-transmitted data (which remains in the P-VOL sidefile) to the S-VOL.
Another case of SVOL-takeover

An SVOL-takeover from Host B to Host D will do nothing because the S-VOL was already in SSWS state. (Figure: after horctakeover on Host B, a second horctakeover on Host D finds the S-VOLs already suspended in SSUS.)

S-VOL data consistency function

The consistency of the data within a pair is determined by the pair status and the fence level of the pair.
Object volume  Status  Fence   paircurchk (Currency)                      SVOL_Takeover
SMPL           —       —       Needs to be confirmed                      —
P-VOL          —       —       Needs to be confirmed                      —
S-VOL          COPY    data    Inconsistent                               Inconsistent
               COPY    status  Inconsistent (due to out-of-order copying) Inconsistent
               COPY    never   Inconsistent                               Inconsistent
               COPY    async   Inconsistent                               Inconsistent
               PAIR    data    OK                                         OK
               PAIR    status  OK                                         OK
               PAIR    never   Must be analyzed                           To be analyzed
               PAIR    async   Must be analyzed                           OK (assumption)
               PFUL    async   To be analyzed                             OK (assumption)
               PSUS    data    Suspect                                    Suspect
               PSUS    status  Suspect                                    Suspect
               PSUS    never   Suspect                                    Suspect
               PSUS    async   Suspect                                    Suspect
               SSWS    data    Suspect                                    —
               SSWS    status  Suspect                                    —
               SSWS    never   Suspect                                    —
               SSWS    async   Suspect                                    —

Terms:
Inconsistent      Data in the volume is inconsistent because it is being copied.
Suspect           The primary volume data and secondary volume data are not consistent (the same).
Must be analyzed  It cannot be determined from the status of the secondary volume whether the data is consistent. It is "OK" if the status of the primary volume is PAIR.
Takeover-switch function

The takeover command, when activated manually or by a control script, checks the attributes of the volumes on the local and remote disk arrays to determine the proper takeover action. The table below shows the takeover actions.
Local node (takeover) volume attribute: S-VOL (secondary).

Fence and status            Remote node volume attribute   P-VOL status   Takeover action
Status == SSWS
(after SVOL-SSUS-takeover)  Don't care                     —              Nop-Takeover**
Others                      SMPL                           —              Volumes unconformable
Others                      P-VOL                          PAIR or PFUL   Swap-Takeover*
Others                      P-VOL                          Others         SVOL-Takeover*
Others                      S-VOL                          —              Volumes unconformable
Others                      Unknown                        —              SVOL-Takeover*

Terms:
nop-takeover  No operation is done, though the takeover command is accepted.
Unknown  The attribute of the remote node is unknown. This means the remote node system has failed or cannot communicate.

Swap-takeover function

The designations of the primary and secondary volumes can be swapped when the P-VOL of the remote disk array is in the PAIR or PFUL (async-CA and over HWM) state and the mirror consistency of the S-VOL data has been assured. The takeover command internally executes the commands needed to swap the designations of the primary and secondary volumes.
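The takeover-switch decision table above can be expressed as a small function. This is a sketch of the selection logic only; the attribute and status strings follow the table, and the function itself is not part of RM.

```shell
#!/bin/sh
# Select a takeover action from the local S-VOL status, the remote
# volume attribute, and (when the remote is a P-VOL) its status.
takeover_action() {
  local_status=$1 remote_attr=$2 pvol_status=$3
  if [ "$local_status" = SSWS ]; then
    # After an SVOL-SSUS-takeover the command is accepted but does nothing.
    echo "Nop-Takeover"; return
  fi
  case "$remote_attr" in
    SMPL|S-VOL) echo "Volumes unconformable" ;;
    P-VOL) case "$pvol_status" in
             PAIR|PFUL) echo "Swap-Takeover" ;;
             *)         echo "SVOL-Takeover" ;;
           esac ;;
    Unknown)    echo "SVOL-Takeover" ;;
  esac
}

takeover_action PAIR P-VOL PAIR   # prints "Swap-Takeover"
```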
XP256 microcode 52-47-xx and over; XP512/48 microcode 10-00-xx and over; XP1024/128; XP10000; XP12000

The swap-takeover function no longer uses "Simplex" and "No Copy" mode for swapping. This assures greater mirror consistency. Moreover, it is included as a function of SVOL-takeover.
1. The command orders a "suspend for swapping" (SSWS) for the local volume (S-VOL). If this step fails, the swap-takeover function is disabled and returns an error.
2.
3. The swap operation is performed. The swap operation must copy non-transmitted P-VOL data within the timeout value specified by the -t timeout option.
4. The swap command returns after synchronization between the P-VOL and S-VOL.

XP256 microcode 52-47-xx and over; XP512/48 microcode 10-00-xx and over; XP1024/128; XP10000; XP12000

1. The S-VOL-side RM issues a "suspend for swapping" to the S-VOL-side disk array.
2.
This function returns SVOL-SSUS-takeover as the return value of the horctakeover command. In the case of a host failure, this function returns swap-takeover. If an ESCON/FC link or P-VOL site failure occurs, this function returns SVOL-SSUS-takeover. If SVOL-takeover is specified for a group, the data consistency check is executed for all volumes in the group. Inconsistent volumes are displayed in the execution log file.
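A sketch of how a script might act on which takeover horctakeover performed. The stub and the result strings are illustrative assumptions; the real command conveys the takeover type through its return value.

```shell
#!/bin/sh
# Hypothetical stand-in for horctakeover; assume it reports which
# takeover it performed as a string.
horctakeover_stub() { echo "$TAKEOVER_RESULT"; }

after_takeover() {
  case "$(horctakeover_stub)" in
    swap-takeover)
      echo "host failure: volume designations swapped, resume service" ;;
    SVOL-SSUS-takeover)
      echo "link or P-VOL site failure: S-VOL split to SSUS;"
      echo "run pairresync -swaps after the link recovers" ;;
    *)
      echo "check the execution log file for inconsistent volumes" ;;
  esac
}

TAKEOVER_RESULT=SVOL-SSUS-takeover after_takeover
```

The distinction matters operationally: a swap-takeover needs no follow-up, while an SVOL-SSUS-takeover leaves delta data to be resynchronized once the link is restored.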
PVOL-takeover function

The PVOL-takeover function terminates the PAIR state of a pair or group. The takeover node is given unrestricted and exclusive access to the primary volume (reading and writing are enabled), on the assumption that the remote node (controlling the S-VOL) is unavailable or unreachable. The PVOL-takeover function has two roles:
• PVOL-PSUE-takeover puts the P-VOL into the PSUE state, which permits WRITE access to all primary volumes of that group.
Recovery procedures of HA system configuration

After installing CA, the system administrator should conduct operation tests on the assumption that system failures may occur. In normal operation, service personnel obtain failure-cause information from the SVP. However, the CA commands may also give error information.

XP256 microcode 52-47-xx and under; XP512/48 microcode 10-00-xx and under

The following figure shows a diagram of system failure and recovery.
2. Host B detects the failure in host A and issues the takeover command to make the S-VOL usable. If the S-VOL can continue processing, host B takes over from host A and continues processing.
3. While host B is processing, the P-VOL and S-VOL can be swapped using a full copy (pairsplit -S, paircreate -vl), and the data updated by host B is fed back to the new S-VOL (host A).
4. When host A recovers from the failure, host A takes over processing from host B through the horctakeover swap-takeover command.
Scenario
1. A failure occurs in host A or in the P-VOL.
2. Host B detects the failure in host A and issues the takeover command to make the S-VOL usable. Host B takes over from host A and continues processing. In the case of a host A failure, the takeover command executes a swap-takeover. In the case of a P-VOL failure, the takeover command executes an SVOL-SSUS-takeover.
3.
Regression and recovery of CA

The figure below shows a diagram of regression and recovery where horctakeover is not needed. (Figure: in the mirroring state, hosts A and B run with the volumes paired; when the S-VOL goes down, operation continues in the regression state; when the S-VOL recovers, mirroring is restored.)
3. The S-VOL or the link recovers from the failure. Host A issues the pairsplit -S, paircreate -vl, or pairresync command to update the P-VOL data by copying all data, or by copying differential data only. The updated P-VOL data is fed back to the S-VOL.

CA recovery procedures

Follow these steps to recover CA operations:
1. If an error occurs in writing paired volumes (for example, pair suspension), the server software using the volumes detects the error, depending on the fence level of the paired volume.
2. For errors for which you can take no action, check the files in the log directory and contact HP.

Failure to activate the RAID Manager instance

A failure to activate RM on a new system can be caused by an incorrect environment setting and/or configuration file definition. Check the activation log file and take any necessary actions.
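The resynchronize-and-wait step of the CA recovery procedure above can be sketched as follows. pairresync and pairevtwait are stubbed so the fragment is self-contained; the option spellings follow the surrounding text but are assumptions here, not a verified invocation.

```shell
#!/bin/sh
# Stubs standing in for the RM commands so the flow can run anywhere.
pairresync()  { echo "pairresync $*";  return "${RESYNC_RC:-0}"; }
pairevtwait() { echo "pairevtwait $*"; return 0; }

recover_group() {
  g=$1
  if pairresync -g "$g"; then
    # Differential copy started; wait until the group returns to PAIR.
    pairevtwait -g "$g" -s pair -t 3600
    echo "group $g: back to PAIR"
  else
    echo "group $g: resync failed; full copy needed (pairsplit -S, paircreate -vl)"
  fi
}

recover_group VG01
```

The fallback branch mirrors the procedure in the text: when a differential resync is not possible, the pair is rebuilt with a full copy.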
C Fibre Channel addressing

This appendix provides Fibre Channel conversion tables for these operating systems:
• HP-UX
• Sun Solaris
• Microsoft Windows NT
• Microsoft Windows 2000
• Microsoft Windows 2003
• OpenVMS
Fibre Channel address conversions

RM converts the Fibre Channel physical address to a target ID using the conversion tables presented on the following pages.
HP-UX Fibre Channel address conversion

(Table: for each of the eight buses C0 through C7, maps the arbitrated-loop physical address (AL_PA) to the target ID (TID 0-15); for example, on C0, AL_PA EF maps to TID 0, E8 to TID 1, and so on through CE to TID 15.)
Sun Solaris Fibre Channel address conversion

(Table: maps AL_PA to TID across buses C0 through C7; for example, AL_PA EF maps to TID 0, CD to TID 16, B2 to TID 32, 98 to TID 48, 72 to TID 64, 55 to TID 80, 3A to TID 96, and 25 to TID 112.)
Windows NT/2000 Fibre Channel address conversion (QLogic or Emulex driver)

(Table: for each physical bus PhId1 (C1) through PhId5 (C5), maps AL_PA values to target IDs.)
HP StorageWorks Disk Array XP RAID Manager: User’s Guide
D STDIN file formats

This appendix provides the format specifications for the STDIN or device special files.
The STDIN or device special files are specified in the following formats:

MPE/iX   /dev/...
HP-UX    /dev/rdsk/*
Solaris  /dev/rdsk/*s2 or c*s2
Linux    /dev/sd... or /dev/rd...
E Porting notice for MPE/iX

This appendix describes operating system requirements, restrictions, and known issues for MPE/iX.
Porting notice for MPE/iX

Introduction

MPE/iX does not fully support POSIX in the way UNIX does; therefore, RAID Manager has some restrictions on MPE/iX. The system calls that are not supported on MPE/iX (wait3(), gettimeofday(), and so on) are implemented in LIB BSD; however, RM has to avoid using LIB BSD because it is available only as free software. These functions are therefore implemented within RM. RM has accomplished the port within standard POSIX for MPE/iX only.
HORCM daemon startup

HORCM can start as a daemon process from a UNIX shell. In MPE/iX, however, if a parent process exits, any child process dies at the same time; in other words, MPE/iX POSIX apparently cannot launch a daemon process from a POSIX shell. Therefore, horcmstart.sh has been changed to wait until HORCM has exited after starting horcmgr. Following the rules for MPE/iX, horcmstart.sh is run as an MPE JOB.
Installing

Since MPE/iX POSIX is unable to execute cpio to extract a file, the RM product is provided as a tar file. For further information about installing RAID Manager on MPE/iX systems, see "Installing RAID Manager on MPE/iX systems" on page 34.

Uninstalling

The RMuninst (rm -rf /$instdir/HORCM) command cannot remove the directory (/HORCM/log*/curlog only) while HORCM is running. For more details, see "Cannot remove directories using the rm -rf /users/HORCM command" on page 372.
This problem is resolved by RM010904(3), which supports a traffic control method for the MPE socket. The traffic control method limits how many packets are sent for multiple commands at the same time; excess packets are queued (FIFO) until the next packets can be sent. The queued packets are sent after a reply is received for the message already sent. This method controls the number of packets that are sent to the remote host at the same time.
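A toy model of the traffic-control method described above: commands queue in FIFO order and the next packet goes out only after the reply to the previous one. send and reply are simulated with echo; nothing here is RM code.

```shell
#!/bin/sh
# Queue of pending messages (space-separated, FIFO order).
queue=""

enqueue() { queue="$queue $1"; }

pump() {
  # Send one message at a time; a reply must arrive before the next send.
  while [ -n "${queue# }" ]; do
    set -- $queue
    msg=$1; shift
    queue="$*"
    echo "send $msg"
    echo "reply $msg"   # simulated reply releases the next packet
  done
}

enqueue cmd1; enqueue cmd2; enqueue cmd3
pump
```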
shell/iX> callci dstat
LDEV-TYPE          STATUS    VOLUME     VOLUME SET - GEN
99-OPEN-3-CVS      UNKNOWN
100-OPEN-3-CVS     MASTER    MEMBER100  PVOL100-0
101-OPEN-3-CVS     MASTER    MEMBER101  PVOL101-0
102-OPEN-3-CVS     MASTER    MEMBER102  PVOL102-0
103-OPEN-3-CVS-C   MASTER    MEMBER103  PVOL103-0

Regarding "multiple capability" of the SCSI path-thru driver

When other commands are executed via the SCSI path-thru driver, HORCM is blocked until the other commands have completed.
You cannot remove the /tmp/curlog directory even if you use the mv /users/HORCM/log*/curlog /tmp command.

MPE/iX startup procedures

Make a JOB control file

The following is an example of a JOB control file named JRAIDMR0 (HORCMINST=0):

!job jraidmr0, manager.sys;pri=cs
!setvar TZ "PST8PDT"
!xeq sh.hpbin.sys '/HORCM/usr/bin/horcmstart.
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
HORCM_INST
#dev_group   ip_address   service

You will have to start HORCM without a description for HORCM_DEV and HORCM_INST because the target ID and LUN are unknown. You will be able to determine the mapping between a physical device and a logical device (an ldev, in MPE/iX terms) by using raidscan -find. Execute an "horcmstart.
Describe the known HORCM_DEV and HORCM_INST in /etc/horcm*.conf:

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
DSG1         dsvol0     CL1-L   0          1     0
DSG1         dsvol1     CL1-L   0          2     0
DSG1         dsvol2     CL1-L   0          3     0

HORCM_INST
#dev_group   ip_address   service
DSG1         HOSTB        horcm1

Restart "horcmstart.sh 0" as a JOB:

shell/iX> horcmshutdown.
F Porting notice for OpenVMS

This appendix describes operating system requirements, restrictions, and known issues for OpenVMS.
Porting notice for OpenVMS

Introduction

RM uses the UNIX domain socket for IPC (interprocess communication). Because OpenVMS does not support the AF_UNIX socket, RAID Manager uses the OpenVMS mailbox driver for interprocess communication between RAID Manager commands and the HORCM daemon.

Requirements and restrictions

Version of OpenVMS

RM uses CRTL and needs the following version to support the ROOT directory for POSIX:
• OpenVMS Version 7.
HORCM daemon startup

In OpenVMS, horcmstart.exe is created as a detached process or batch job by using the DCL command.

Using the detached process: If you want the HORCM daemon to run in the background, you will need to create the detached LOGINOUT.EXE process by using the RUN /DETACHED command. You will also need to make a command file for LOGINOUT.EXE. The following are examples of the loginhorcm*.com file given to SYS$INPUT for LOGINOUT.EXE. They show that VMS4$DKB100:[SYS0.SYSMGR.
$ run /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm1
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]loginhorcm1.com
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.out
_$ /ERROR=VMS4$DKB100:[SYS0.SYSMGR.][horcm]run1.err
%RUN-S-PROC_ID, identification of created process is 00004166

You can verify that the HORCM daemon is running as a detached process by using the SHOW PROCESS command.

$ show process horcm0
25-MAR-2003 23:27:27.
Example:

$ show device
Device Name         Device Status   Error Count   Volume Label   Free Blocks   Trans Cnt   Mnt Count
VMS4$DKB0:          Online          0
VMS4$DKB100:        Mounted         0             ALPHASYS       30782220      414         1
VMS4$DKB200:        Online          0
VMS4$DKB300:        Online          0
VMS4$DQA0:          Online          0
$1$DGA145: (VMS4)   Online          0
$1$DGA146: (VMS4)   Online          0
:
:
$1$DGA153: (VMS4)   Online          0
$

$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:

-zx option for RAID Manager commands und
Startup log files

Under OpenVMS, RAID Manager has two startup log files, which are distinguished by PID. For example, in the SYS$POSIX_ROOT:[HORCM.LOG*.CURLOG] directory:

HORCMLOG_VMS4
HORCM_VMS4_10530.LOG
HORCM_VMS4_10531.LOG

Option syntax and case sensitivity

RAID Manager commands are case sensitive. OpenVMS users need to change case sensitivity in LOGIN.COM. The following uppercase strings are not case sensitive.
Example

$ spawn /NOWAIT /PROCESS=horcm0 horcmstart 0
%DCL-S-SPAWNED, process HORCM0 spawned
$ starting HORCM inst 0
$ spawn /NOWAIT /PROCESS=horcm1 horcmstart 1
%DCL-S-SPAWNED, process HORCM1 spawned
$ starting HORCM inst 1
$

Note that the subprocess (HORCM, the RM daemon) created by spawn will be terminated when the terminal is logged off or the session is terminated. To run the process independently of LOGOFF, use the RUN /DETACHED command.
$ DEFINE/TRANSLATION=(CONCEALED,TERMINAL) SYS$POSIX_ROOT "Device:[directory]"
$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.bin], SYS$POSIX_ROOT:[horcm.etc]
$ DEFINE/TABLE=LNM$PROCESS_DIRECTORY LNM$TEMPORARY_MAILBOX LNM$GROUP
$ DEFINE DECC$ARGV_PARSE_STYLE ENABLE
$ SET PROCESS/PARSE_STYLE=EXTENDED

The Device:[directory] you choose will be defined as SYS$POSIX_ROOT.

To install RAID Manager:

Install RAID Manager by using the file HP-AXPVMS-RMXP-V0115-4-1.PCSI.
1. Insert and mount the installation media.
2.
Known issues and concerns

Rebooting on PAIR state (writing disabled)

OpenVMS does not show volumes with writing disabled (for example, SVOL_PAIR) at system startup; therefore, the S-VOLs are hidden when rebooting in PAIR state or SUSPEND mode. You can verify that the show device and inqraid commands do not show the S-VOLs after a reboot, as shown below (that is, the DGA148 and DGA150 devices are in the SVOL_PAIR state and are not displayed).
DKA152   CL1-H   30009   152   -   s/s/ss   0004   5:01-11   OPEN-9
DKA153   CL1-H   30009   153   -   s/s/ss   0004   5:01-11   OPEN-9

$ inqraid DKA148
sys$assign : DKA148 -> errcode = 2312
DKA148 -> OPEN: no such device or address

After enabling the S-VOL for writing by using either the pairsplit or horctakeover command, you will need to execute the mcr sysman command to use the S-VOLs for backup or disaster recovery.
Device Name         Device Status   Error Count   Volume Label   Free Blocks   Trans Cnt   Mnt Count
$1$DGA145: (VMS4)   Online          0
$1$DGA146: (VMS4)   Online          0
:
:
$1$DGA153: (VMS4)   Online          0
$

$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:

Defining the environment for RAID Manager in LOGIN.COM

You need to define the path for the RAID Manager commands to DCL$PATH as the foreign command.

$ DEFINE DCL$PATH SYS$POSIX_ROOT:[horcm.usr.
DKA145   CL1-H   30009   145   -   -        -      -         OPEN-9-CM
DKA146   CL1-H   30009   146   -   s/S/ss   0004   5:01-11   OPEN-9
DKA147   CL1-H   30009   147   -   s/P/ss   0004   5:01-11   OPEN-9
DKA148   CL1-H   30009   148   -   s/S/ss   0004   5:01-11   OPEN-9
DKA149   CL1-H   30009   149   -   s/P/ss   0004   5:01-11   OPEN-9
DKA150   CL1-H   30009   150   -   s/S/ss   0004   5:01-11   OPEN-9
DKA151   CL1-H   30009   151   -   s/P/ss   0004   5:01-11   OPEN-9

SYS$POSIX_ROOT:[etc]horcm0.conf

HORCM_MON
#ip_address   127.0.0.
Verifying physical mapping of the logical device

$ HORCMINST := 0
$ raidscan -pi DKA145-151 -find
DEVICE_FILE   UID   S/F   PORT    TARG   LUN   SERIAL   LDEV   PRODUCT_ID
DKA145        0     F     CL1-H   0      1     30009    145    OPEN-9-CM
DKA146        0     F     CL1-H   0      2     30009    146    OPEN-9
DKA147        0     F     CL1-H   0      3     30009    147    OPEN-9
DKA148        0     F     CL1-H   0      4     30009    148    OPEN-9
DKA149        0     F     CL1-H   0      5     30009    149    OPEN-9
DKA150        0     F     CL1-H   0      6     30009    150    OPEN-9
DKA151        0     F     CL1-H   0      7     30009    151    OPEN-9
$ horcmshutdown 0
inst 0: HORCM Shutdown inst 0
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          3     0
VG01         oradb2     CL1-H   0          5     0
VG01         oradb3     CL1-H   0          7     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTA        horcm0

The UDP port name for HORCM communication in "SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT" is defined as shown in the example below.

horcm0   30001/udp
horcm1   30002/udp

Starting "horcm 0" and "horcm 1" as detached processes

$ run /DETACHED SYS$SYSTEM:LOGINOUT.
$ show process horcm0
25-MAR-2003 23:27:27.72
User: SYSTEM            Process ID: 00004160
Node: VMS4              Process name: "HORCM0"
Terminal:               User Identifier: [SYSTEM]
Base priority: 4        Default file spec: Not available
Number of Kthreads: 1   Soft CPU Affinity: off

DCL command examples

1. Setting the environment variable by using a symbol:

$ HORCMINST := 0
$ HORCC_MRCF := 1
$ raidqry -l
No   Group   Hostname   HORCM_ver   Uid   Serial#
1    ---     VMS4       01.12.
2. Removing the environment variable:

$ DELETE/SYMBOL HORCC_MRCF
$ pairdisplay -g VG01 -fdc
Group   PairVol(L/R)   Device_File   ,Seq#, LDEV# .P/S, Status, Fence, % ,P-LDEV# M
VG01    oradb1(L)      DKA146        30009  146.. SMPL  ----   ------, ----- ---- -
VG01    oradb1(R)      DKA147        30009  147.. SMPL  ----   ------, ----- ---- -
VG01    oradb2(L)      DKA148        30009  148.. SMPL  ----   ------, ----- ---- -
VG01    oradb2(R)      DKA149        30009  149.. SMPL  ----   ------, ----- ---- -
VG01    oradb3(L)      DKA150        30009  150..
DEVICE_FILE   PORT    SERIAL   LDEV   CTG   C/B/12   SSID   R:Group   PRODUCT_ID
DKA145        CL1-H   30009    145    -     -        -      -         OPEN-9-CM
DKA146        CL1-H   30009    146    -     s/S/ss   0004   5:01-11   OPEN-9
DKA147        CL1-H   30009    147    -     s/P/ss   0004   5:01-11   OPEN-9
DKA148        CL1-H   30009    148    -     s/S/ss   0004   5:01-11   OPEN-9
DKA149        CL1-H   30009    149    -     s/P/ss   0004   5:01-11   OPEN-9
DKA150        CL1-H   30009    150    -     s/S/ss   0004   5:01-11   OPEN-9

6.
SMGR.LOG9.CURLOG] HORCM_*.LOG', and modify 'ip_address & service'.
HORCM inst 9 finished successfully.
$

SYS$SYSROOT:[SYSMGR]horcm9.conf (/sys$sysroot/sysmgr/horcm9.conf)

# Created by mkconf on Thu Mar 13 20:08:41
HORCM_MON
#ip_address   127.0.0.
7. Using $1$* naming as native device name You can use the native device without the DEFINE/SYSTEM command by specifying $1$* naming directly.
DEVICE_FILE   UID   S/F   PORT    TARG   LUN   SERIAL   LDEV   PRODUCT_ID
$1$DGA145     0     F     CL2-H   0      1     30009    145    OPEN-9-CM
$1$DGA146     0     F     CL2-H   0      2     30009    146    OPEN-9
$1$DGA147     0     F     CL2-H   0      3     30009    147    OPEN-9
$1$DGA148     0     F     CL2-H   0      4     30009    148    OPEN-9

$ pairdisplay -g BCVG -fdc
Group   PairVol(L/R)   Device_File   M   ,Seq#,LDEV#..P/S,  Status, %  ,P-LDEV# M
BCVG    oradb1(L)      $1$DGA146     0   30009 146..P-VOL   PAIR,   100 147
BCVG    oradb1(R)      $1$DGA147     0   30009 147..
$
$1$DGA153: (VMS4)   Online   0
$

$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
:
:
$ DEFINE/SYSTEM DKA153 $1$DGA153:

Defining the environment for RAID Manager in LOGIN.COM
#ip_address   service   poll(10ms)   timeout(10ms)
127.0.0.1     52000     1000         3000

HORCM_CMD
#dev_name   dev_name   dev_name
DKA145

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#

HORCM_INST
#dev_group   ip_address   service

You will have to start HORCM without a description for HORCM_DEV and HORCM_INST because the target ID and LUN are unknown. You will be able to determine the mapping of a physical device with a logical name by using the raidscan -find command.
DKA149   0   F   CL1-H   0   5   30009   149   OPEN-9
DKA150   0   F   CL1-H   0   6   30009   150   OPEN-9
DKA151   0   F   CL1-H   0   7   30009   151   OPEN-9

Describing the known HORCM_DEV in /etc/horcm*.conf

For horcm0.conf:

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          2     0
VG01         oradb2     CL1-H   0          4     0
VG01         oradb3     CL1-H   0          6     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

For horcm1.
bash$ horcmstart 0 &
19
bash$ starting HORCM inst 0
bash$ horcmstart 1 &
20
bash$ starting HORCM inst 1
Glossary

ACA  HP StorageWorks Asynchronous Continuous Access XP.

ACP  Array Control Processor. The ACP handles passing data between cache and the physical drives. ACPs work in pairs. In the event of an ACP failure, the redundant ACP takes control. Both ACPs work together, sharing the load.

allocation  The ratio of allocated storage capacity versus total capacity, as a percentage. "Allocated storage" refers to those LDEVs that have paths assigned to them.
CFW  Cache fast write.

CH  Channel.

CHA (channel adapter)  The channel adapter (CHA) provides the interface between the disk array and the external host system. Occasionally this term is used synonymously with the term channel host interface processor (CHIP).

CHIP (channel host interface processor)  Synonymous with the term channel adapter (CHA).

CHP (channel processor)  The processor(s) located on the channel adapter (CHA).

CHPID  Channel path identifier.

CKD  Count key data.
CU  Control Unit. Contains LDEVs and is approximately equivalent to a SCSI target ID.

CVS  Custom volume size. CVS devices (OPEN-x CVS) are custom volumes configured using array management software to be smaller than normal fixed-size OPEN system volumes. Synonymous with volume size customization (VSC).

disk group  The physical disks associated with a parity group.

disk type  The manufacturer's label in the physical disk controller firmware.
EPO  Emergency power-off.

ESCON  Enterprise Systems Connection (an IBM trademark). A set of IBM and vendor products that interconnect S/390 computers with each other and with attached storage, locally attached workstations, and other devices, using optical fiber technology and switches called ESCON Directors.

expanded LUN  A LUN is normally associated with only a single LDEV. The LUN Size Expansion (LUSE) feature allows a LUN to be associated with 2 to 36 LDEVs.
HORCM_MON  A section of the RM instance configuration file that defines the instance you are configuring.

host mode  Each port can be configured for a particular host type. These modes are represented as two-digit hexadecimal numbers. For example, host mode 08 represents an HP-UX host.

hot standby  Using one or more servers as a standby in case of a primary server failure.

HP  Hewlett-Packard Company.

instance  An independent copy of RM. Instances are local or remote and can run on the same host.
OFC  Open Fibre Control.

OPEN-x  A general term describing any one of the supported OPEN emulation modes (for example, OPEN-L).

parity group  Synonymous with the term RAID group.

partition  To divide a disk, according to the UNIX kernel or device driver layer, into two or more areas that will be treated as if they were two or more physical disks.

path  "Path" and "LUN" are synonymous. Paths are created by associating a port, a target, and a LUN ID with one or more LDEVs.

PB  Petabyte.
shell script  A command sequence executed by a UNIX shell.

SIM  Service information message.

SNMP  Simple Network Management Protocol.

SSID  Storage subsystem identification.

S-VOL  Secondary (or remote) volume. The volume that receives the data from the P-VOL (primary volume).

SVP  Service processor. The PC built into the array's disk controller. The SVP provides a direct interface into the disk array. It is used only by the HP service representative.
Index

Symbols
23, 25

A
addresses, Fibre Channel conversion in RM 377
authorized resellers 11

C
command devices, switching 26
commands, using RAID Manager 60
configuration, setting up 30
configuration file examples 279
configuration file parameters 43

D
DCL command examples 391
disk array(s), supported 9
drivescan command option 215

E
env command option 217
environment variables 77
error codes 268
error reporting 264

F
features 18
Fibre Channel addressing in RM 377
Fibre Channel addressing 359
findcmdde
installing, MPE/iX 34
installing, OpenVMS 38
installing RAID Manager, UNIX systems 31
instances 23
instances, RAID Manager 23

L
log directories 75

P
pairresync command 165
pairsplit command 173
pairsyncwait command 179
parameters, configuration file 43
porting notice, MPE/iX 368
portscan command option 223
PVOL-takeover function 352

R
RAID Manager
  command devices 25
  features 18
  general commands 106
  instances 23
  product description 17
  system requirements 28
  topologies 23
  using 59
  Windows NT/2000/2003 command options 214
RAID Manage
S
sleep command option 226
start-up procedures using detached process on DCL 386
state transitions 326
StorageWorks, supported arrays 9
Surestore, supported arrays 9
S-VOL data consistency function 343
SVOL-takeover function 350
swap-takeover function 348
switching command devices 26
sync command option 227
system administrator, required knowledge 9
system requirements
  RAID Manager 28
  Windows NT/2000/2003 command options 214

T
takeover-switch function 346
technical support, HP 10
topologies 23
troubleshoot