HP P9000 RAID Manager Installation and Configuration Guide

Abstract

This guide provides instructions for installing and configuring HP P9000 RAID Manager Software on HP P9500 disk arrays. The intended audience is a storage system administrator or authorized service provider with independent knowledge of HP P9000 disk arrays and the HP Remote Web Console.
© Copyright 2010, 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
1 Installation requirements

NOTE: The raidcom commands described in this guide are supported only on the P9000 disk arrays. All other commands are supported on both the P9000 and the XP24000/XP20000, XP12000/XP10000, SVS200, and XP1024/XP128 disk arrays.

The GUI illustrations in this guide were created using a Windows computer with the Internet Explorer browser. Actual windows may differ depending on the operating system and browser used.
• Host memory: ◦ Static memory capacity: minimum = 300 KB, maximum = 500 KB ◦ Dynamic memory capacity (set in HORCM_CONF): maximum = 500 KB per unit ID • Failover: RAID Manager supports several failover products, including FirstWatch, MC/ServiceGuard, HACMP, TruCluster, and ptx/CLUSTERS. See Table 2 (page 7) – Table 10 (page 12) for detailed information.
Table 1 Supported platforms for Business Copy (continued)
Vendor      Operating system    Failover software   Volume Manager   I/O interface
            Digital UNIX 4.0    TruCluster          LSM              SCSI
            Tru64 UNIX 5.0      TruCluster          LSM              SCSI/Fibre
            OpenVMS 7.3-1       –                   –                Fibre
            DYNIX/ptx 4.4       ptx/Cluster         LVM              SCSI/Fibre
IBM         AIX 4.3             HACMP               LVM              SCSI/Fibre
IBM         z/Linux (SUSE 8)    –                   –                Fibre (FCP)
Microsoft   Windows NT 4.
For restrictions on z/Linux, see "Requirements and restrictions for z/Linux" (page 13).
Table 2 Supported platforms for Continuous Access Synchronous (continued)
Vendor      Operating system                                       Failover software   Volume Manager   I/O interface
Microsoft   Windows 2003/2008 on IA64*                             MSCS                LDM              Fibre
Microsoft   Windows 2003/2008 on EM64T
Red Hat     Red Hat Linux 6.0, 7.0, 8.0, AS/ES 2.1, 3.0, 4.0, 5.   –                   –                SCSI/Fibre**
Red Hat     AS/ES 2.1, 3.0 Update 2, 4.0, 5.0 on EM64T / IA64*     –                   –                Fibre
SGI         IRIX64 6.5                                             –                   –                SCSI/Fibre
Supported Continuous Access Journal environments

Table 4 Supported platforms for Continuous Access Journal
Vendor   Operating system       Failover software   Volume Manager   I/O interface
Oracle   Solaris 2.8            VCS                 VxVM             Fibre
Oracle   Solaris 10 /x86        –                   VxVM             Fibre
HP       HP-UX 11.0/11.2x       MC/Service Guard    LVM, SLVM        Fibre
HP       HP-UX 11.2x on IA64*   MC/Service Guard    LVM, SLVM        Fibre
IBM      AIX 5.
Table 5 Supported platforms for Snapshot (continued)
Vendor   Operating system   Failover software   Volume Manager   I/O interface
SGI      IRIX64 6.5         –                   –                Fibre
* IA64: using IA-32EL on IA64 (except RAID Manager for Linux/IA64)
** See "Troubleshooting" (page 47) about RHEL 4.0 using Kernel 2.6.9.XX.

Supported Data Retention environments

Table 6 Supported platforms for Data Retention
Vendor   Operating system   Volume Manager   I/O interface
Oracle   Solaris 2.
Table 7 Supported platforms for Database Validator (continued)
Vendor      Operating system   Volume Manager   I/O interface
            Digital UNIX 4.0   LSM              SCSI
            Tru64 UNIX 5.0     LSM              SCSI/Fibre
            OpenVMS 7.3-1      –                Fibre
            DYNIX/ptx 4.4      LVM              SCSI/Fibre
IBM         AIX 4.3            LVM              SCSI/Fibre
IBM         z/Linux (SUSE 8)   –                Fibre (FCP)
Microsoft   Windows NT4.
Table 8 Supported guest OS for VM (continued)
VM Vendor   Layer   Parent   Guest OS       RAID Manager support confirmation   Volume mapping   I/O interface
                             SLES10 SP2     Confirmed                            Path-thru        Fibre
                             Windows 2008   Confirmed                            Direct           Fibre
* RDM: Raw Device Mapping using Physical Compatibility Mode.
1: See "Restrictions for VMware ESX Server" (page 14).
2: See "Restrictions on AIX VIO" (page 15).
3: See "Restrictions on Windows 2008 Hyper-V" (page 16).
Table 10 Supported platforms: IPv4 vs. IPv6 (continued)
(Columns: RAID Manager / IPv6 *1, IPv6, IPv4)
AV: Available for communicating with different platforms.
N/A: Not Applicable (Windows LH does not support IPv4-mapped IPv6).
Minimum platform versions for RAID Manager/IPv6 support:
• HP-UX: HP-UX 11.23 (PA/IA) or later
• Solaris: Solaris 8/Sparc or later, Solaris 10/x86/64 or later
• AIX: AIX 5.1 or later
• Windows: Windows 2008 (LH), Windows 2003 + IPv6 Install
• Linux: Linux Kernel 2.4 (RH8.
Business Copy supports only 3390-9A multiplatform volumes. Continuous Access Synchronous does not support multiplatform volumes (including 3390-9A) via FICON. • Volume discovery via FICON. The inqraid command discovers the FCP volume information by using SCSI inquiry. FICON volumes can be discovered only by using RAID Manager to convert the mainframe interface (Read_device_characteristics or Read_configuration_data) to SCSI Inquiry.
1. Guest OS. RAID Manager must run on a guest OS that is supported by both RAID Manager and VMware (for example, Windows Server 2003, Red Hat Linux, SUSE Linux). See "Supported guest OS for VM" (page 11).
2. Command device. RAID Manager uses the SCSI path-through driver to access the command device. Therefore, the command device must be mapped as Raw Device Mapping using Physical Compatibility Mode. At least one command device must be assigned for each guest OS.
1. Command device. RAID Manager uses the SCSI path-through driver to access the command device. Therefore, the command device must be mapped as a RAW device in Physical Mapping Mode. At least one command device must be assigned to each VIO Client. The RAID Manager instance numbers among different VIO Clients must be different, even if a command device is assigned to each VIO Client, because the command device cannot distinguish between VIO Clients (they use the same WWN via vscsi).
Figure 4 RAID Manager configuration on Hyper-V

The restrictions for using RAID Manager on Hyper-V are as follows:
1. Guest OS. RAID Manager must run on a guest OS that is supported by both RAID Manager and Hyper-V (for example, Windows Server 2003/2008, SUSE Linux). See Table 8 (page 11) for details.
2. Command device. RAID Manager uses the SCSI path-through driver to access the command device. Therefore, the command device must be mapped as a RAW device of the path-through disk.
About platforms supporting IPv6

Library and system call for IPv6
RAID Manager uses the following IPv6 library functions to obtain and convert a hostname to an IPv6 address.
export IPV6_GET_ADDR=9
horcmstart.sh 10

HORCM startup log
The support level of the IPv6 feature depends on the platform and OS version. In certain OS platform environments, RAID Manager cannot perform IPv6 communication completely, so RAID Manager logs whether or not the OS environment supports the IPv6 feature.
/HORCM/log/curlog/horcm_HOSTNAME.
that OpenVMS cannot create a daemon process from a POSIX program. Therefore, horcmstart.exe has been changed to wait, after starting horcmgr, until HORCM is exited by horcmshutdown.exe. According to the rules for creating processes in OpenVMS, horcmstart.exe should be started as a detached process or batch job by using a DCL command, because this method closely resembles the horcmd process on UNIX.
$ show device
Device Name         Device Status   Error Count   Volume Label   Free Blocks   Trans Count   Mnt Cnt
VMS4$DKB0:          Online          0
VMS4$DKB100:        Mounted         0             ALPHASYS       30782220      414           1
VMS4$DKB200:        Online          0
VMS4$DKB300:        Online          0
VMS4$DQA0:          Online          0
$1$DGA145: (VMS4)   Online          0
$1$DGA146: (VMS4)   Online          0
$1$DGA153: (VMS4)   Online          0

$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146:
   :
$ DEFINE/SYSTEM DKA153 $1$DGA153:

(6) -zx option for RAID Manager commands
The -zx option for RAID Manager commands uses the sele
starting HORCM inst 0
$ spawn /NOWAIT /PROCESS=horcm1 horcmstart 1
%DCL-S-SPAWNED, process HORCM1 spawned
$
starting HORCM inst 1
$

The subprocess (HORCM) created by SPAWN is terminated when the terminal logs off or the session is terminated. If you want the process to be independent of terminal LOGOFF, use the "RUN /DETACHED" command.
$ PRODUCT REMOVE RM /LOG

(13) About the exit code of commands on DCL
RAID Manager return codes are the same for all platforms. However, if the process was invoked by DCL, the status is interpreted by DCL and a message appears as shown below.
---------------------------on DCL of OpenVMS------------------------
$ pairdisplay jjj
PAIRDISPLAY: requires '-jjj' or '/jjj' as argument
PAIRDISPLAY: [EX_REQARG] Required Arg list
Refer to the command log(SYS$POSIX_ROOT:[HORCM.LOG]HORCC_RMOVMS.
Startup procedures using detached process on DCL
(1) Create the shareable logical name for RAID if it is undefined initially. RAID Manager needs the physical device ($1$DGA145…) to be defined as DG*, DK*, or GK* by using the SHOW DEVICE and DEFINE/SYSTEM commands, but the device does not need to be mounted in RAID Manager version 01-12-03/03 or earlier.
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          2     0
VG01         oradb2     CL1-H   0          4     0
VG01         oradb3     CL1-H   0          6     0
HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

For horcm1.conf
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          3     0
VG01         oradb2     CL1-H   0          5     0
VG01         oradb3     CL1-H   0          7     0
HORCM_INST
#dev_group   ip_address   service
VG01         HOSTA        horcm0

Defines the UDP port name for HORCM communication in the SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT file, as in the example below.
(2) Removing the environment variable.
$ DELETE/SYMBOL HORCC_MRCF
$ pairdisplay -g VG01 -fdc
Group  PairVol(L/R)  Device_File   ,Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV#
VG01   oradb1(L)     DKA146        30009 146..SMPL ---- ------,----- ----
VG01   oradb1(R)     DKA147        30009 147..SMPL ---- ------,----- ----
VG01   oradb2(L)     DKA148        30009 148..SMPL ---- ------,----- ----
VG01   oradb2(R)     DKA149        30009 149..SMPL ---- ------,----- ----
VG01   oradb3(L)     DKA150        30009 150..SMPL ---- ------,----- ----
VG01   oradb3(R)     DKA151        30009 151..
$
# ERROR [CMDDEV] DKA145
HORCM_DEV
#dev_group   dev_name
# DKA146 SER =
URA          URA_000
# DKA147 SER =
URA          URA_001
# DKA148 SER =
URA          URA_002
# DKA149 SER =
URA          URA_003
# DKA150 SER =
URA          URA_004
HORCM_INST
#dev_group   ip_address
URA          127.0.0.
(1) Create the shareable Logical name for RAID if undefined initially. You need to define the Physical device ($1$DGA145…) as either DG* or DK* or GK* by using the SHOW DEVICE command and the DEFINE/SYSTEM command, but then it does not need to be mounted.
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          2     0
VG01         oradb2     CL1-H   0          4     0
VG01         oradb3     CL1-H   0          6     0
HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

FOR horcm1.conf
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          3     0
VG01         oradb2     CL1-H   0          5     0
VG01         oradb3     CL1-H   0          7     0
HORCM_INST
#dev_group   ip_address   service
VG01         HOSTA        horcm0

(7) Start 'horcmstart 0 1'. The subprocess (HORCM) created by bash is terminated when bash exits.
Using CCI with Hitachi and other storage systems
Table 11 (page 30) shows how CCI relates to each RAID storage system type (Hitachi or HP XP). Figure 6 (page 31) shows the relationship among the application, CCI, and RAID storage system.
Figure 6 Relationship among application, CCI, and storage system
2 Installing and configuring RAID Manager
This chapter describes installing and configuring RAID Manager.

Installing the RAID Manager hardware
Installation of the hardware required for RAID Manager is performed by the user and the HP representative. To install the hardware required for RAID Manager operations:
1. User:
   1. Make sure that the UNIX/PC server hardware and software are properly installed and configured. See "Supported environments" (page 6).
be different on your platform. Consult your operating system documentation (for example, UNIX man pages) for platform-specific command information.

To install the RAID Manager software in the root directory:
1. Insert the installation medium into the proper I/O device.
2. Move to the current root directory: # cd /
3. Copy all files from the installation medium using the cpio command (a sample invocation follows this list).
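The cpio command line for step 3 is not reproduced above. A minimal sketch of a typical invocation (the device file name is a placeholder for the I/O device on your platform, and the required options may differ):

# cd /
# cpio -idmu < /dev/XXXX     # XXXX = device holding the installation medium

Here -i extracts from the medium, -d creates directories as needed, -m preserves modification times, and -u copies files unconditionally.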
1. Change the owner of the following RAID Manager files from the root user to the desired user name:
   • /HORCM/etc/horcmgr
   • All RAID Manager commands in the /HORCM/usr/bin directory
   • All RAID Manager log directories in the /HORCM/log* directories
2. Change the owner of the raw device file of the HORCM_CMD command device in the configuration definition file from the root user to the desired user name.
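A sketch of the ownership changes listed above (newuser and the raw device file name are placeholders):

# chown newuser /HORCM/etc/horcmgr        # HORCM manager
# chown newuser /HORCM/usr/bin/*          # all RAID Manager commands
# chown -R newuser /HORCM/log*            # all RAID Manager log directories
# chown newuser /dev/rdsk/c0t0d1          # raw device file of the HORCM_CMD command device (placeholder)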
Windows installation
Make sure to install RAID Manager on all servers involved in RAID Manager operations. If TCP/IP networking is not established, install the Windows networking components and add the TCP/IP protocol.

To install the RAID Manager software on a Windows system:
1. If a previous version of RAID Manager is installed, remove it according to the instructions in "Removing RAID Manager in a Windows environment" (page 45).
2. Insert the installation medium (for example, CD-ROM) into the proper I/O device.
Because the ACL (Access Control List) of the Device Objects is set every time Windows starts up, the Device Objects are also required at Windows startup. The ACL is also required when new Device Objects are created.

RAID Manager administrator tasks
1. Establish the HORCM (/etc/horcmgr) startup environment.
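As a sketch of the HORCM startup environment referred to above (the instance number and group name are illustrative; horcmstart.sh and the HORCMINST variable are used elsewhere in this guide):

# export HORCMINST=0        # instance number for subsequent commands (on Windows: set HORCMINST=0)
# horcmstart.sh 0           # start HORCM instance 0, which reads /etc/horcm0.conf
# pairdisplay -g oradb      # example command issued against that instance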
2. Execute the following command:
$ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG -
_$ /destination=SYS$POSIX_ROOT:[000000]
where Device:[PROGRAM.RM.OVMS] is the location of HITACH-ARMVMS-RM-V0122-2-1.PCSI.
3. Verify installation of the proper version using the raidqry command:
$ raidqry -h
Model: RAID-Manager/OpenVMS
Ver&Rev: 01-22-03/06
Usage: raidqry [options]
4. Follow the requirements and restrictions in "Porting notice for OpenVMS" (page 19).
Figure 7 System configuration example and setting example of command device and virtual command device by in-band and out-of-band methods

Setting the command device
RAID Manager commands are issued to the RAID storage system via the command device. The command device is a user-selected, dedicated logical volume on the storage system that functions as the interface to the RAID Manager software on the UNIX/PC host.
3. Configure the device as needed before setting it as a command device. For example, use Virtual LUN or Virtual LVI to create a device that has 36 MB of storage capacity. For instructions, see the HP P9000 Provisioning for Open Systems User Guide.
4. Launch LUN Manager, locate and select the device, and set the device as a command device. For more information, see the HP P9000 Provisioning for Open Systems User Guide.
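After the command device has been set, it is identified to RAID Manager in the HORCM_CMD section of the configuration definition file. A minimal in-band sketch (the HP-UX raw device file name is an assumption; compare the out-of-band example that follows):

HORCM_CMD
#dev_name
/dev/rdsk/c0t0d1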
Example 3 Setting example of virtual command device in configuration definition file (out-of-band method)
HORCM_CMD
#dev_name                        dev_name   dev_name
\\.\IPCMD-192.168.1.100-31001

About alternate command devices
If RAID Manager receives an error notification in reply to a read or write request to a command device, the RAID Manager software can switch to an alternate command device, if one is defined.
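As a sketch of how an alternate command device might be defined (device file names are assumptions; in the HORCM_CMD format used in this guide, an additional dev_name entry on the same line serves as the alternate for the same unit ID):

HORCM_CMD
#dev_name            dev_name
/dev/rdsk/c0t0d1     /dev/rdsk/c1t0d1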
Creating/editing the configuration definition file
The configuration definition file is a text file that is created and edited using any standard text editor (for example, UNIX vi editor, Windows Notepad). The configuration definition file defines correspondences between the server and the volumes used by the server. There is a configuration definition file for each host server. When the RAID Manager software starts up, it refers to the definitions in the configuration definition file.
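A minimal sketch of the overall file layout (section names are those described in this guide; the host names, device file, and group/volume names are placeholders in the style of Example 9 later in this guide):

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t0d0

HORCM_DEV
#dev_group    dev_name   port#   TargetID   LU#   MU#
oradb         oradb1     CL1-A   3          1

HORCM_INST
#dev_group    ip_address   service
oradb         HST2         horcm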
Table 12 Configuration (HORCM_CONF) parameters (continued)
Parameter                 Default   Type               Limit
MU#                       0         Numeric value      7 characters (see *1)
Serial#                   None      Numeric value      12 characters
CU:LDEV(LDEV#)            None      Numeric value      6 characters
dev_name for HORCM_CMD    None      Character string   63 characters (recommended value = 8 characters or less)
1: Use decimal notation for numeric values (not hexadecimal).

Do not edit the configuration definition file while RAID Manager is running.
3 Upgrading RAID Manager
To upgrade the RAID Manager software, use the RMuninst script on the CD-ROM. For other media, use the following instructions to upgrade the RAID Manager software. The instructions may be different for your platform. Consult your operating system documentation (for example, UNIX man pages) for platform-specific command information.

Upgrading RAID Manager in a UNIX environment
Use the RMinstsh script on the CD-ROM to upgrade the RAID Manager software to a later version.
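A sketch of a typical UNIX upgrade invocation (the CD-ROM device file, mount point, and script location are assumptions for your system):

# mount /dev/dsk/c0t2d0 /cdrom     # mount the installation CD-ROM (device file is an assumption)
# cd /cdrom                        # directory containing RMinstsh (the path can differ by platform)
# ./RMinstsh                       # installs the newer RAID Manager version over the existing one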
6. When the Run window opens, enter A:\Setup.exe (where A: is the diskette or CD drive) in the Open pull-down list box.
7. An InstallShield window opens.
8. Follow the on-screen instructions to install the RAID Manager software.
9. Reboot the Windows server, and verify that the correct version of the RAID Manager software is running on your system by executing the raidqry -h command.
4 Removing RAID Manager
This chapter explains how to remove RAID Manager.

Removing RAID Manager in a UNIX environment
To remove the RAID Manager software:
1. If you are discontinuing local and/or remote copy functions (for example, Business Copy, Continuous Access Synchronous), delete all volume pairs and wait until the volumes are in simplex status. If you will continue copy operations using Remote Web Console, do not delete any volume pairs.
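RAID Manager must not be running while the software is removed. A sketch of shutting down HORCM on UNIX with the horcmshutdown.sh script (instance numbers are illustrative; the Windows equivalents appear below):

# horcmshutdown.sh          # one RAID Manager instance
# horcmshutdown.sh 0 1      # two RAID Manager instances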
2. You can remove the RAID Manager software only when RAID Manager is not running. If RAID Manager software is running, shut down RAID Manager using the horcmshutdown command to ensure a normal end to all functions:
   One RAID Manager instance:  D:\HORCM\etc> horcmshutdown
   Two RAID Manager instances: D:\HORCM\etc> horcmshutdown 0 1
3. Delete the RAID Manager software using the Add or Remove Programs control panel:
   1. Open the Control Panel, and double-click Add or Remove Programs.
5 Troubleshooting
This chapter provides troubleshooting information.

Troubleshooting
If you have a problem installing or upgrading the RAID Manager software, ensure that all system requirements and restrictions have been met.
6 Support and other resources

Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.
HP websites For additional information, see the following HP websites: • http://www.hp.com • http://www.hp.com/go/storage • http://www.hp.com/service_locator • http://www.hp.com/support/manuals • http://www.hp.com/support/downloads • http://www.hp.
Table 13 Document conventions (continued)
Convention               Element
Monospace text           • File and directory names
                         • System output
                         • Code
                         • Commands, their arguments, and argument values
Monospace, italic text   • Code variables
                         • Command variables
Monospace, bold text     Emphasized monospace text
WARNING!                 Indicates that failure to follow directions could result in bodily harm or death.
CAUTION:                 Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT:
NOTE:
TIP:
A Fibre-to-SCSI address conversion
Disks connected with Fibre Channel display as SCSI disks on UNIX hosts and can be fully utilized. RAID Manager converts Fibre Channel physical addresses to SCSI target IDs (TIDs) using a conversion table (see Figure 9 (page 51)). Table 14 (page 51) shows the current limits for SCSI TIDs and LUNs on various operating systems.
Example 6 Using Raidscan to display TID and LUN for Fibre Channel devices C:\>raidscan -pd hd6 -x drivescan hd6 Harddisk 6... Port[ 2] PhId[ 4] TId[ 3] Lun[ 5] [HITACHI ] [OPEN-3 Port[CL1-J] Ser#[ 30053] LDEV#[ 14(0x00E)] HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL] RAID5[Group 1- 2] SSID = 0x0004 PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV# CL1-J / e2/4, 29, 0.1(9).............SMPL ---- ------ ----, ----- ---CL1-J / e2/4, 29, 1.1(10)............
Figure 10 LUN configuration

RAID Manager uses absolute LUNs to scan a port, whereas the LUNs on a group are mapped for the host system, so the target ID and LUN indicated by the raidscan command differ from the target ID and LUN shown by the host system. In this case, use the target ID and LUN indicated by the raidscan command, and start HORCM without a description for HORCM_DEV and HORCM_INST because the target ID and LUN are unknown.
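As a sketch of how those values might be obtained (the port name is a placeholder; raidscan usage is shown in Example 6 earlier in this appendix):

# horcmstart.sh 0       # start HORCM using a configuration file that omits HORCM_DEV and HORCM_INST
# raidscan -p CL1-A     # report the target ID and LUN of each volume on port CL1-A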
The conversion table for Windows systems is based on the Emulex driver. If a different Fibre Channel adapter is used, the target ID indicated by the raidscan command may differ from the target ID indicated by the Windows system.
Note on Table 3 for other platforms: Table 3 is used to indicate the LUN without a target ID for an unknown FC_AL conversion table or Fibre Channel fabric (Fibre Channel WWN). In this case, the target ID is always zero, so Table 3 is not described in this document.
Table 16 Fibre address conversion table for Solaris and IRIX systems (Table1) (continued) C0 C1 C2 C3 C4 C5 C6 C7 AL-PA TID AL-PA TID AL-PA TID AL-PA TID AL-PA TID AL-PA TID AL-PA TID AL-PA TID D9 8 C3 24 A7 40 80 56 67 72 4B 88 2E 104 10 120 D6 9 BC 25 A6 41 7C 57 66 73 4A 89 2D 105 0F 121 D5 10 BA 26 A5 42 7A 58 65 74 49 90 2C 106 08 122 D4 11 B9 27 A3 43 79 59 63 75 47 91 2B 107 04 123 D3 12 B6 28 9F 44 76 60 5C 7
B Sample configuration definition files
This chapter describes sample configuration definition files.

Sample configuration definition files
Figure 11 (page 56) illustrates the configuration definition of paired volumes. Example 9 "Configuration file example – UNIX-based servers" shows a sample configuration file for a UNIX-based operating system. Figure 12 (page 57) shows a sample configuration file for a Windows operating system.
Example 9 Configuration file example – UNIX-based servers
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HST1          horcm     1000         3000
HORCM_CMD
#unitID 0... (seq#30014)
#dev_name          dev_name   dev_name
/dev/rdsk/c0t0d0
#unitID 1...
• Poll: The interval for monitoring paired volumes. To reduce the HORCM daemon load, make this interval longer. If set to -1, the paired volumes are not monitored. The value of -1 is specified when two or more RAID Manager instances run on a single machine.
• Timeout: The time-out period of communication with the remote server.

(2) HORCM_CMD
The command parameter (HORCM_CMD) defines the UNIX device path or Windows physical device number of the command device.
If Windows has two different array models that share the same serial number, fully define the serial number, LDEV#, port, and host group for the CMDDEV.
• When using a multipath driver: specifies that any port can be used as the command device for Serial#30095, LDEV#250:
\\.\CMD-30095-250
• For full specification: specifies the command device for Serial#30095, LDEV#250 connected to Port CL1-A, Host group#1:
\\.\CMD-30095-250-CL1-A-1
• Other examples:
\\.\CMD-30095-250-CL1-A
\\.
\\.\CMD-30095-250:/dev/rdsk/
Example for full specification. Specifies the command device for Serial#30095, LDEV#250 connected to Port CL1-A, Host group#1:
\\.\CMD-30095-250-CL1-A-1:/dev/rdsk/
Other examples:
\\.\CMD-30095-250-CL1:/dev/rdsk/
\\.\CMD-30095:/dev/rdsk/c1
\\.\CMD-30095-250-CL2
\\.\CMD-30095:/dev/rdsk/c2
\\.\IPCMD-158.214.135.113-31001

(3) HORCM_DEV
The device parameter (HORCM_DEV) defines the RAID storage system device addresses for the paired logical volume names.
The following ports can only be specified for XP12000 Disk Array/XP10000 Disk Array and XP24000/XP20000 Disk Array: - Basic CL5 an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn CL6 an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn CL7 an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn CL8 an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn CL9 an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn CLA an bn cn dn en fn gn hn jn
• MU# for HORC/Continuous Access Journal: Defines the mirror unit number (0 - 3) of one of four possible HORC/Cnt Ac-J bitmap associations for an LDEV. If this number is omitted, it is assumed to be zero (0). The Continuous Access Journal mirror descriptor is specified in the MU# column by adding "h" in order to identify identical LUs as the mirror descriptor for Cnt Ac-J. The MU# for HORC must be specified as blank or "0".
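A sketch of how the MU# column might look for a Continuous Access Journal mirror descriptor (group, device, port, target ID, and LU# values are placeholders):

HORCM_DEV
#dev_group    dev_name   port#   TargetID   LU#   MU#
oradb         dev1       CL1-A   3          1     h1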
# horcctl -ND -g IP46G
Current network address = 158.214.135.106,services = 50060
# horcctl -NC -g IP46G
Changed network address(158.214.135.106,50060 -> fe80::39e7:7667:9897:2142,50060)

For IPv6 only, the configuration must be defined as HORCM/IPv6.

Figure 15 Network configuration for IPv6

It is possible to communicate between HORCM/IPv4 and HORCM/IPv6 using IPv4 mapped to IPv6.
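A sketch of how an IPv6 address might appear in the HORCM_MON parameter (the address is taken from the horcctl output above; the service and timing values are placeholders):

HORCM_MON
#ip_address                  service   poll(10ms)   timeout(10ms)
fe80::39e7:7667:9897:2142    horcm0    1000         3000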
In the case of mixed IPv4 and IPv6, it is possible to communicate between HORCM/IPv4 and HORCM/IPv6 using IPv4 mapped to IPv6 and native IPv6.

Figure 17 Network configuration for mixed IPv4 and IPv6

(5) HORCM_LDEV
The HORCM_LDEV parameter is used for specifying a stable LDEV# and Serial# as the physical volumes corresponding to the paired logical volume names. Each group name is unique and typically has a name fitting its use (for example, database data, Redo log file, UNIX file). The group and paired logical volume names described in this item must reside in the remote server. The format:
HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
oradb        dev1       30095     02:40            0
oradb        dev2       30095     02:41            0
• Specifying "CU:LDEV" in hex as used by SVP or Remote Web Console. Example for LDEV# 260: 01:04
• Specifying "LDEV" in decimal as used by the RAID Manager inqraid command. Example for LDEV# 260: 260
• Specifying "LDEV" in hex as used by the RAID Manager inqraid command. Example for LDEV# 260: 0x104
The HORCM_LDEV format can be used for XP1024/XP128 Disk Array and later.
The command device is defined using the system raw device name (character-type device file name). For example, the command devices for the following figure would be: • HP-UX: HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1 HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1 • Solaris: HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2 HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2 For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command.
Example of RAID Manager commands with HOSTA: • Designate a group name (Oradb) and a local host P-VOL a case. # paircreate -g Oradb -f never -vl This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in the above figure). • Designate a volume name (oradev1) and a local host P-VOL a case.
where XX = device number assigned by Tru64 UNIX • DYNIX/ptx: HORCM_CMD of HOSTA = /dev/rdsk/sdXX HORCM_CMD of HOSTB = /dev/rdsk/sdXX where XX = device number assigned by DYNIX/ptx • Windows 2008/2003/2000: HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port# HORCM_CMD of HOSTB = \\.\CMD-Ser#-ldev#-Port# • Windows NT: HORCM_CMD of HOSTA = \\.\CMD-Ser#-ldev#-Port# HORCM_CMD of HOSTB = \\.
Example of RAID Manager commands with HOSTA: • Designate a group name (Oradb) and a local host P- VOL a case. # paircreate -g Oradb -f never -vl This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in above figure). • Designate a volume name (oradev1) and a local host P-VOL a case.
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command.
Figure 20 Continuous Access Synchronous configuration example for two instances Example of RAID Manager commands with Instance-0 on HOSTA: • When the command execution environment is not set, set an instance number.
For Windows: set HORCMINST=0 • Designate a group name (Oradb) and a local instance P- VOL a case. # paircreate -g Oradb -f never -vl This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in above figure). • Designate a volume name (oradev1) and a local instance P-VOL a case.
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command.
Figure 21 Business Copy configuration example (continues in next figure)
Figure 22 Business Copy configuration example (continued) Example of RAID Manager commands with HOSTA (group Oradb): • When the command execution environment is not set, set HORCC_MRCF to the environment variable.
Windows: set HORCC_MRCF=1 • Designate a group name (Oradb) and a local host P-VOL a case. # paircreate -g Oradb -vl This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in above figure). • Designate a volume name (oradev1) and a local host P-VOL a case.
For Windows: set HORCC_MRCF=1 • Designate a group name (Oradb1) and a local host P-VOL a case. # paircreate -g Oradb1 -vl This command creates pairs for all LUs assigned to group Oradb1 in the configuration definition file (two pairs for the configuration in the above figure). • Designate a volume name (oradev1-1) and a local host P-VOL a case.
For Windows: set HORCC_MRCF=1 • Designate a group name (Oradb2) and a local host P-VOL a case. # paircreate -g Oradb2 -vl This command creates pairs for all LUs assigned to group Oradb2 in the configuration definition file (two pairs for the configuration in above figure). • Designate a volume name (oradev2-1) and a local host P-VOL a case.
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command.
Figure 23 Business Copy configuration example with cascade pairs See “Configuration definition for cascading volume pairs” (page 85) for more information on Business Copy cascading configurations. Example of RAID Manager commands with Instance-0 on HOSTA: • When the command execution environment is not set, set an instance number.
For Windows:
set HORCMINST=0
set HORCC_MRCF=1
• Designate a group name (Oradb) and a local instance P-VOL a case.
# paircreate -g Oradb -vl
# paircreate -g Oradb1 -vr
# paircreate -g oradb -pvol
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the configuration definition file.
• Designate a group name and display pair status.
For Solaris operations with RAID Manager version 01-09-03/04 and higher, the command device does not need to be labeled during format command. • AIX: HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rhdiskXX HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rhdiskXX HORCM_CMD of HOSTB(/etc/horcm0.conf)... /dev/rhdiskXX where XX = device number assigned by AIX • Tru64 UNIX: HORCM_CMD of HOSTA(/etc/horcm.conf) ... /dev/rrzbXXc HORCM_CMD of HOSTB(/etc/horcm.conf) ... /dev/rrzbXXc HORCM_CMD of HOSTB(/etc/horcm0.
Figure 24 Continuous Access Synchronous/Business Copy configuration example with cascade pairs Example of RAID Manager commands with HOSTA and HOSTB: • Designate a group name (Oradb) on Continuous Access Synchronous environment of HOSTA. # paircreate -g Oradb -vl • Designate a group name (Oradb1) on Business Copy environment of HOSTB. When the command execution environment is not set, set HORCC_MRCF.
These commands create pairs for all LUs assigned to groups Oradb and Oradb1 in the configuration definition file (four pairs for the configuration in the above figures). • Designate a group name and display pair status on HOSTA. # pairdisplay -g oradb -m cas Group PairVol(L/R) (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# oradb oradev1(L) (CL1-A , 1, 1-0)30052 266..SMPL ----,-------oradb oradev1(L) (CL1-A , 1, 1) 30052 266..P-VOL COPY,30053 268 oradb1 oradev11(R) (CL1-D , 2, 1-0)30053 268..
oradb2 oradev22(R) (CL1-D , 2, 2-1)30053 269..SMPL ----,----- ----
oradb  oradev2(R)  (CL1-D , 2, 2) 30053  269..S-VOL PAIR,----- 267

Configuration definition for cascading volume pairs
The RAID Manager software (HORCM) is capable of keeping track of up to seven pair associations per LDEV (1 for Cnt Ac-S/Cnt Ac-J, 3 for Cnt Ac-J, 3 for BC/Snapshot, 1 for Snapshot).
Table 18 Mirror descriptors and group assignments (continued)
HORCM_DEV parameter in configuration file:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1
Oradb1       oradev11   CL1-D   2          1     0
Oradb2       oradev21   CL1-D   2          1     1
Oradb3       oradev31   CL1-D   2          1     2

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
Oradb        oradev1    CL1-D   2          1     0
Oradb1       oradev1    CL1-D   2          1     1
Oradb2       oradev21                      1     2
sections present examples of Business Copy and Business Copy/Continuous Access Synchronous cascading configurations.

Business Copy
The following figure shows an example of a Business Copy cascade configuration and the associated entries in the configuration definition files. Business Copy is a mirror configuration within one storage system, so the volumes are described in the configuration definition file for each HORCM instance: volumes T3L0, T3L4, and T3L6 in HORCMINST0, and volume T3L2 in HORCMINST1.
Figure 28 Pairdisplay -g on HORCMINST1
Figure 29 Pairdisplay -d on HORCMINST0

Cascading connections for Continuous Access Synchronous and Business Copy
The cascading connections for Continuous Access Synchronous/Business Copy can be set up by using three configuration definition files that describe the cascading volume entity in a configuration definition file on the same instance.
Figure 30 Continuous Access Synchronous/Business Copy cascading connection and configuration file

The following figures show the cascading configurations and the pairdisplay information for each configuration.
Figure 32 Pairdisplay for Continuous Access Synchronous on HOST2 (HORCMINST)
Figure 33 Pairdisplay for Business Copy on HOST2 (HORCMINST)
Figure 34 Pairdisplay for Business Copy on HOST2 (HORCMINST0)
Glossary AL-PA Arbitrated loop physical address. A 1-byte value that the arbitrated loop topology uses to identify the loop ports. This value becomes the last byte of the address identifier for each public port on the loop. BC P9000 or XP Business Copy. An HP application that provides volume-level, point-in-time copies in the disk array. CB Circuit breaker. CLI Command-line interface. An interface comprised of various commands which are used to control operating system responses.
LUSE Logical Unit Size Expansion. The LUSE feature is available when the HP StorageWorks LUN Manager product is installed, and allows a LUN, normally associated with only a single LDEV, to be associated with 1 to 36 LDEVs. Essentially, LUSE makes it possible for applications to access a single large pool of storage. MCU Main control unit. MSCS Microsoft Cluster Service. MU Mirror unit.