HP XP P9500 Disk Array Configuration Guide

Abstract
This guide provides requirements and procedures for connecting an HP XP P9000 disk array to a host system, and for configuring the disk array for use with a specific operating system. This document is intended for system administrators, HP representatives, and authorized service providers who are involved in installing, configuring, and operating HP XP P9000 disk arrays.
© Copyright 2010, 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Overview .......................................................................... 10
    What's in this guide ............................................................ 10
    Audience ........................................................................ 10
    Features and requirements .......................................................
    Creating and formatting disk partitions ........................................ 34
    Verifying file system operations ............................................... 34
4 Novell NetWare .................................................................... 35
    Installation roadmap ...........................................................
    Installing and configuring the FCAs ............................................ 55
    Clustering and fabric zoning ................................................... 55
    Fabric zoning and LUN security for multiple operating systems ................. 56
    Configuring FC switches ........................................................
    Defining the paths ............................................................. 74
    Setting the host mode and host group mode for the disk array ports ............ 75
    Setting the system option modes ................................................ 76
    Configuring the Fibre Channel ports ...........................................
    Connecting the disk array ..................................................... 100
    Restarting the Linux server ................................................... 100
    Verifying new device recognition .............................................. 100
    Configuring disk array devices ................................................
    Physical partition size table ................................................. 149
D Using Veritas Cluster Server to prevent data corruption ......................... 151
    Using VCS I/O fencing ......................................................... 151
E Reference information for the HP System Administration Manager (SAM) ........... 154
    Configuring the devices using SAM .............................................
1 Overview

What's in this guide
This guide includes information on installing and configuring P9000 disk arrays. The following operating systems are covered:
• HP-UX
• Windows
• Novell NetWare
• NonStop
• OpenVMS
• VMware
• Linux
• Solaris
• IBM AIX
For additional information on connecting disk arrays to a host system and configuring for a mainframe, see the HP StorageWorks P9000 Mainframe Host Attachment and Operations Guide.
For all operating systems, before installing the disk array, ensure the environment conforms to the following requirements:
• Fibre Channel Adapters (FCAs): Install FCAs, all utilities, and drivers. For installation details, see the adapter documentation.
• HP StorageWorks P9000 Remote Web Console or HP StorageWorks P9000 Command View Advanced Edition Suite Software for configuring disk array ports and paths.
Device emulation types The P9000 family of disk arrays supports these device emulation types: • OPEN-x devices: OPEN-x logical units represent disk devices. Except for OPEN-V, these devices are based on fixed sizes. OPEN-V is a user-defined size based on a CVS device. Supported emulations include OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, and OPEN-V devices.
Your HP representative might need to set specific disk array system modes for these products. Check with your HP representative for the current versions supported. • For I/O path failover, different products are available from Oracle, Veritas, and HP. Oracle supplies software called STMS for Solaris 8/9 and Storage Multipathing for Solaris 10. Veritas offers VxVM, which includes DMP. HP supplies HDLM.
2 HP-UX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 14) 2. 3. 4.
Defining the paths
Use Command View Advanced Edition or Remote Web Console to define paths between hosts and volumes (LUNs) in the disk array. This process is also called “LUN mapping.”
Table 2 System option modes (NonStop)

System Option Mode    Minimum microcode version (P9500)
142                   Available from initial release
454                   Available from initial release
685¹                  N/A
724                   N/A

HP also recommends setting host group mode 13 with P9000 storage systems that are connected to HP NonStop systems.

Configuring the Fibre Channel ports
Configure the disk array Fibre Channel ports by using Command View Advanced Edition or Remote Web Console. Select the settings for each port based on your SAN topology.
Figure 2 Multi-cluster environment (HP-UX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.
Use the ioscan -f command, and verify that the rows shown in the example are displayed. If these rows are not displayed, check the host adapter installation (hardware and driver installation) or the host configuration.

Example
# ioscan -f
Class    I  H/W Path        Driver     S/W State  H/W Type  Description
...
fc       0  8/12            fcT1       CLAIMED    ...       HP Fibre ...
lan      1  8/12.5          fcT1_cntl  CLAIMED    ...       HP Fibre ...
fcp      0  8/12.8          fcp        CLAIMED    ...       FCP Proto...
ext_bus  2  8/12.8.0.255.   fcpdev     CLAIMED    ...       FCP Devic...
• z = LUN
• c stands for controller
• t stands for target ID
• d stands for device
The numbers x, y, and z are hexadecimal.

Table 4 Device file name example (HP-UX)

SCSI bus instance number   Hardware path   SCSI TID   LUN   File name
00                         14/12.6.0       6          0     c6t0d0
00                         14/12.6.1       6          1     c6t0d1

5. Verify that the SCSI TIDs correspond to the assigned port address for all connected ports (see the mapping tables in “SCSI TID map for Fibre Channel adapters (HP-UX)” (page 115) for values).
Verifying the device files and drivers The device files for new devices are usually created automatically during HP-UX startup. Each device must have a block-type device file in the /dev/dsk directory and a character-type device file in the /dev/rdsk directory. However, some HP-compatible systems do not create the device files automatically. If verification shows that the device files were not created, follow the instructions in “Creating the device files” (page 20).
Example
# insf -e
insf: Installing special files for mux2 instance 0 address 8/0/0
:
:
#

Failure of the insf -e command indicates a SAN problem. If the device files for the new disk array devices cannot be created automatically, you must create the device files manually using the mknod command as follows:
1. Retrieve the device information you recorded earlier.
2. Construct the device file name for each device, using the device information, and enter the file names in your table.
6. Create the device files for all disk array devices (SCSI disk and multiplatform devices) using the mknod command. Create the block-type device files in the /dev/dsk directory and the character-type device files in the /dev/rdsk directory.

Example
# cd /dev/dsk                           Go to the /dev/dsk directory.
# mknod /dev/dsk/c2t6d0 b 31 0x026000   Create the block-type file
                                        (b = block-type, 31 = major #, 0x026000 = minor #).
# cd /dev/rdsk                          Go to the /dev/rdsk directory.
# mknod /dev/rdsk/c2t6d0 c 188 0x026000 Create the character-type file.
The physical volumes that make up one volume group can be located either in the same disk array or in other disk arrays. To allow more volume groups to be created, use SAM to modify the HP-UX system kernel configuration. See “Reference information for the HP System Administration Manager (SAM)” (page 154) for details. Newer releases of HP-UX have deprecated SAM and replaced it with the System Management Homepage (SMH) tool.
To create volume groups:
1. 2. 3.
9. Use vgdisplay -v to verify that the volume group was created correctly. The -v option displays the detailed volume group information.
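Steps 1 through 8 of this procedure are truncated in this excerpt. As a minimal sketch of a typical HP-UX volume group creation sequence, assuming the hypothetical device c6t0d0 and volume group vg06 used elsewhere in this chapter:

Example
# pvcreate /dev/rdsk/c6t0d0              Initialize the disk as an LVM physical volume.
# mkdir /dev/vg06                        Create the volume group directory.
# mknod /dev/vg06/group c 64 0x060000    Create the group file (c = character, 64 = LVM major #).
# vgcreate /dev/vg06 /dev/dsk/c6t0d0     Create the volume group on the physical volume.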
To create logical volumes:
1. Use the lvcreate -L command to create a logical volume. Specify the volume size (in megabytes) and the volume group for the new logical volume. HP-UX assigns the logical volume numbers automatically (lvol1, lvol2, lvol3).
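As a minimal sketch, assuming a 100 MB logical volume in the hypothetical volume group vg06:

Example
# lvcreate -L 100 /dev/vg06      Create a 100 MB logical volume; HP-UX names it lvol1.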
Creating the file systems
Create the file system for each new logical volume on the disk array. The default file system types are:
• HP-UX OS version 10.20 = hfs or vxfs, depending on the entry in the /etc/defaults/fs file.
• HP-UX OS version 11.0 = vxfs
• HP-UX OS version 11i = vxfs
To create file systems:
1. Use the newfs command to create the file system using the logical volume as the argument.
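As a minimal sketch, assuming a vxfs file system on the raw device of the first logical volume in the hypothetical volume group vg06:

Example
# newfs -F vxfs /dev/vg06/rlvol1     Create a vxfs file system on the raw logical volume.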
# pvchange -t 60 /dev/dsk/c0t6d0
Physical volume "/dev/dsk/c0t6d0" has been successfully changed.
Volume Group configuration for /dev/vg06 has been saved in /etc/lvmconf/vg06.conf.

3. Verify that the new I/O timeout value is 60 seconds using the pvdisplay command:

Example
# pvdisplay /dev/dsk/c0t6d0
--- Physical volumes ---
PV Name                 /dev/dsk/c0t6d0
VG Name                 /dev/vg06
PV Status               available
:
Stale PE                0
IO Timeout (Seconds)    60    [New I/O timeout value]

4.
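Step 4 is truncated in this excerpt. Before the verification that follows, each logical volume must be mounted; a minimal sketch, assuming logical volume lvol1 in vg06 and the mount point /AHPMD-LU00 used in the next example:

Example
# mkdir /AHPMD-LU00                      Create the mount point.
# mount /dev/vg06/lvol1 /AHPMD-LU00      Mount the new file system.
# df -k /AHPMD-LU00                      Verify the mount.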
:
/dev/vg06/lvol1    2348177    9    2113350    0%    /AHPMD-LU00

4. As a final verification, perform some basic UNIX operations (for example, file creation, copying, and deletion) on each logical device to make sure that the devices on the disk array are fully operational.

Example
# cd /AHPMD-LU00
# cp /bin/vi /AHPMD-LU00/vi.back1
# ls -l
drwxr-xr-t  2 root root    8192 Mar 15 11:35 lost+found
-rwxr-xr-x  1 root sys   217088 Mar 15 11:41 vi.back1
# cp vi.back1 vi.
3 Windows You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 29) 2. 3.
In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For more information about LUN mapping, see the HP StorageWorks P9000 Provisioning for Open Systems User Guide or Remote Web Console online help. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
Table 9 Host group modes (options) Windows

Host Group Mode   Function                     Default    Comments
6                 Failure for TPRLO            Inactive   When using the Emulex FCA in the Windows
                                                          environment and the parameter setting for TPRLO
                                                          fails after receiving TPRLO and FCP_CMD,
                                                          respectively, PRLO will respond when
                                                          HostMode=0x0C/0x2C and HostModeOption=0x06.
                                                          (MAIN Ver.50-03-14-00/00 and later)
13                SIM report at link failure.
Fabric zoning and LUN security By using appropriate zoning and LUN security, you can connect various servers with various operating systems to the same switch and fabric with the following restrictions: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.

Verifying the host recognizes array devices
1. Log into the host as an administrator.
2. Right-click My Computer, and then click Manage.
3. Click Device Manager.
4. Click SCSI and then RAID Controllers.
Creating and formatting disk partitions
Dynamic Disk is supported with no restrictions for a disk array connected to a Windows 2000/2003/2008 system. For more information, see Microsoft's online help.

CAUTION: Do not partition or create a file system on a device that will be used as a raw device (for example, some database applications use raw devices).

1. In the Disk Management main window, select the unallocated area for the SCSI disk you want to partition.
2.
4 Novell NetWare You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 35) 2. 3.
• Creating host groups • Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide.
NetWare Client software is required for the client system. After installing the software on the NetWare server, follow these steps:
1. Open the Novell Client Configuration dialog and click the Advanced Settings tab.
2. Change the following parameters:
   Give up on Requests to SAs: 180
   Net Status Busy Timeout: 90

Configuring NetWare ConsoleOne

NetWare 6.x
Novell NetWare v6.x requires a tool called ConsoleOne to work in a storage environment. ConsoleOne is a free Java utility used to manage network resources.
Within the SAN, the clusters must be homogeneous (all the same operating system). Heterogeneous (mixed operating systems) clusters are not allowed. How you configure LUN security and fabric zoning depends on the SAN configuration.
3. Mounting the new volumes
4. Verifying client operations
Creating scripts to configure all devices at once can save you considerable time.

Creating the disk partitions
Before you create the disk partitions, consult the Novell documentation for confirmation about the type of partition that is available with your operating system version.

NetWare 5.x
1. At the server console, enter LOAD NWCONFIG to load the Configuration Options module.
7. To reserve space for the Hot Fix error correction feature, select Hot Fix and enter the amount of space or percentage you want to reserve. Mirrored partitions must be compatible in data area size. This means the new partition must be at least the same size or slightly larger than the other partitions in the group.
1. Move the cursor to the line containing the desired device.
2. Move the cursor onto (free space) in the Volume assignment column.
3. Press Enter.
4. When the What do you want to do with this free segment? message appears, select an option, and press Enter.
5. If you selected Make this segment part of another volume, select the volume to add this segment to, and then press Enter.
5. If you chose to mount volumes selectively, select the desired volume, press Enter to mount the volume, and then confirm that the volume's status changed to MOUNTED. Repeat this step for each new volume to confirm that all new volumes can be mounted successfully.
6. When you have confirmed that all new volumes/devices were mounted successfully, you are finished with disk array device configuration. Leave the new volumes mounted for now, so you can verify that NetWare clients can access the new volumes.
For assistance with NHAS or SFT III operations, see the Novell user documentation, or contact Novell customer support.

Multipath failover
The P9000 disk arrays support NetWare multipath failover. If multiple FCAs are connected to the disk array with commonly shared LUNs, you can configure path failover to recognize each new device path:
1. In the startup.cfg file, enter:
   SET MULTI-PATH SUPPORT=ON
   LOAD SCSIHD.CDM AEN
2. If the line LOAD CPQSHD.CDM is present, comment it out, as in the sketch below.
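As a minimal sketch of the resulting startup.cfg fragment (the rest of your file's contents will differ, and the CPQSHD.CDM line appears only if it was already present):

Example startup.cfg
SET MULTI-PATH SUPPORT=ON
# LOAD CPQSHD.CDM
LOAD SCSIHD.CDM AEN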
This command sets the state of the path. The ID must be a valid path ID. If the path is Up, it can be taken offline, and will not be used to switch to. Another path will be selected. If the path is offline, the path will be reactivated, if possible, and set to an Up state. The /setpath option will reselect the highest priority path that is up. MM Set failover path This command moves the selected path to the one specified by the path ID. The ID must be a valid path that is up.
5. Click Next.
6. Select the server context within the tree (Novell), and click OK.
7. Click Next.
8. Add servers to the cluster:
   • Click Browse.
   • Highlight all of the servers that you want to add and click Add to Cluster. Use the shift and control keys to select multiple nodes.
   • After all servers in the cluster appear on the list, click Next. Wait while NWDeploy accesses each node.
   • After all services have been accessed and added to the “NetWare Servers in Cluster” list, click OK.
6. Select Create.
5 NonStop You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. The HP NonStop operating system runs on HP S-series and Integrity NonStop servers to provide continuous availability for applications, databases, and devices.
two different clusters of the disk array, and give each host group access to separate but identical LUNs. This arrangement minimizes the shared components among the four paths, providing both mirroring and greater failure protection.

NOTE: For the highest level of availability and fault tolerance, HP recommends the use of two P9000 disk arrays, one for the Primary disks and one for the Mirror disks.

This process is also called “LUN mapping.”
Table 12 System option modes (NonStop)

System Option Mode    Minimum microcode version (P9500)
142                   Available from initial release
454                   Available from initial release
685¹                  N/A
724                   N/A

HP also recommends setting host group mode 13 with P9000 storage systems that are connected to HP NonStop systems. System option mode 724 is used to balance the load across the cache PC boards by improving the process of freeing pre-read slots. To use system option mode 724, four or more cache PC boards must be installed.
Table 13 Fabric zoning and LUN security settings (NonStop)

Environment         Fabric Zoning   LUN Security
Single node SAN     Not required    Must be used
Multiple node SAN   Not required    Must be used

Connecting the disk array
The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
6 OpenVMS You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. Installing and configuring the disk array 2. 3. 4.
NOTE: As illustrated in “Microprocessor port sharing (OpenVMS)” (page 52), there is no microprocessor sharing with 8-port module pairs. With 16- and 32-port module pairs, alternating ports are shared. Table 14 Microprocessor port sharing (OpenVMS) Channel adapter Model Description Nr.
For all systems in an OpenVMS cluster:

$ run sys$system:sysman
SYSMAN> set environment/cluster
SYSMAN> io autoconfigure/log

5. Verify the online status of the P9000 LUNs, and confirm that all expected LUNs are shown online.

Setting the host mode for the disk array ports
After the disk array is installed, you must set the host mode for each host group that is configured on a disk array port to match the host OS. Set the host mode using LUN Manager in Remote Web Console or Command View Advanced Edition.
If host mode option 33 is not set, then the default behavior is to present the volumes to the OpenVMS host by calculating the decimal value of the hexadecimal CU:LDEV value. That calculated value will be the value of the DGA device number. CAUTION: • The UUID (or by default the decimal value of the CU:LDEV value) must be unique across the SAN for the OpenVMS host and/or OpenVMS cluster. No other SAN storage controllers should present the same value. If this value is not unique, data loss will occur.
Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
Configuring disk array devices Configure the disk array devices in the same way you would configure any new disk on the host server. Creating scripts to configure all devices at once could save you considerable time.
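As a minimal sketch of placing a new device into service before the verification steps below, assuming device $1$dga100 and the volume label USER (both consistent with the examples that follow):

Example
$ initialize $1$dga100: user         ! Initialize the volume with label USER
$ mount/system $1$dga100: user       ! Mount the volume systemwide
$ create/directory $1$dga100:[user]  ! Create the top-level directory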
4. Verify that this directory exists:

Example
$ show default
$1$dga100:[user]

If the user directory does not exist, OpenVMS returns an error.

5. Create a test user file:

Example
$ create test.txt
this is a line of text for the test file test.txt
[Control-Z]

The create command creates a file with data entered from the terminal. Control-Z terminates the data input.

6. Verify that the file was created:

Example
$ directory

Directory $1$DGA100:[USER]

TEST.TXT;1

Total of 1 file.

7.
7 VMware You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 59) 2. 3. 4.
In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
multi-cluster environment with three clusters, each containing two nodes. The nodes share access to the disk array. Figure 6 Multi-cluster environment (VMware) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.
Configuring VMware ESX Server VMware ESX Server 2.5x 1. Open the management interface, select the Options tab, and then click Advanced Settings.... 2. In the “Advanced Settings” window, scroll down to Disk.MaskLUN. 3. Verify that the value is large enough to support your configuration (default=8). If the value is less than the number of LUNs you have presented then you will not see all of your LUNs. The maximum value is 256. VMware ESX Server 3.0x 1. 2. 3.
Setting up virtual machines (VMs) and guest operating systems

Setting the SCSI disk timeout value for Windows VMs
To ensure Windows VMs (Windows 2000 and Windows Server 2003) wait at least 60 seconds for delayed disk operations to complete before generating errors, you must set the SCSI disk timeout value to 60 seconds by editing the registry of the guest operating system as follows:

CAUTION: Before you edit the registry, back up the registry file.

1. 2. 3.
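The numbered steps are truncated in this excerpt. As a sketch, the registry value involved is the standard Windows disk timeout key, shown here as a .reg fragment (verify against your guest operating system before applying):

Example
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
; 0x3c hexadecimal = 60 seconds
"TimeOutValue"=dword:0000003c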
3. Select the Bus Sharing mode (virtual or physical) appropriate for your configuration, and then click OK.

NOTE: Sharing VMDK disks is not supported.
VMware ESX Server 3.0x 1. In VirtualCenter, select the VM you plan to edit, and then click Edit Settings. 2. Select the SCSI controller for use with your shared LUNs. NOTE: If only one SCSI controller is present, add another disk that uses a different SCSI bus than your current configured devices. 3. Select the Bus Sharing mode (virtual or physical) appropriate for your configuration, and then click OK. NOTE: Sharing VMDK disks is not supported.
2. Linux:
   • For the 2.4 kernel use the LSI Logic SCSI driver.
   • For the 2.6 kernel use the BusLogic SCSI driver.
8 Linux You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 67) 2. 3. 4.
This process is also called “LUN mapping.”
Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host.
Table 19 Fabric zoning and LUN security settings (Linux)

Environment                      OS Mix                             Fabric Zoning   LUN Security
Standalone SAN (non-clustered)   homogeneous (a single OS type      Not required    Must be used when multiple
                                 present in the SAN)                                hosts or cluster nodes
Clustered SAN                    heterogeneous (more than one OS    Required        connect through a shared
Multi-Cluster SAN                type present in the SAN)                           port

Connecting the disk array
The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
3. Verify that the system recognizes the disk array partitions by viewing the /proc/partitions file.
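As a quick check, each new LU should appear as an sd device (for example, the sdb, sdc, and sdd devices used later in this chapter):

Example
# cat /proc/partitions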
4.
5. Select w to write the partition information to disk and complete the fdisk command. Other commands that you might want to use include:
   • d to remove partitions
   • q to stop a change
6. Repeat steps 1–5 for each device.

Creating the file systems
The supported file system for Linux is ext2.

Creating file systems with ext2
1. Enter mkfs -t ext2 /dev/device_name.

Example
# mkfs -t ext2 /dev/sdd

2. Repeat step 1 for each device on the disk array.
1. Edit the /etc/fstab file to add one line for each device to be automounted. Each line of the file contains: (A) device name, (B) mount point, (C) file system type (“ext2”), (D) mount options (“defaults”), (E) dump frequency (“1”), and (F) fsck pass number (“2”).

Example
/dev/sdb   /A5700F_ID08   ext2   defaults   1   2
/dev/sdc   /A5700F_ID09   ext2   defaults   1   2
/dev/sdd   /A5700F_ID10   ext2   defaults   1   2
   A           B           C        D       E   F

Make an entry for each device. After all the entries are made, save the file and exit the editor.
2. 3.
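Steps 2 and 3 are truncated in this excerpt. As a minimal sketch of the usual continuation (create the mount points, mount everything in /etc/fstab, and verify):

Example
# mkdir /A5700F_ID08 /A5700F_ID09 /A5700F_ID10
# mount -a       Mount every file system listed in /etc/fstab.
# df             Verify that the new file systems are mounted.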
9 Solaris You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 74) 2. 3. 4. 5.
This process is also called “LUN mapping.”
Table 20 Host group modes (options) Solaris (continued)

Host Group Mode   Function                     Default    Comments
13                SIM report at link failure   Inactive   Optional. This mode is common to all host
                                                          platforms. Select HMO 13 to enable SIM
                                                          notification when the number of link failures
                                                          detected between ports exceeds the threshold.
Installing and configuring the FCAs Install and configure the FCA driver software and setup utilities according to the manufacturer's instructions. Configuration settings specific to the P9000 array differ depending on the manufacturer. Specific configuration information is detailed in the following sections. WWN The FCA configuration process might require you to enter the WWN for the array port(s) to which the FCA connects.
3. To set the queue depth, add the following to the /etc/system file (for the x value, see Table 21 (page 77)):

   set sd:sd_max_throttle = x
   set ssd:ssd_max_throttle = x      (for Oracle generic FCA)

Example:
set sd:sd_max_throttle = 16       <-- Add this line to /etc/system
set ssd:ssd_max_throttle = 16     <-- Add this line to /etc/system (for Oracle generic FCA)
• For Solaris 8/9, perform a reconfiguration reboot of the host to implement changes to the configuration file. For Solaris 10, use the stmsboot command, which performs the modifications and then initiates a reboot.
• For Solaris 8/9, after you have rebooted and the LDEV has been defined as a LUN to the host, use the cfgadm command to configure the controller instances for SAN connectivity. The controller instance (c#) may differ between systems.
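As a minimal sketch of the cfgadm step, assuming the hypothetical controller instance c2 (list the attachment points first to identify yours):

Example
# cfgadm -al                List attachment points and their states.
# cfgadm -c configure c2    Configure the fabric-attached devices on controller c2.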
name="sd" parent="lpfc" target=30 lun=1; name="sd" parent="lpfc" target=30 lun=2; • Perform a reconfiguration reboot to implement the changes to the configuration files. • If LUNs have been preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after configuring LUNs as explained in “Defining the paths” (page 15).
Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers. Clustering is the organization of multiple servers into groups. Within a cluster, each server is a node. Multiple clusters compose a multi-cluster environment. The following example shows a multi-cluster environment with three clusters, each containing two nodes. The nodes share access to the disk array.
Pre-configure additional LUNs (not yet made available) to avoid unnecessary reboots. See “Installing and configuring the FCAs” (page 77) for individual driver requirements. Verifying host recognition of disk array devices Verify that the host recognizes the disk array devices as follows: 1. Use format to display the device information. 2. Check the list of disks to verify the host recognizes all disk array devices.
6. If you are not using Veritas Volume Manager or Solaris Volume Manager with named disk sets, use the partition command to create or adjust the slices (partitions) as necessary.
7. Repeat this labeling procedure for each new device (use the disk command to select another disk).
8. When you finish labeling the disks, enter quit or press Ctrl-D to exit the format utility.
For further information, see the System Administration Guide - Devices and File Systems at: http://www.oracle.
VxVM 3.2 and later use ASL to configure the DMP feature and other parameters. The ASL is required for all arrays. With VxVM 5.0 or later, the ASL is delivered with the Volume Manager and does not need to be installed separately. With VxVM 4.x versions, you need to download and install the ASL from the Symantec/Veritas support website (http://support.veritas.com):
1. Select Volume Manager for Unix/Linux as the product and search for the P9000 array model with Solaris as the platform.
2.
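After the ASL is installed, you can confirm that VxVM claims the array; a minimal sketch using standard VxVM administration commands (output varies by configuration):

Example
# vxdctl enable              Rescan devices and rebuild the DMP database.
# vxddladm listsupport all   List the installed Array Support Libraries.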
10 IBM AIX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 85) 2. 3.
• Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host. Notify your HP representative if you install storage agnostic software (such as backup or cluster software) that might require specific settings. Configuring the Fibre Channel ports Configure the disk array Fibre Channel ports by using Command View Advanced Edition or Remote Web Console.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
Configuring disk array devices Disks in the disk array are configured using the same procedure for configuring any new disk on the host. This includes the following procedures: • Changing the device parameters • Assigning the new devices to volume groups • Creating the Journaled File Systems • Mounting and verifying the file systems Creating scripts to configure all devices at once can save you considerable time.
For example, enter the following command to change the queue depth for the device hdisk3:

# chdev -l hdisk3 -a queue_depth='2'

4. Verify that the parameters for all devices were successfully changed. For example, enter the following command to verify the parameter change for the device hdisk3:

# lsattr -E -l hdisk3

5. Repeat these steps for each OPEN-x device on the disk array.
Use QERR Bit                        [yes]
Device CLEARS its Queue on Error    [no]
READ/WRITE time out value           [60]
START unit time out value           [60]
REASSIGN time out value             [120]
APPLY change to DATABASE only       no

7. Repeat these steps for each OPEN-x device on the disk array.

Assigning the new devices to volume groups
Assign the new devices to volume groups using the AIX system's Logical Volume Manager (accessed from within SMIT). This operation is not required when the volumes are used as raw devices.
List All Volume Groups
Add a Volume Group
Set Characteristics of a Volume Group
List Contents of a Volume Group
Remove a Volume Group
Activate a Volume Group
Deactivate a Volume Group
Import a Volume Group
Export a Volume Group
Mirror a Volume Group *1
Unmirror a Volume Group *1
Synchronize LVM Mirrors *1
Back Up a Volume Group
Remake a Volume Group
List Files in a Volume Group Backup
Restore Files in a Volume Group Backup

6.
Creating the journaled file systems Create the journaled file systems using SMIT. This operation is not required when the volumes are used as raw devices. The largest file system permitted in AIX is 64 GB. 1. Start SMIT. 2. Select System Storage Management (Physical & Logical Storage). Example System Management Move cursor to desired item and press Enter.
6. Select Add a Journaled File System. Example Journaled File System Move cursor to desired item and press Enter. Add a Journaled File System Add a Journaled File System on a Previously Defined Logical Volume Change / Show Characteristics of a Journaled File System Remove a Journaled File System Defragment a Journaled File System 7. Select Add a Standard Journaled File System. Example Add a Journaled File System Move cursor to desired item and press Enter.
10. Press Enter to create the Journaled File System. The Command Status screen appears. Wait for “OK” to appear on the Command Status line.
11. To continue creating Journaled File Systems, press the F3 key until you return to the Add a Journaled File System screen. Repeat steps 2 through 10 for each Journaled File System to be created.
12. To exit SMIT, press the F10 key.
5. Use the df command to verify that the file systems have successfully automounted after a reboot. Any file systems that were not automounted can be set to automount using the SMIT Change a Journaled File System screen. If you are using HACMP or HAGEO, do not set the file systems to automount.
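As a quick check, run df after the reboot; each new journaled file system should be listed at its mount point:

Example
# df -k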
11 Citrix XenServer Enterprise You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: 1. “Installing and configuring the disk array” (page 97) 2. 3. 4.
• Creating host groups • Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide.
Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers.
Table 27 Fabric zoning and LUN security settings (Linux)

Environment                      OS Mix                             Fabric Zoning   LUN Security
Standalone SAN (non-clustered)   homogeneous (a single OS type      Not required    Must be used when multiple
                                 present in the SAN)                                hosts or cluster nodes
Clustered SAN                    heterogeneous (more than one OS    Required        connect through a shared
Multi-Cluster SAN                type present in the SAN)                           port

Connecting the disk array
The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
host1   qlogic   QLogic HBA Driver   1
host0   qlogic   QLogic HBA Driver   0
[root@cb-xen-srv31 ~]#

Configuring disk array devices
Disks in the disk array are configured using the same procedure for configuring any new disk on the host. This includes the following procedures:
1. Configuring multipathing
2. Creating a Storage Repository
3.
3. Click Enter Maintenance Mode.
4. Select the General tab and then click Properties.
5. Select the Multipathing tab, check the Enable multipathing on this server check box, and then click OK. 6. Right-click the domU that was placed in maintenance mode and select Exit Maintenance Mode.
7. Open a command line interface to the dom0 and edit the /etc/multipath-enable.conf file to add the appropriate array entry.

NOTE: HP recommends that you use the RHEL 5.x device mapper config file and multipathing parameter settings on HP.com. Use only the array-specific settings, not the multipath.conf file bundled with the device mapper kit. All array host modes for Citrix XenServer are the same as Linux.

8.
3. Select the type of virtual disk storage for the storage array and then click Next. NOTE: For Fibre Channel, select Hardware HBA.
4. Complete the template and then click Finish.

Adding a Virtual Disk to a domU
After the Storage Repository has been created on the dom0, the vdisk from the Storage Repository can be assigned to the domU. This section describes how to pass vdisks to the domU. HP ProLiant Virtual Console can be used with HP Integrated Citrix XenServer Enterprise Edition to complete this process.
1. Select the domU. 2. Select the Storage tab and then click Add.
3. Type a name, description, and size for the new disk and then click Add.

Adding a dynamic LUN
To add a LUN to a dom0 dynamically, follow these steps:
1. Create and present a LUN to a dom0 from the array.
2. Enter the following command to rescan the sessions that are connected to the arrays for the new LUN:
   # xe sr-probe type=lvmohba

NOTE: To create a new Storage Repository, see “Creating a Storage Repository” (page 104).
12 Troubleshooting This chapter includes resolutions for various error conditions you may encounter. If you are unable to resolve an error condition, ask your HP support representative for assistance.
Table 28 Error conditions (continued)

Error condition                                        Recommended action
The host detects a parity error.                       Check the FCA and make sure it was installed properly. Reboot the host.
The host hangs, or devices are declared and the        Make sure there are no duplicate disk array TIDs and that disk array TIDs
host hangs.                                            do not conflict with any host TIDs.
13 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
• 1 GB (gigabyte) = 1,000³ bytes
• 1 TB (terabyte) = 1,000⁴ bytes
• 1 PB (petabyte) = 1,000⁵ bytes
• 1 EB (exabyte) = 1,000⁶ bytes

HP P9000 storage systems use the following values to calculate logical storage capacity values (logical devices):
• 1 block = 512 bytes
• 1 KB (kilobyte) = 1,024 (2¹⁰) bytes
• 1 MB (megabyte) = 1,024² bytes
• 1 GB (gigabyte) = 1,024³ bytes
• 1 TB (terabyte) = 1,024⁴ bytes
• 1 PB (petabyte) = 1,024⁵ bytes
• 1 EB (exabyte) = 1,024⁶ bytes
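Because physical capacity uses decimal units and logical capacity uses binary units, the same device yields different figures; a short worked example:

Example
100 GB physical = 100 × 1,000³ bytes = 100,000,000,000 bytes
Expressed as logical capacity: 100,000,000,000 / 1,024³ bytes ≈ 93.13 GB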
A Path worksheet

Worksheet

Table 29 Path worksheet

LDEV (CU:LDEV)        Device Type   SCSI Bus Number   Path 1          Alternate Paths
(CU = control unit)
0:00                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:01                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:02                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:03                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:04                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:05                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:06                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:07                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:08                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:09                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
0:10                                                  TID:   LUN:     TID:   LUN:     TID:   LUN:
B Path worksheet (NonStop)

Worksheet

Table 30 Path worksheet (NonStop)

LUN #   CU:LDEV ID   Array Group   Emulation type   Array Port   Array Port WWN      NSK Server   NSK SAC name (G-M-S-S)   NSK SAC WWN         NSK volume name   Path
Example:
00      01:00        1-11          OPEN-E           1A           50060E80 0437B000   /OSDNSK3     110-2-3-1                50060B00 002716AC   $XPM001           P
C Disk array supported emulations HP-UX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 32 Emulation specifications (HP-UX) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote 7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96
OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume:
# of cylinders = ⌈(capacity (MB) specified by user) × 1024/720⌉ × n

Example
For a CVS LUSE volume with capacity = 37 MB and n = 4:
# of cylinders = ⌈37 × 1024/720⌉ × 4 = ⌈52.62⌉ × 4 = 53 × 4 = 212

OPEN-V: The number of cylinders for a CVS LUSE volume:
# of cylinders = ⌈(capacity (MB) specified by user) × 16/15⌉ × n

Example
For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4:
# of cylinders = ⌈49 × 16/15⌉ × 4 = ⌈52.26⌉ × 4 = 53 × 4 = 212
Table 33 LUSE device parameters (HP-UX) (continued) Device type OPEN-L*n Physical extent size (PE) Max physical extent size (MPE) n = 11 8 19102 n = 12 8 20839 n = 13 8 22576 n = 14 8 24312 n = 15 8 26049 n = 16 8 27786 n = 17 8 29522 n = 18 8 31259 n = 19 8 32995 n = 20 8 34732 n = 21 8 36469 n = 22 8 38205 n = 23 8 39942 n = 24 8 41679 n = 25 8 43415 n = 26 8 45152 n = 27 8 46889 n = 28 8 48625 n = 29 8 50362 n = 30 8 52098 n = 31 8 53835
SCSI TID map for Fibre Channel adapters When an arbitrated loop (AL) is established or reestablished, the port addresses are assigned automatically to prevent duplicate TIDs. With the SCSI over Fibre Channel protocol (FCP), there is no longer a need for target IDs in the traditional sense. SCSI is a bus-oriented protocol requiring each device to have a unique address because all commands go to all devices. For Fibre Channel, the AL-PA is used instead of the TID to direct packets to the desired destination.
Windows This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 36 Emulation specifications (Windows) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors Capacity MB* 4 per track OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Foo
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
Novell NetWare This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 38 Emulation specifications (Novell NetWare) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Note 6 15 96 Note 7 5 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote 512 Note 6 15 96 Note 7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote5 512 Note 6 15 96 Note 7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Footnote5 512 Note 6
Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
NonStop This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
OpenVMS This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 42 Emulation specifications (OpenVMS) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors Capacity MB* per track 4 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96
OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.
VMware This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 44 Emulation specifications (VMware) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7
For an OPEN-3 CVS volume with capacity = 37 MB: # of cylinders = 37 × 1024/720 = 52.62 (rounded up to next integer) = 53 cylinders OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.
Linux This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 46 Emulation specifications (Linux) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Footnote5 512 Note 6 15 96 F
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
Solaris This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 48 Emulation specifications (Solaris) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Footnote5 512 Footno
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
IBM AIX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 50 Emulation specifications (IBM AIX) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Note 5 512 Footnote6 15 96
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
Table 51 OPEN-3 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-3 OPEN-3*n (n=2 to 36) OPEN-3 CVS OPEN-3 CVS*n (n=2 to 36) pf f partition size Set optionally Set optionally Set optionally Set optionally pg g partition size Set optionally Set optionally Set optionally Set optionally ph h partition size Set optionally Set optionally Set optionally Set optionally ba a partition block size 8,192 8,192 8,192 8,192 bb b partition block size 8,192 8,
Table 52 OPEN-8 parameters by emulation type (IBM AIX) (continued)

Parameter | OPEN-8 | OPEN-8*n (n=2 to 36) | OPEN-8 CVS | OPEN-8 CVS*n (n=2 to 36)
oc (c partition offset; starting block in c partition) | 0 | 0 | 0 | 0
od (d partition offset; starting block in d partition) | Set optionally | Set optionally | Set optionally | Set optionally
oe (e partition offset; starting block in e partition) | Set optionally | Set optionally | Set optionally | Set optionally
of (f partition offset; starting block in f partition) | Set optionally | Set optionally | Set optionally | Set optionally
Table 52 OPEN-8 parameters by emulation type (IBM AIX) (continued)

Parameter | OPEN-8 | OPEN-8*n (n=2 to 36) | OPEN-8 CVS | OPEN-8 CVS*n (n=2 to 36)
fh (h partition fragment size) | 1,024 | 1,024 | 1,024 | 1,024

See “Notes for disk parameters” (page 146).
Table 53 OPEN-9 parameters by emulation type (IBM AIX) (continued)

Parameter | OPEN-9 | OPEN-9*n (n=2 to 36) | OPEN-9 CVS | OPEN-9 CVS*n (n=2 to 36)
ba (a partition block size) | 8,192 | 8,192 | 8,192 | 8,192
bb (b partition block size) | 8,192 | 8,192 | 8,192 | 8,192
bc (c partition block size) | 8,192 | 8,192 | 8,192 | 8,192
bd (d partition block size) | 8,192 | 8,192 | 8,192 | 8,192
be (e partition block size) | 8,192 | 8,192 | 8,192 | 8,192
bf (f partition block size) | 8,192 | 8,192 | 8,192 | 8,192
bg (g partition block size) | 8,192 | 8,192 | 8,192 | 8,192
Table 54 OPEN-E parameters by emulation type (IBM AIX) (continued)

Parameter | OPEN-E | OPEN-E*n (n=2 to 36) | OPEN-E CVS | OPEN-E CVS*n (n=2 to 36)
oe (e partition offset; starting block in e partition) | Set optionally | Set optionally | Set optionally | Set optionally
of (f partition offset; starting block in f partition) | Set optionally | Set optionally | Set optionally | Set optionally
og (g partition offset; starting block in g partition) | Set optionally | Set optionally | Set optionally | Set optionally
pc = nc × nt × ns

The nc of an OPEN-x CVS volume corresponds to the capacity specified from the SVP or remote console. The size of an OPEN-x CVS volume is specified as a capacity in megabytes, not as a number of cylinders. The number of cylinders of an OPEN-x CVS volume is obtained by the following calculation, where ⌈ ⌉ means round up to the next integer:

# of cylinders = ⌈(specified capacity in MB from the SVP or remote console) × 1,024 / 720⌉
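The physical block count pc follows directly from these values. As a worked illustration (added here for clarity, not part of the original notes; the function name is arbitrary), using nt = 15 heads and ns = 96 sectors per track as listed in the emulation tables:

    import math

    NT = 15  # heads (nt) for OPEN-x emulations, per the emulation tables
    NS = 96  # sectors per track (ns) for OPEN-x emulations

    def open_x_cvs_pc(capacity_mb: int) -> int:
        """pc = nc * nt * ns, where nc is derived from the capacity (MB)
        specified from the SVP or remote console."""
        nc = math.ceil(capacity_mb * 1024 / 720)
        return nc * NT * NS

    # A 37 MB OPEN-x CVS volume: nc = ceil(52.62) = 53, so pc = 53 * 15 * 96 = 76320
    print(open_x_cvs_pc(37))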
Table 55 Byte information (IBM AIX) (continued)

Category | LU product name | Byte information
OPEN-3/8/9 CVS | OPEN-3 CVS, OPEN-8 CVS, OPEN-9 CVS, OPEN-E CVS | 4096
OPEN-3/8/9*n CVS | 35 to 64800 | 4096
OPEN-3/8/9*n CVS | 64801 to 126000 | 8192
OPEN-3/8/9*n CVS | 126001 and higher | 16384
OPEN-E | OPEN-E*2 to OPEN-E*4 | 4096
OPEN-E | OPEN-E*5 to OPEN-E*9 | 8192
OPEN-E | OPEN-E*10 to OPEN-E*18 | 16384
OPEN-L | OPEN-L | 4096
OPEN-L | OPEN-L*2 to OPEN-L*3 | 8192
OPEN-L | OPEN-L*4 to OPEN-L*7 | 16384
OPEN-x CVS | OPEN-3 CVS, OPEN-9 CVS, OPEN-E CVS, OPEN-V CVS | 4096
Physical partition size table

Table 56 Physical partition size (IBM AIX)

Category | LU product name | Physical partition size in megabytes
OPEN-3 | OPEN-3 | 4
OPEN-3 | OPEN-3*2 to OPEN-3*3 | 8
OPEN-3 | OPEN-3*4 to OPEN-3*6 | 16
OPEN-3 | OPEN-3*7 to OPEN-3*13 | 32
OPEN-3 | OPEN-3*14 to OPEN-3*27 | 64
OPEN-3 | OPEN-3*28 to OPEN-3*36 | 128
OPEN-8 | OPEN-8 | 8
OPEN-8 | OPEN-8*2 | 16
OPEN-8 | OPEN-8*3 to OPEN-8*4 | 32
OPEN-8 | OPEN-8*5 to OPEN-8*9 | 64
OPEN-8 | OPEN-8*10 to OPEN-8*18 | 128
OPEN-8 | OPEN-8*19 to OPEN-8*36 | 256
OPEN-9 | OPEN-9 | 8
OPEN-9 | OPEN-9*2 | 16
OPEN-9 | OPEN-9*3 to OPEN-9*4 | 32
OPEN-9 | OPEN-9*5 to OPEN-9*9 | 64
OPEN-9 | OPEN-9*10 to OPEN-9*18 | 128
Table 56 Physical partition size (IBM AIX) (continued)

LU product name | Physical partition size in megabytes
259201 to 518400 | 512
518401 and higher | 1024
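The progression in Table 56 is consistent with AIX LVM's default limit of 1016 physical partitions per physical volume: the physical partition size is the smallest power of two (in MB) that keeps the partition count within that limit. That rule is an inference from the table, not a statement from the array documentation; the following sketch reproduces the table's values under that assumption:

    def physical_partition_size_mb(capacity_mb: float, max_pps: int = 1016) -> int:
        """Smallest power-of-two physical partition size (MB) that fits the
        volume within AIX LVM's default limit of 1016 partitions per volume."""
        size = 1
        while size * max_pps < capacity_mb:
            size *= 2
        return size

    # OPEN-3 is about 2347 MB: 4 MB partitions (2347 / 4 < 1016), as in Table 56.
    print(physical_partition_size_mb(2347))      # 4
    print(physical_partition_size_mb(2347 * 4))  # 16, matching OPEN-3*4 to OPEN-3*6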
D Using Veritas Cluster Server to prevent data corruption

Using VCS I/O fencing

VCS provides an I/O fencing feature that issues SCSI-3 Persistent Reserve commands to prevent data corruption if cluster communication stops. To accomplish I/O fencing, each VCS node registers reserve keys for every disk in each imported disk group. A reserve key consists of a value that is unique to the disk group and a value that distinguishes the node.
Figure 11 Nodes and ports

Table 57 Port 1A Key Registration Entries

Entry | Reserve key in registration table | WWN visible to Port-1A | LU - Disk Group
0 | APGR0001 | WWNa0 | 0, 1, 2 - Disk Group 1
1 | APGR0003 | WWNa0 | 8, 9 - Disk Group 3
2 | BPGR0001 | WWNb0 | 0, 1, 2 - Disk Group 1
3 | BPGR0002 | WWNb0 | 4, 5, 6 - Disk Group 2
4 | – | – | –
: | : | : | :
127 | – | – | –

Table 58 Port 2A Key Registration Entries

Entry | Reserve key in registration table | WWN visible to Port-2A | LU - Disk Group
0 | APGR0001 | WWNa1
: | : | : | :
127 | – | – | –
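The key values in Tables 57 and 58 follow a visible pattern: a letter identifying the node (A or B), the literal characters PGR, and a four-digit disk group number, which together fill the 8-byte SCSI-3 persistent reservation key. The following sketch (illustrative only; the exact key layout is internal to VCS, and the function name is arbitrary) mirrors that pattern:

    def reserve_key(node_id: str, disk_group: int) -> bytes:
        """Build an 8-byte reservation key in the style of Tables 57 and 58:
        a node letter, 'PGR', and a 4-digit disk group number."""
        key = f"{node_id}PGR{disk_group:04d}"
        assert len(key) == 8  # SCSI-3 persistent reservation keys are 8 bytes
        return key.encode("ascii")

    print(reserve_key("A", 1))  # b'APGR0001', registered by node A for Disk Group 1
    print(reserve_key("B", 2))  # b'BPGR0002', registered by node B for Disk Group 2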
E Reference information for the HP System Administration Manager (SAM)

The HP System Administration Manager (SAM) is used to perform HP-UX system administration functions, including:

• Setting up users and groups
• Configuring the disks and file systems
• Performing auditing and security activities
• Editing the system kernel configuration

This appendix provides instructions for:

• Using SAM to configure the disk devices
• Using SAM to set the maximum number of volume groups

Configuring the devices using SAM
To configure the newly installed disk array devices:

1. Select Disks and File Systems, then select Disk Devices.
2. Verify that the new disk array devices are displayed in the Disk Devices window.
3. Select the device to configure, select the Actions menu, select Add, and then select Using the Logical Volume Manager.
4. In the Add a Disk Using LVM window, select Create... or Extend a Volume Group.
F HP Clustered Gateway deployments

Windows

The HP Clustered Gateway and HP Scalable NAS software both use HP PolyServe software as their underlying clustering technology and have similar requirements for the P9000 disk array. Both have been tested with the P9000 disk arrays. This appendix details the configuration requirements specific to P9000 deployments using HP PolyServe software on Windows.
For details on importing and deporting disks, dynamic volume creation and configuration, and file system creation and configuration, see the HP StorageWorks Scalable NAS File Serving Software Administration Guide.

Linux

The HP Clustered Gateway and HP Scalable NAS software both use HP PolyServe software as their underlying clustering technology and have similar requirements for the P9000 disk array.
of RAID Manager, with both local and remote HORCM instances running on each server, and with all file system LUNs (P-VOLs) controlled by the local instance and all snapshot V-VOLs (S-VOLs) controlled by the remote instance.

Dynamic volume and file system creation

When the LUNs have been presented to all nodes in the cluster, import them into the cluster using the GUI or the mx command.
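For reference, the local/remote HORCM instance layout described above is typically captured in a pair of RAID Manager configuration files. The following is a minimal sketch of the local instance's file only; the device names, serial number, LDEV IDs, group names, and service names are placeholders, and the remote instance (horcm1) would mirror this file with the snapshot S-VOL LDEVs:

    # /etc/horcm0.conf
    # Local instance: controls the file system LUNs (P-VOLs).

    HORCM_MON
    #ip_address    service    poll(10ms)    timeout(10ms)
    localhost      horcm0     1000          3000

    HORCM_CMD
    #dev_name (command device presented by the array; placeholder path)
    /dev/sdh

    HORCM_LDEV
    #dev_group    dev_name    Serial#    CU:LDEV(LDEV#)    MU#
    fsgrp         fslu0       12345      01:20             0

    HORCM_INST
    #dev_group    ip_address    service (partner instance for the pair)
    fsgrp         localhost     horcm1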
Glossary

AL-PA: Arbitrated loop physical address. A 1-byte value that the arbitrated loop topology uses to identify the loop ports. This value becomes the last byte of the address identifier for each public port on the loop.

command device: A volume in the disk array that accepts Continuous Access, Business Copy, or P9000 for Business Continuity Manager control operations, which are then executed by the array.

CU: Control unit.

CVS: Custom volume size.
port: A physical connection that allows data to pass between a host and a disk array.

R-SIM: Remote service information message.

SIM: Service information message.

SNMP: Simple Network Management Protocol. A widely used network monitoring and control protocol. Data is passed from SNMP agents, which are hardware and/or software processes reporting activity in each network device (hub, router, bridge, and so on), to the workstation console used to oversee the network.
Index

A
Array Manager, 11, 14, 29, 35, 47, 48, 51, 52, 59, 67, 74, 85, 97
auto-mount parameters, setting, 28

B
Business Copy, 13

C
client, verifying operations, 42
clustering, 16, 37, 42, 44, 49, 55, 61, 69, 81, 87, 99
command device(s)
    one LDEV as a, 13
    RAID Manager, 13
Command View Advanced Edition, 11, 13, 14, 16, 29, 31, 35, 36, 47, 48, 49, 54, 59, 60, 67, 68, 74, 76, 85, 87, 97, 98
configuration
    cluster services, 44
    device, 19, 38, 50, 57, 71, 82, 89, 101
    emulation types, 12
    recognition, 18, 70, 100

F
fabric environment zoning, 16, 37, 49, 55, 61, 69, 81, 87, 99
failover, 12, 13, 42
    host, 42
    multi-path, 43
FCA(s)
    configuring, 16, 36, 55, 60, 69, 77, 87, 99
    Emulex, 79
    installation, verifying, 17
    multiple with shared LUNs, 43
    Oracle, 78
    QLogic, 80
    supported, 77
    verify driver installation, 70, 100
    verifying configuration, 80
FCSA(s)
    configuring, 49
    supported, 49
features, disk array, 10
Fibre Channel
    adapters, configuring, 31
    adapters, SCSI TID map, 11, 119
    connection speed, 11
    interface, 11
    ports, config
parameter tables
    byte information, 147
    physical partition size, 149
parity error, 110
partitioning devices, 82
partitions
    creating, 39
path(s)
    adding, 81
    defining, 15, 29, 35, 48, 52, 59, 74, 85, 97
    SCSI, 81
    worksheet, 113
physical volume(s)
    creating, 22
    creating groups, 23
port(s)
    Fibre Channel, 16, 31, 36, 49, 54, 60, 76, 87, 98
    Host Mode, setting, 15, 30, 36, 53, 60, 68, 98

R
configuration, 83
Virtual Machines setup, 63
volume(s)
    assigning, 40
    groups
        creating, 23
        setting maximum number, 155
    groups, ass