HP XP P9000 Disk Array Configuration Guide

Abstract
This guide provides requirements and procedures for connecting an HP XP P9000 disk array to a host system, and for configuring the disk array for use with a specific operating system. This document is intended for system administrators, HP representatives, and authorized service providers who are involved in installing, configuring, and operating HP XP P9000 disk arrays.
© Copyright 2010, 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Overview..........................................................................9
2 HP-UX............................................................................13
3 Windows..........................................................................28
4 NonStop..........................................................................34
5 OpenVMS..........................................................................38
6 VMware...........................................................................46
7 Linux............................................................................50
8 Solaris..........................................................................57
9 IBM AIX..........................................................................68
10 Citrix XenServer Enterprise.....................................................80
11 Troubleshooting.................................................................92
12 Support and other resources.....................................................94
A Path worksheet...................................................................96
B Path worksheet (NonStop).........................................................97
C Disk array supported emulations..................................................98
D Reference information for the HP System Administration Manager (SAM)...........131
E HP Clustered Gateway deployments................................................133
Glossary..........................................................................136
Index.............................................................................138
1 Overview

What's in this guide
This guide includes information on installing and configuring P9000 disk arrays. The following operating systems are covered:
• HP-UX
• Windows
• NonStop
• OpenVMS
• VMware
• Linux
• Solaris
• IBM AIX
• Citrix XenServer Enterprise

For additional information on connecting disk arrays to a host system and configuring for a mainframe, see the HP StorageWorks P9000 Mainframe Host Attachment and Operations Guide.
• HP StorageWorks P9000 Array Manager Software • Check with your HP representative for other P9000 software available for your system. NOTE: • Linux, NonStop, and Novell NetWare: Make sure you have superuser (root) access. • OpenVMS firmware version: Alpha System firmware version 5.6 or later for Fibre Channel support. Integrity servers have no minimum firmware version requirement. • HP does not support using Command View Advanced Edition Suite Software from a Guest OS.
Device emulation types The P9000 family of disk arrays supports these device emulation types: • OPEN-x devices: OPEN-x logical units represent disk devices. Except for OPEN-V, these devices are based on fixed sizes. OPEN-V is a user-defined size based on a CVS device. Supported emulations include OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, and OPEN-V devices.
Your HP representative might need to set specific disk array system modes for these products. Check with your HP representative for the current versions supported. • For I/O path failover, different products are available from Oracle, Veritas, and HP. Oracle supplies software called STMS for Solaris 8/9 and Storage Multipathing for Solaris 10. Veritas offers VxVM, which includes DMP. HP supplies HDLM.
2 HP-UX
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 13)
2. Installing and configuring the host
3. Connecting the disk array
4. Configuring disk array devices
Defining the paths
Use Command View Advanced Edition or Remote Web Console to define paths between hosts and volumes (LUNs) in the disk array. This process is also called "LUN mapping."
Table 2 System option modes (NonStop)

System option mode   Minimum microcode version (P9500)
142                  Available from initial release
454                  Available from initial release
685                  N/A
724                  N/A

HP also recommends setting host group mode 13 with P9000 storage systems that are connected to HP NonStop systems.

Configuring the Fibre Channel ports
Configure the disk array Fibre Channel ports by using Command View Advanced Edition or Remote Web Console. Select the settings for each port based on your SAN topology.
Figure 2 Multi-cluster environment (HP-UX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.
Use the ioscan -f command, and verify that the rows shown in the example are displayed. If these rows are not displayed, check the host adapter installation (hardware and driver installation) or the host configuration.

Example
# ioscan -f
Class    I  H/W Path        Driver     S/W State  H/W Type   Description
...
fc       0  8/12            fcT1       CLAIMED    INTERFACE  HP Fibre ...
lan      1  8/12.5          fcT1_cntl  CLAIMED    INTERFACE  HP Fibre ...
fcp      0  8/12.8          fcp        CLAIMED    INTERFACE  FCP Proto...
ext_bus  2  8/12.8.0.255.   fcpdev     CLAIMED    INTERFACE  FCP Devic...
• x = SCSI bus instance number
• y = SCSI target ID
• z = LUN
• c stands for controller
• t stands for target ID
• d stands for device
The numbers x, y, and z are hexadecimal.

Table 4 Device file name example (HP-UX)

SCSI bus instance number   Hardware path   SCSI TID   LUN   File name
00                         14/12.6.0       6          0     c6t0d0
00                         14/12.6.1       6          1     c6t0d1

5. Verify that the SCSI TIDs correspond to the assigned port address for all connected ports (see the mapping tables in "SCSI TID map for Fibre Channel adapters (HP-UX)" (page 98) for values).
Verifying the device files and drivers The device files for new devices are usually created automatically during HP-UX startup. Each device must have a block-type device file in the /dev/dsk directory and a character-type device file in the /dev/rdsk directory. However, some HP-compatible systems do not create the device files automatically. If verification shows that the device files were not created, follow the instructions in “Creating the device files” (page 19).
Example
# insf -e
insf: Installing special files for mux2 instance 0 address 8/0/0
:
:
#

Failure of the insf -e command indicates a SAN problem. If the device files for the new disk array devices cannot be created automatically, you must create the device files manually using the mknod command as follows:
1. Retrieve the device information you recorded earlier.
2. Construct the device file name for each device, using the device information, and enter the file names in your table.
6. Create the device files for all disk array devices (SCSI disk and multiplatform devices) using the mknod command. Create the block-type device files in the /dev/dsk directory and the character-type device files in the /dev/rdsk directory.

Example
# cd /dev/dsk                            Go to /dev/dsk directory.
# mknod /dev/dsk/c2t6d0 b 31 0x026000    Create block-type file
                                         (file name c2t6d0, b = block-type,
                                         31 = major #, 0x026000 = minor #).
# cd /dev/rdsk                           Go to /dev/rdsk directory.
The physical volumes that make up one volume group can be located either in the same disk array or in other disk arrays. To allow more volume groups to be created, use SAM to modify the HP-UX system kernel configuration. See "Reference information for the HP System Administration Manager (SAM)" (page 131) for details. Newer releases of HP-UX have deprecated the SAM tool and replaced it with the System Management Homepage (SMH) tool. Create a volume group for each new device, using SAM or the equivalent LVM commands.
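If you use the command line instead of SAM, volume group creation follows the standard HP-UX LVM sequence. The following is a minimal sketch, assuming device c6t0d0 and volume group vg06; the minor number 0x060000 is illustrative and must be unique for each volume group.

Example
# mkdir /dev/vg06
# mknod /dev/vg06/group c 64 0x060000    Create the group device file (major # 64).
# pvcreate /dev/rdsk/c6t0d0              Initialize the disk as an LVM physical volume.
# vgcreate /dev/vg06 /dev/dsk/c6t0d0     Create the volume group on the physical volume.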
Use vgdisplay -v to verify that the volume group was created correctly. The -v option displays the detailed volume group information.
To create logical volumes:
1. Use the lvcreate -L command to create a logical volume. Specify the volume size (in megabytes) and the volume group for the new logical volume. HP-UX assigns the logical volume numbers automatically (lvol1, lvol2, lvol3).
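For example, to create a 100 MB logical volume in volume group vg06 (the size and volume group name are illustrative):

Example
# lvcreate -L 100 /dev/vg06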
Creating the file systems
Create the file system for each new logical volume on the disk array. The default file system types are:
• HP-UX OS version 10.20 = hfs or vxfs, depending on entry in the /etc/defaults/fs file.
• HP-UX OS version 11.0 = vxfs
• HP-UX OS version 11i = vxfs
To create file systems:
1. Use the newfs command to create the file system using the logical volume as the argument.
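For example, to create a vxfs file system on the first logical volume of vg06, run newfs against the character (raw) device file (the names are illustrative):

Example
# newfs -F vxfs /dev/vg06/rlvol1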
# pvchange -t 60 /dev/dsk/c0t6d0
Physical volume "/dev/dsk/c0t6d0" has been successfully changed.
Volume Group configuration for /dev/vg06 has been saved in /etc/lvmconf/vg06.conf.

3. Verify that the new I/O timeout value is 60 seconds using the pvdisplay command:

Example
# pvdisplay /dev/dsk/c0t6d0
--- Physical volumes ---
PV Name                 /dev/dsk/c0t6d0
VG Name                 /dev/vg06
PV Status               available
:
Stale PE                0
IO Timeout (Seconds)    60    [New I/O timeout value]
:
/dev/vg06/lvol1    2348177    9    2113350    0%    /AHPMD-LU00

4. As a final verification, perform some basic UNIX operations (for example, file creation, copying, and deletion) on each logical device to make sure that the devices on the disk array are fully operational.

Example
# cd /AHPMD-LU00
# cp /bin/vi /AHPMD-LU00/vi.back1
# ls -l
drwxr-xr-t  2 root  root    8192 Mar 15 11:35 lost+found
-rwxr-xr-x  1 root  sys   217088 Mar 15 11:41 vi.back1
# cp vi.back1 vi.
3 Windows
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 28)
2. Installing and configuring the host
3. Connecting the disk array
In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For more information about LUN mapping, see the HP StorageWorks P9000 Provisioning for Open Systems User Guide or Remote Web Console online help. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
Table 9 Host group modes (options) Windows

Host Group Mode: 6
Function: Parameter setting failure for TPRLO
Default: Inactive
Comments: When using the Emulex FCA in the Windows environment, the parameter setting for TPRLO failed. After receiving TPRLO and FCP_CMD, PRLO will respond, respectively, when HostMode=0x0C/0x2C and HostModeOption=0x06. (MAIN Ver. 50-03-14-00/00 and later)

Host Group Mode: 13
Function: SIM report at link failure.
Fabric zoning and LUN security By using appropriate zoning and LUN security, you can connect various servers with various operating systems to the same switch and fabric with the following restrictions: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.

Verifying the host recognizes array devices
1. Log into the host as an administrator.
2. Right-click My Computer, and then click Manage.
3. Click Device Manager.
4. Click SCSI and RAID controllers.
Creating and formatting disk partitions
Dynamic Disk is supported with no restrictions for a disk array connected to a Windows 2000/2003/2008 system. For more information, see Microsoft's online help.

CAUTION: Do not partition or create a file system on a device that will be used as a raw device (for example, some database applications use raw devices).

1. In the Disk Management main window, select the unallocated area for the SCSI disk you want to partition.
4 NonStop You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. The HP NonStop operating system runs on HP S-series and Integrity NonStop servers to provide continuous availability for applications, databases, and devices.
two different clusters of the disk array, and give each host group access to separate but identical LUNs. This arrangement minimizes the shared components among the four paths, providing both mirroring and greater failure protection.

NOTE: For the highest level of availability and fault tolerance, HP recommends the use of two P9000 disk arrays, one for the Primary disks and one for the Mirror disks.

This process is also called "LUN mapping."
Table 11 System option modes (NonStop)

System option mode   Minimum microcode version (P9500)
142                  Available from initial release
454                  Available from initial release
685                  N/A
724                  N/A

HP also recommends setting host group mode 13 with P9000 storage systems that are connected to HP NonStop systems. Contact your HP storage service representative for information about these configuration options.
Table 12 Fabric zoning and LUN security settings (NonStop)

Environment         Fabric Zoning   LUN Security
Single node SAN     Not required    Must be used
Multiple node SAN   Not required    Must be used

Connecting the disk array
The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
5 OpenVMS
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 38)
2. Installing and configuring the host
3. Connecting the disk array
4. Configuring disk array devices
NOTE: As illustrated in “Microprocessor port sharing (OpenVMS)” (page 39), there is no microprocessor sharing with 8-port module pairs. With 16- and 32-port module pairs, alternating ports are shared. Table 13 Microprocessor port sharing (OpenVMS) Channel adapter Model Description Nr.
For all systems in an OpenVMS cluster: $run sys$system:sysman sysman> set environment/cluster sysman> io autoconfigure/log 5. Verify the online status of the P9000 LUNs, and confirm that all expected LUNs are shown online. Setting the host mode for the disk array ports After the disk array is installed, you must set the host mode for each host group that is configured on a disk array port to match the host OS. Set the host mode using LUN Manager in Remote Web Console or Command View Advanced Edition.
If host mode option 33 is not set, then the default behavior is to present the volumes to the OpenVMS host by calculating the decimal value of the hexadecimal CU:LDEV value. That calculated value will be the value of the DGA device number. CAUTION: • The UUID (or by default the decimal value of the CU:LDEV value) must be unique across the SAN for the OpenVMS host and/or OpenVMS cluster. No other SAN storage controllers should present the same value. If this value is not unique, data loss will occur.
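For example, a volume with CU:LDEV 01:00 corresponds to hexadecimal 0100, which is 256 in decimal, so by default that volume is presented to the OpenVMS host as device $1$DGA256.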
Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
Configuring disk array devices Configure the disk array devices in the same way you would configure any new disk on the host server. Creating scripts to configure all devices at once could save you considerable time.
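Before the verification steps that follow, a new volume is typically initialized, mounted, and given a top-level directory. The following is a minimal DCL sketch, assuming device $1$DGA100 and a volume label of USER (both illustrative):

Example
$ initialize $1$dga100: user
$ mount/system $1$dga100: user
$ create/directory $1$dga100:[user]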
4. Verify that this directory exists:

Example
$ show default
$1$dga100:[user]

If the user directory does not exist, OpenVMS returns an error.

5. Create a test user file:

Example
$ create test.txt
this is a line of text for the test file test.txt
[Control-Z]

The create command creates a file with data entered from the terminal. Control-Z terminates the data input.

6. Verify that the file was created:

Example
$ directory

Directory $1$DGA100:[USER]

TEST.TXT;1

Total of 1 file.

7.
6 VMware
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 46)
2. Installing and configuring the host
3. Connecting the disk array
4. Setting up virtual machines and guest operating systems
In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
Clustering is the organization of multiple servers into groups. Within a cluster, each server is a node. Multiple clusters compose a multi-cluster environment. The following example shows a multi-cluster environment with three clusters, each containing two nodes. The nodes share access to the disk array. Figure 5 Multi-cluster environment (VMware) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems).
Setting up virtual machines (VMs) and guest operating systems

Setting the SCSI disk timeout value for Windows VMs
To ensure Windows VMs (Windows 2000 and Windows Server 2003) wait at least 60 seconds for delayed disk operations to complete before generating errors, you must set the SCSI disk timeout value to 60 seconds by editing the registry of the guest operating system as follows:

CAUTION: Before editing, back up the registry file.

1. Select Start > Run, enter regedit.exe, and click OK.
2. Locate the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk.
3. Set the TimeOutValue entry to 60 (decimal).
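The same setting can also be expressed as a registry file and imported with regedit. The following is a minimal sketch, assuming the standard Disk service key; dword 0000003c is 60 decimal:

Example
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:0000003c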
7 Linux
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 50)
2. Installing and configuring the host
3. Connecting the disk array
4. Configuring disk array devices
This process is also called "LUN mapping."
Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host.
Table 18 Fabric zoning and LUN security settings (Linux)

Environment: Standalone SAN (non-clustered), Clustered SAN, or Multi-Cluster SAN
OS Mix homogeneous (a single OS type present in the SAN): Fabric Zoning not required
OS Mix heterogeneous (more than one OS type present in the SAN): Fabric Zoning required
LUN Security: must be used when multiple hosts or cluster nodes connect through a shared port

Connecting the disk array
The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
3. Verify that the system recognizes the disk array partitions by viewing the /proc/partitions file.
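For example (the device name and sizes shown are illustrative; each LU appears as an sd device, with its partitions listed beneath it):

Example
# cat /proc/partitions
major minor  #blocks  name
   8    48  14680064  sdd
   8    49  14680064  sdd1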
4. Select w to write the partition information to disk and complete the fdisk command.
5. Other commands that you might want to use include:
   d to remove partitions
   q to stop a change
6. Repeat steps 1-5 for each device.

Creating the file systems
The supported file system for Linux is ext3.

Creating file systems with ext3
1. Enter mkfs -t ext3 /dev/device_name.

Example
# mkfs -t ext3 /dev/sdd

2. Repeat step 1 for each device on the disk array.
1. Enter mkdir /mnt/mount_point.

Example
# mkdir /mnt/A5700F_LU00

2. Repeat step 1 for each device on the disk array.

Creating the mount table
Add the new devices to the /etc/fstab file to specify the automount parameters for each device.
1. Edit the /etc/fstab file to add one line for each device to be automounted. Each line of the file contains: (A) device name, (B) mount point, (C) file system type ("ext3"), (D) mount options ("defaults"), (E) enhance parameter ("1"), and (F) fsck pass ("2").
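For example, one possible entry, assuming the partition /dev/sdd1 from the earlier fdisk step and the mount point created above:

Example
/dev/sdd1   /mnt/A5700F_LU00   ext3   defaults   1   2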
8 Solaris
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 57)
2. Installing and configuring the host
3. Connecting the disk array
4. Configuring disk array devices
5. Configuring Veritas Volume Manager
This process is also called "LUN mapping."
Table 19 Host group modes (options) Solaris (continued)

Host Group Mode: 13
Function: SIM report at link failure
Default: Inactive
Comments: Optional. This mode is common to all host platforms. Select HMO 13 to enable SIM notification when the number of link failures detected between ports exceeds the threshold.
Installing and configuring the FCAs Install and configure the FCA driver software and setup utilities according to the manufacturer's instructions. Configuration settings specific to the P9000 array differ depending on the manufacturer. Specific configuration information is detailed in the following sections. WWN The FCA configuration process might require you to enter the WWN for the array port(s) to which the FCA connects.
3. To set the queue depth, add the following to the /etc/system file (for the x value, see Table 20 (page 60)):

set sd:sd_max_throttle = x
set ssd:ssd_max_throttle = x     (for Oracle generic FCA)

Example:
set sd:sd_max_throttle = 16      <-- Add this line to /etc/system
set ssd:ssd_max_throttle = 16    <-- Add this line to /etc/system (for Oracle generic FCA)

Configuring FCAs with the Oracle SAN driver stack
Oracle branded FCAs are only supported with the Oracle SAN driver stack.
• For Solaris 8/9, perform a reconfiguration reboot of the host to implement changes to the configuration file. For Solaris 10, use the stmsboot command which will perform the modifications and then initiate a reboot. • For Solaris 8/9, after you have rebooted and the LDEV has been defined as a LUN to the host, use the cfgadm command to configure the controller instances for SAN connectivity. The controller instance (c#) may differ between systems.
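A minimal sketch of the Solaris 8/9 sequence, assuming controller instance c2 (instance numbers vary between systems):

Example
# cfgadm -al                 List attachment points and their conditions.
# cfgadm -c configure c2     Configure the controller instance for SAN connectivity.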
name="sd" parent="lpfc" target=30 lun=1; name="sd" parent="lpfc" target=30 lun=2; • Perform a reconfiguration reboot to implement the changes to the configuration files. • If LUNs have been preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after configuring LUNs as explained in “Defining the paths” (page 14).
Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers. Clustering is the organization of multiple servers into groups. Within a cluster, each server is a node. Multiple clusters compose a multi-cluster environment. The following example shows a multi-cluster environment with three clusters, each containing two nodes. The nodes share access to the disk array.
Pre-configure additional LUNs (not yet made available) to avoid unnecessary reboots. See “Installing and configuring the FCAs” (page 60) for individual driver requirements. Verifying host recognition of disk array devices Verify that the host recognizes the disk array devices as follows: 1. Use format to display the device information. 2. Check the list of disks to verify the host recognizes all disk array devices.
6. 7. 8. If you are not using Veritas Volume Manager or Solaris Volume Manager with named disk sets, use the partition command to create or adjust the slices (partitions) as necessary. Repeat this labeling procedure for each new device (use the disk command to select another disk). When you finish labeling the disks, enter quit or press Ctrl-D to exit the format utility. For further information, see the System Administration Guide - Devices and File Systems at: http://www.oracle.
VxVM 3.2 and later use ASL to configure the DMP feature and other parameters. The ASL is required for all arrays. With VxVM 5.0 or later, the ASL is delivered with the Volume Manager and does not need to be installed separately. With VxVM 4.x versions, you need to download and install the ASL from the Symantec/Veritas support website (http://support.veritas.com):
1. Select Volume Manager for Unix/Linux as the product, and search for the P9000 array model with Solaris as the platform.
2.
9 IBM AIX
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 68)
2. Installing and configuring the host
3. Connecting the disk array
• Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.
Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host. Notify your HP representative if you install storage agnostic software (such as backup or cluster software) that might require specific settings. Configuring the Fibre Channel ports Configure the disk array Fibre Channel ports by using Command View Advanced Edition or Remote Web Console.
Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: • Storage port zones can overlap if more than one operating system needs to share an array port. • Heterogeneous operating systems can share an array port if you set the appropriate host group and mode. All others must connect to a dedicated array port.
Configuring disk array devices Disks in the disk array are configured using the same procedure for configuring any new disk on the host. This includes the following procedures: • “Changing the device parameters” (page 72) • “Assigning the new devices to volume groups” (page 74) • “Creating the journaled file systems” (page 76) • “Mounting and verifying the file systems” (page 78) Creating scripts to configure all devices at once can save you considerable time.
For example, enter the following command to change the queue depth for the device hdisk3:

# chdev -l hdisk3 -a queue_depth='2'

4. Verify that the parameters for all devices were successfully changed. For example, enter the following command to verify the parameter change for the device hdisk3:

# lsattr -E -l hdisk3

5. Repeat these steps for each OPEN-x device on the disk array.
Use QERR Bit                       [yes]
Device CLEARS its Queue on Error   [no]
READ/WRITE time out value          [60]
START unit time out value          [60]
REASSIGN time out value            [120]
APPLY change to DATABASE only      no

7. Repeat these steps for each OPEN-x device on the disk array.

Assigning the new devices to volume groups
Assign the new devices to volume groups using the AIX system's Logical Volume Manager (accessed from within SMIT). This operation is not required when the volumes are used as raw devices.
List All Volume Groups Add a Volume Group Set Characteristics of a Volume Group List Contents of a Volume Group Remove a Volume Group Activate a Volume Group Deactivate a Volume Group Import a Volume Group Export a Volume Group Mirror a Volume Group *1 Unmirror a Volume Group *1 Synchronize LVM Mirrors *1 Back Up a Volume Group Remake a Volume Group List Files in a Volume Group Backup Restore Files in a Volume Group Backup 6.
Creating the journaled file systems Create the journaled file systems using SMIT. This operation is not required when the volumes are used as raw devices. The largest file system permitted in AIX is 64 GB. 1. Start SMIT. 2. Select System Storage Management (Physical & Logical Storage). Example System Management Move cursor to desired item and press Enter.
6. Select Add a Journaled File System. Example Journaled File System Move cursor to desired item and press Enter. Add a Journaled File System Add a Journaled File System on a Previously Defined Logical Volume Change / Show Characteristics of a Journaled File System Remove a Journaled File System Defragment a Journaled File System 7. Select Add a Standard Journaled File System. Example Add a Journaled File System Move cursor to desired item and press Enter.
10. Press Enter to create the Journaled File System. The Command Status screen appears. Wait for "OK" to appear on the Command Status line.
11. To continue creating Journaled File Systems, press the F3 key until you return to the Add a Journaled File System screen. Repeat steps 2 through 10 for each Journaled File System to be created.
12. To exit SMIT, press the F10 key.
5. Use the df command to verify that the file systems have successfully automounted after a reboot. Any file systems that were not automounted can be set to automount using the SMIT Change a Journaled File System screen. If you are using HACMP or HAGEO, do not set the file systems to automount.
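For example, assuming a journaled file system defined in /etc/filesystems with the mount point /hp_lu00 (the name is illustrative):

Example
# mount /hp_lu00
# df -k /hp_lu00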
10 Citrix XenServer Enterprise
You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative.

Installation roadmap
Perform these actions to install and configure the disk array:
1. "Installing and configuring the disk array" (page 80)
2. Installing and configuring the host
3. Connecting the disk array
4. Configuring disk array devices
• Creating host groups • Assigning Fibre Channel adapter WWNs to host groups • Mapping volumes (LDEVs) to host groups (by assigning LUNs) In Command View Advanced Edition, LUN mapping includes: • Configuring ports • Creating storage groups • Mapping volumes and WWN/host access permissions to the storage groups For details, see the HP StorageWorks P9000 Provisioning for Open Systems User Guide.
Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers.
Table 26 Fabric zoning and LUN security settings (Linux)

Environment: Standalone SAN (non-clustered), Clustered SAN, or Multi-Cluster SAN
OS Mix homogeneous (a single OS type present in the SAN): Fabric Zoning not required
OS Mix heterogeneous (more than one OS type present in the SAN): Fabric Zoning required
LUN Security: must be used when multiple hosts or cluster nodes connect through a shared port

Connecting the disk array
The HP service representative performs the following steps to connect the disk array to the host:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Verifying the ready status of the disk array and peripherals.
host1   qlogic   QLogic HBA Driver   1
host0   qlogic   QLogic HBA Driver   0
[root@cb-xen-srv31 ~]#

Configuring disk array devices
Disks in the disk array are configured using the same procedure for configuring any new disk on the host. This includes the following procedures:
1. Configuring multipathing
2. Creating a Storage Repository
3. Adding a Virtual Disk to a domU
3. Click Enter Maintenance Mode . 4. Select the General tab and then click Properties.
5. Select the Multipathing tab, check the Enable multipathing on this server check box, and then click OK. 6. Right-click the domU that was placed in maintenance mode and select Exit Maintenance Mode.
7. Open a command line interface to the dom0 and edit the /etc/multipath-enable.conf file with the appropriate array settings.

NOTE: HP recommends that you use the RHEL 5.x device mapper config file and multipathing parameter settings available on HP.com. Use only the array-specific settings, not the multipath.conf file bundled into the device mapper kit. All array host modes for Citrix XenServer are the same as for Linux.

8.
3. Select the type of virtual disk storage for the storage array and then click Next.

NOTE: For Fibre Channel, select Hardware HBA.
4. Complete the template and then click Finish.

Adding a Virtual Disk to a domU
After the Storage Repository has been created on the dom0, the vdisk from the Storage Repository can be assigned to the domU. This section describes how to pass vdisks to the domU. HP ProLiant Virtual Console can be used with HP Integrated Citrix XenServer Enterprise Edition to complete this process.
1. Select the domU. 2. Select the Storage tab and then click Add.
3. Type a name, description, and size for the new disk and then click Add. Adding a dynamic LUN To add a LUN to a dom0 dynamically, follow these steps. 1. Create and present a LUN to a dom0 from the array. 2. Enter the following command to rescan the sessions that are connected to the arrays for the new LUN: xe sr-probe type=lvmohba. NOTE: To create a new Storage Repository, see “Creating a Storage Repository” (page 87).
11 Troubleshooting This chapter includes resolutions for various error conditions you may encounter. If you are unable to resolve an error condition, ask your HP support representative for assistance.
Table 27 Error conditions (continued)

Error condition: The host detects a parity error.
Recommended action: Check the FCA and make sure it was installed properly. Reboot the host.

Error condition: The host hangs, or devices are declared and the host hangs.
Recommended action: Make sure there are no duplicate disk array TIDs and that disk array TIDs do not conflict with any host TIDs.
12 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
• 1 GB (gigabyte) = 1,000^3 bytes
• 1 TB (terabyte) = 1,000^4 bytes
• 1 PB (petabyte) = 1,000^5 bytes
• 1 EB (exabyte) = 1,000^6 bytes

HP P9000 storage systems use the following values to calculate logical storage capacity values (logical devices):
• 1 block = 512 bytes
• 1 KB (kilobyte) = 1,024 (2^10) bytes
• 1 MB (megabyte) = 1,024^2 bytes
• 1 GB (gigabyte) = 1,024^3 bytes
• 1 TB (terabyte) = 1,024^4 bytes
• 1 PB (petabyte) = 1,024^5 bytes
• 1 EB (exabyte) = 1,024^6 bytes

Conventions for storage
A Path worksheet Worksheet Table 28 Path worksheet LDEV (CU:LDEV) (CU = control unit) 0:00 0:01 0:02 0:03 0:04 0:05 0:06 0:07 0:08 0:09 0:10 96 Path worksheet Device Type SCSI Bus Number Path 1 Alternate Paths TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN: TID: TID: TID: LUN: LUN: LUN:
B Path worksheet (NonStop) Worksheet Table 29 Path worksheet (NonStop) LUN # CU:LDEV ID Array Group Emulation type Array Array Port Port WWN NSK Server NSK SAC name (G-M-S-S) NSK SAC WWN Example: 00 01:00 1-11 OPEN-E 1A /OSDNSK3 110-2-3-1 50060B00 $XPM001 50060E80 0437B000 NSK volume name Path P 002716AC Worksheet 97
C Disk array supported emulations HP-UX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 31 Emulation specifications (HP-UX) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote 7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote 7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96
OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = (capacity (MB) specified by user) × 1024/720 × n

Example
For a CVS LUSE volume with capacity = 37 MB and n = 4:
# of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212

OPEN-V: The number of cylinders for a CVS LUSE volume = (capacity (MB) specified by user) × 16/15 × n

Example
For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4:
# of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212
Table 32 LUSE device parameters (HP-UX) (continued) Device type OPEN-L*n Physical extent size (PE) Max physical extent size (MPE) n = 11 8 19102 n = 12 8 20839 n = 13 8 22576 n = 14 8 24312 n = 15 8 26049 n = 16 8 27786 n = 17 8 29522 n = 18 8 31259 n = 19 8 32995 n = 20 8 34732 n = 21 8 36469 n = 22 8 38205 n = 23 8 39942 n = 24 8 41679 n = 25 8 43415 n = 26 8 45152 n = 27 8 46889 n = 28 8 48625 n = 29 8 50362 n = 30 8 52098 n = 31 8 53835
SCSI TID map for Fibre Channel adapters When an arbitrated loop (AL) is established or reestablished, the port addresses are assigned automatically to prevent duplicate TIDs. With the SCSI over Fibre Channel protocol (FCP), there is no longer a need for target IDs in the traditional sense. SCSI is a bus-oriented protocol requiring each device to have a unique address because all commands go to all devices. For Fibre Channel, the AL-PA is used instead of the TID to direct packets to the desired destination.
Windows This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 35 Emulation specifications (Windows) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote 5 512 Footnote 6 15 96 Footnote 7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Foot
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
NonStop This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
OpenVMS This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 39 Emulation specifications (OpenVMS) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors Capacity MB* per track 4 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96
OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.
VMware This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 41 Emulation specifications (VMware) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-E CVS SCSI disk OPEN-E-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-V SCSI disk OPEN-V Footnote5 512 Footnote6 15 128 Footnote7
For an OPEN-3 CVS volume with capacity = 37 MB: # of cylinders = 37 × 1024/720 = 52.62 (rounded up to next integer) = 53 cylinders OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.
Linux This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 43 Emulation specifications (Linux) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote5 512 Note 6 15 96 Footnote7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Footnote5 512 Note 6 15 96 F
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
Solaris This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 45 Emulation specifications (Solaris) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Footnote5 512 Footnote6 15 96 Footnote7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Footnote5 512 Footno
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
IBM AIX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
Table 47 Emulation specifications (IBM AIX) (continued) Emulation1 Category2 Product name3 Blocks Sector size # of (512 bytes) (bytes) cylinders Heads Sectors per track Capacity MB* 4 OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-9*n CVS SCSI disk OPEN-9*n-CVS Note 5 512 Footnote6 15 96 Footnote7 OPEN-E*n CVS SCSI disk OPEN-E*n-CVS Note 5 512 Footnote6 15 96
For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 16/15 × n Example For an OPEN-V CVS LUSE volume with capacity = 49 MB and n = 4: # of cylinders = 49 × 16/15 × 4 = 52.26 × 4 = 53 × 4 = 212 7 The capacity of an OPEN-3/8/9/E CVS volume is specified in MB, not number of cylinders.
Table 48 OPEN-3 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-3 OPEN-3*n (n=2 to 36) OPEN-3 CVS OPEN-3 CVS*n (n=2 to 36) pf f partition size Set optionally Set optionally Set optionally Set optionally pg g partition size Set optionally Set optionally Set optionally Set optionally ph h partition size Set optionally Set optionally Set optionally Set optionally ba a partition block size 8,192 8,192 8,192 8,192 bb b partition block size 8,192 8,
Table 49 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 to 36) OPEN-8 CVS OPEN-8 CVS*n (n=2 to 36) oc c partition offset (Starting block in c partition) 0 0 0 0 od d partition offset (Starting block in d partition) Set optionally Set optionally Set optionally Set optionally oe e partition offset (Starting block in e partition) Set optionally Set optionally Set optionally Set optionally of f partition offset (Starting block in f
Table 49 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter fh h partition fragment size OPEN-8 OPEN-8*n (n=2 to 36) OPEN-8 CVS OPEN-8 CVS*n (n=2 to 36) 1,024 1,024 1,024 1,024 See “Notes for disk parameters” (page 126).
Table 50 OPEN-9 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-9 OPEN-9*n (n=2 to 36) OPEN-9 CVS OPEN-9 CVS*n (n=2 to 36) ba a partition block size 8,192 8,192 8,192 8,192 bb b partition block size 8,192 8,192 8,192 8,192 bc c partition block size 8,192 8,192 8,192 8,192 bd d partition block size 8,192 8,192 8,192 8,192 be e partition block size 8,192 8,192 8,192 8,192 bf f partition block size 8,192 8,192 8,192 8,192 bg g partition
Table 51 OPEN-E parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-E OPEN-E*n (n=2 to OPEN-E CVS 36) OPEN-E CVS*n (n=2 to 36) oe e partition offset (Starting block in e partition) Set optionally Set optionally Set optionally Set optionally of f partition offset (Starting block in f partition) Set optionally Set optionally Set optionally Set optionally og g partition offset (Starting block in g partition) Set optionally Set optionally Set optionally Set opti
pc = nc * nt * ns

The nc of OPEN-x CVS corresponds to the capacity specified by SVP or remote console. The CVS size of OPEN-x is specified by capacity (megabytes), not by number of cylinders. The number of cylinders of an OPEN-x CVS volume can be obtained by the following calculation, where ⌈ ⌉ means round up to the next integer:

The number of cylinders = ⌈(specified capacity in megabytes from SVP or remote console) × 1,024 / 720⌉
Table 52 Byte information (IBM AIX) (continued) Category LU product name OPEN-3/8/9 CVS OPEN-3 CVS OPEN-8 CVS OPEN-9 CVS OPEN-E CVS OPEN-K 4096 CVS OPEN-3/8/9*n CVS 35 to 64800 4096 64801 to 126000 8192 126001 and higher 16384 OPEN-E OPEN-E*2-OPEN-E*4 4096 OPEN-E*5 to OPEN-E*9 8192 OPEN-E*10 to OPEN-E*18 16384 OPEN-L 4096 OPEN-L*2 to OPEN-L*3 8192 OPEN-L*4 to OPEN-L*7 16384 OPEN-3 CVS, OPEN-9 CVS, OPEN-E CVS, OPEN-V CVS 4096 OPEN-E OPEN-L OPEN-x CVS 128 Disk array supported emul
Physical partition size table Table 53 Physical partition size (IBM AIX) Category LU product name Physical partition size in megabytes OPEN-3 OPEN-3 4 OPEN-3*2 to OPEN-3*3 8 OPEN-3*4 to OPEN-3*6 16 OPEN-3*7 to OPEN-3*13 32 OPEN-3*14 to OPEN-3*27 64 OPEN-3*28 to OPEN-3*36 128 OPEN-8 8 OPEN-8*2 16 OPEN-8*3 to OPEN-8*4 32 OPEN-8*5 to OPEN-8*9 64 OPEN-8*10 to OPEN-8*18 128 OPEN-8*19 to OPEN-8*36 256 OPEN-9 8 OPEN-9*2 16 OPEN-9*3 to OPEN-9*4 32 OPEN-9*5 to OPEN-9*9 64 OPEN-9*1
Table 53 Physical partition size (IBM AIX) (continued) Category 130 Disk array supported emulations LU product name Physical partition size in megabytes 259201 - 518400 512 518401 and higher 1024
D Reference information for the HP System Administration Manager (SAM) The HP System Administration Manager (SAM) is used to perform HP-UX system administration functions, including: • Setting up users and groups • Configuring the disks and file systems • Performing auditing and security activities • Editing the system kernel configuration This appendix provides instructions for: • Using SAM to configure the disk devices • Using SAM to set the maximum number of volume groups Configuring the devi
To configure the newly-installed disk array devices: 1. Select Disks and File Systems, then select Disk Devices. 2. 3. Verify that the new disk array devices are displayed in the Disk Devices window. Select the device to configure, select the Actions menu, select Add, and then select Using the Logical Volume Manager. In the Add a Disk Using LVM window, select Create... or Extend a Volume Group.
E HP Clustered Gateway deployments Windows The HP Cluster Gateway and HP Scalable NAS software both use the HP PolyServe software as their underlying clustering technology and both have similar requirements for the P9000 disk array. They have both been tested with the P9000 disk arrays and this appendix details configuration requirements specific to P9000 deployments using HP PolyServe Software on Windows.
For details on importing and deporting disks, dynamic volume creation and configuration, and file system creation and configuration, see the HP StorageWorks Scalable NAS File Serving Software Administration Guide . Linux The HP Cluster Gateway and HP Scalable NAS software both use the HP PolyServe software as their underlying clustering technology and both have similar requirements for the P9000 disk array.
of RAID Manager, with both local and remote HORCM instances running on each server, and with all file system LUNs (P-VOLs) controlled by the local instance and all snapshot V-VOLs (S-VOLs) controlled by the remote instance. Dynamic volume and file system creation When the LUNs have been presented to all nodes in the cluster, import them into the cluster using the GUI or the mx command.
Glossary AL-PA Arbitrated loop physical address. A 1-byte value that the arbitrated loop topology uses to identify the loop ports. This value becomes the last byte of the address identifier for each public port on the loop. command device A volume in the disk array that accepts Continuous Access, Business Copy, or P9000 for Business Continuity Manager control operations, which are then executed by the array. CU Control unit. CVS Custom volume size.
port A physical connection that allows data to pass between a host and a disk array. R-SIM Remote service information message. SIM Service information message. SNMP Simple Network Management Protocol. A widely used network monitoring and control protocol. Data is passed from SNMP agents, which are hardware and/or software processes reporting activity in each network device (hub, router, bridge, and so on) to the workstation console used to oversee the network.
Index A Array Manager, 10, 13, 28, 34, 35, 38, 39, 46, 50, 57, 68, 80 auto-mount parameters, setting, 27 B Business Copy, 12 C clustering, 15, 36, 42, 48, 52, 64, 70, 82 command device(s) one LDEV as a, 12 RAID Manager, 12 Command View Advanced Edition, 9, 12, 13, 15, 28, 30, 34, 35, 36, 41, 46, 47, 50, 51, 57, 59, 68, 70, 80, 81 configuration device, 18, 37, 44, 54, 65, 72, 84 emulation types, 11 recognition, 17, 53, 83 using SAM, 131 disk array, 13, 28, 34, 46, 50, 57, 68, 80 FCAs, 15, 42, 47, 52, 60, 7
Oracle, 61 QLogic, 63 supported, 60 verify driver installation, 53, 83 verifying configuration, 63 FCSA(s) configuring, 36 supported, 36 features, disk array, 9 Fibre Channel adapters, configuring, 30 adapters, SCSI TID map, 9, 102 connection speed, 10 interface, 10 ports, configuring, 15, 30, 36, 41, 47, 51, 59, 70, 81 supported elements, 10 switches, 43 file system(s) creating, 55, 66, 76 for logical volumes, 25 journaled, 76 mounting, 26, 78 not mounted after rebooting, 92 verify operations, 33 verifying
S SAM (HP System Administrator Manager) configuring devices using, 131 reference information, 131 volume groups, setting maximum number, 132 SCSI disk, 10 SCSI TIP map, 102 security, LUN, 15, 36, 42, 48, 52, 64, 70, 82 server restarting, 53, 83 server support, 9 SIMS, 92 storage capacity, 9 storage capacity values conventions, 94 Subscriber's Choice, HP, 94 system option mode, setting, 30, 41, 47, 51, 59, 70, 81 T technical support HP, 94 troubleshooting, 92 error conditions, 92 U UNIX, supported versions