HP P63x0/P65x0 Enterprise Virtual Array User Guide

Abstract

This document describes the hardware and general operation of the P63x0/P65x0 EVA.
© Copyright 2011, 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
1 P63x0/P65x0 EVA hardware

The P63x0/P65x0 EVA contains the following components:

• EVA controller enclosure — Contains HSV controllers, power supplies, cache batteries, and fans. Available in FC and iSCSI options.

NOTE: Compared to older models, the HP P6350 and P6550 employ newer batteries and a performance-enhanced management module. They require XCS Version 11000000 or later on the P6350 and P6550 and HP P6000 Command View Version 10.1 or later on the management module.
Rear view
1. Power supply 1
2. Power supply 2
3. Fan 1
4. I/O module A
5. I/O module B
6. Fan 2
7. UID push button and LED
8. Enclosure status LEDs
9. Power push button and LED

Drive bay numbering

Disk drives mount in bays on the front of the enclosure. Bays are numbered sequentially from top to bottom and left to right. Bay numbers are indicated on the left side of each drive bay.

Large Form Factor disk enclosure chassis

Front view
1. Rack-mounting thumbscrew
2.
3. UID push button and LED
Rear view
1. Power supply 1
2. Power supply 2
3. Fan 1
4. I/O module A
5. I/O module B
6. Fan 2
7. UID push button and LED
8. Enclosure status LEDs
9. Power push button and LED

Drive bay numbering

Disk drives mount in bays on the front of the enclosure. Bays are numbered sequentially from top to bottom and left to right. A drive-bay legend is included on the left bezel.

Disk drives

Disk drives are hot-pluggable. A variety of disk drive models are supported for use.
1. Locate/Fault LED:
• Blue, slow blinking (0.5 Hz): Locate drive
• Amber, solid: Drive fault

2. Status LED (green):
• Blinking (1 Hz): Drive is spinning up or down and is not ready
• Fast blinking (4 Hz): Drive activity
• Solid: Ready for activity

Disk drive blanks

To maintain the proper enclosure air flow, a disk drive or a disk drive blank must be installed in each drive bay. The disk drive blank maintains proper airflow within the disk enclosure.
Unit identification (UID) button

The unit identification (UID) button helps locate an enclosure and its components. When the UID button is activated, the UIDs on the front and rear of the enclosure are illuminated.

NOTE: A remote session from the management utility can also illuminate the UID.

• To turn on the UID light, press the UID button. The UID light on the front and the rear of the enclosure will illuminate solid blue. (The UIDs on cascaded storage enclosures are not illuminated.)
Fan module LED

One bi-color LED provides module status information:
• Off: No power
• Green, blinking: The module is being identified
• Green, solid: Normal, no fault conditions
• Amber, blinking: Fault conditions detected
• Amber, solid: Problems detecting the module

I/O module

The I/O module provides the interface between the disk enclosure and the host. Each I/O module has two ports that can transmit and receive data for bidirectional operation.

1.
I/O module LEDs

LEDs on the I/O module provide status information about each I/O port and the entire module.

NOTE: The following image illustrates LEDs on the Small Form Factor I/O module.
Rear power and UID module LEDs
1. UID
2. Health
3. Fault
4.
Unit identification (UID) button

The unit identification (UID) button helps locate an enclosure and its components. When the UID button is activated, the UIDs on the front and rear of the enclosure are illuminated.

NOTE: A remote session from the management utility can also illuminate the UID.

• To turn on the UID light, press the UID button. The UID light on the front and the rear of the enclosure will illuminate solid blue. (The UIDs on cascaded storage enclosures are not illuminated.)
Figure 1 Controller enclosure (front bezel)
1. Enclosure status LEDs
2. Front UID push button

Figure 2 Controller enclosure (front view with bezel removed)
1. Rack-mounting thumbscrew
2. Enclosure product number (PN) and serial number
3. World Wide Number (WWN)
4. Battery 1
5. Battery normal operation LED
6. Battery fault LED
7.
8. Fan 1 normal operation LED
9. Fan 1 fault LED
10. Fan 2
11. Battery 2
12. Enclosure status LEDs
13. Front UID push button
Figure 3 P6000 EVA FC controller enclosure (rear view)
1. Power supply 1
2. Controller 1
3. Management module status LEDs
4. Ethernet port
5. Management module
6. Controller 2
7. Rear UID push button
8.
9. Enclosure power push button
10. Power supply 2
11. DP-A and DP-B, connection to back end (storage)
12. FP1 and FP2, connection to front end (host or SAN)
13. FP3 and FP4, connection to front end (host or SAN)
14. Manufacturing diagnostic port
15. Controller status and fault LEDs
Figure 5 P6000 EVA iSCSI/FCoE controller enclosure (rear view)
1. Power supply 1
2. Controller 1
3. Management module status LEDs
4. Ethernet port
5. Management module
6. Controller 2
7. Rear UID push button
8. Enclosure status LEDs
9.
10. Power supply 2
11. 10GbE ports 1–2
12. DP-A and DP-B, connection to back-end (storage)
13. Serial port
14. FP3 and FP4, connection to front end (host or SAN)
15. SW Management port
16. Manufacturing diagnostic port
17. Controller status and fault LEDs
Table 1 HSV340/360 controller port status indicators

Fibre Channel host ports:
• Green — Normal operation
• Amber — No signal detected
• Off — No SFP1 detected or the Direct Connect HP P6000 Control Panel setting is incorrect

Fibre Channel device ports:
• Green — Normal operation
• Amber — No signal detected or the controller has failed the port
• Off — No SFP1 detected

1 On copper Fibre Channel cables, the SFP is integrated into the cable connector.
Table 3 Controller status LEDs (continued)

Item 3: Flashing amber indicates a controller termination, or the system is inoperative and attention is required. Solid amber indicates that the controller cannot reboot, and that the controller should be replaced. If both the solid amber and solid blue LEDs are lit, the controller has completed a warm removal procedure, and can be safely swapped.

Item 4, MEZZ: Only used on the FC-iSCSI and iSCSI/FCoE controllers (not on the FC controller).
Table 4 Power supply LED status

Amber:
• The power supply is powered up but not providing output power.
• The power supply is plugged into a running chassis, but is not receiving AC input power (the fan and LED on the supply receive power from the other power supply in this situation).

Green: Normal, no fault conditions

Battery module

Battery modules provide power to the controllers in the enclosure.

Figure 10 Battery module pulled out
1. Green—Normal operation LED
2.
Figure 11 Fan module pulled out
1. Green—Fan normal operation LED
2. Amber—Fan fault LED

Table 6 Fan status indicators

Status indicator (on left, green):
• Solid green: Normal operation.
• Blinking: Maintenance in progress.
• Off: Amber is on or blinking, or the enclosure is powered down.

Fault indicator (on right, amber):
• On: Fan failure. Green will be off. (Green and amber are not on simultaneously except for a few seconds after power-up.)
Reset the iSCSI or iSCSI/FCoE module and boot the primary image

Use a pointed nonmetallic tool to press the maintenance button for two seconds and then release it. The iSCSI or iSCSI/FCoE module responds as follows:
1. The amber MEZZ status LED illuminates once.

NOTE: Holding the maintenance button for more than two seconds but less than six seconds (or until the MEZZ status LED illuminates twice) boots a secondary image, and is not recommended for field use.

2.
3.
Y-cables (Figure 12 (page 30)) are used to connect the P6500 EVA and enable each controller data port to act as two ports. Figure 12 P6500 Y-cable 1. Pull tab (may also be a release bar) 2. Port number label Storage system racks All storage system components are mounted in a rack. Each configuration includes one controller enclosure holding both controllers (the controller pair) and the disk enclosures. Each controller pair and all associated disk enclosures form a single storage system.
http://h18004.www1.hp.com/products/servers/proliantstorage/racks/index.html Power distribution units AC power is distributed to the rack through a dual Power Distribution Unit (PDU) assembly mounted at the bottom rear of the rack (modular PDU) or on the rack (monitored PDU). The modular PDU may be mounted back-to-back either vertically (AC receptacles facing down and circuit breaker switches facing up) or horizontally (AC receptacles facing front and circuit breaker switches facing rear).
A PDU 2 failure: • Disables the power distribution circuit • Removes power from the right side of the PDM pairs • Disables drive enclosures PS 2 • Disables the controller PS 2 PDMs Depending on the rack, there can be up to eight PDMs mounted in the rear of the rack: • The PDMs on the left side of the PDM pairs connect to PDU 1. • The PDMs on the right side of the PDM pairs connect to PDU 2. Each PDM has seven AC receptacles. The PDMs distribute the AC power from the PDUs to the enclosures.
Rack AC power distribution The power distribution in a rack is the same for all variants. The site AC input voltage is routed to the dual PDU assembly mounted in the bottom rear of the rack. Each PDU distributes AC to a maximum of four PDMs mounted in pairs on the left vertical rail (see Figure 14 (page 33)). • PDMs 1–1 through 1–4 connect to receptacles A through D on PDU A.
Moving the rack requires a clear, uncarpeted pathway that is at least 80 cm (31.5 in) wide for the 60.3 cm (23.7 in) wide, 42U rack. A vertical clearance of 203.2 cm (80 in) should ensure sufficient clearance for the 200 cm (78.7 in) high, 42U rack. CAUTION: Ensure that no vertical or horizontal restrictions exist that would prevent rack movement without damaging the rack. Make sure that all four leveler feet are in the fully raised position.
Figure 16 Raising a leveler foot
1. Hex nut
2. Leveler foot

3. Carefully move the rack to the installation area and position it to provide the necessary service areas (see Figure 15 (page 34)).

To stabilize the rack when it is in the final installation location:
1. Use a wrench to lower the foot by turning the leveler foot hex nut clockwise until the caster does not touch the floor.
2. Repeat for the other feet.
3. After lowering the feet, check the rack to ensure it is stable and level.
2 P63x0/P65x0 EVA operation Best practices For useful information on managing and configuring your storage system, see the HP P6300/P6500 Enterprise Virtual Array configuration best practices white paper available at: http://h18006.www1.hp.com/storage/arraywhitepapers.html Operating tips and information Reserving adequate free space To ensure efficient storage system operation, reserve some unallocated capacity, or free space, in each disk group.
Table 7 Failback preference settings (continued)

Otherwise, the units are brought online to Controller 2.

Path A Failover/Failback, Path B Failover/Failback:
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.
• At initial presentation: The units are brought online to Controller 1.
Table 8 Failback settings by operating system (continued)

(supported settings, continued) Path A/B – Failover only; Path A/B – Failover/Failback

OpenVMS. Default behavior: Host follows the unit1. Supported settings: No preference; Path A/B – Failover only; Path A/B – Failover/Failback (recommended)

Oracle Solaris. Default behavior: Host follows the unit1. Supported settings: No preference; Path A/B – Failover only; Path A/B – Failover/Failback

VMware. Default behavior: Host follows the unit1. Supported settings: No preference; Path A/B – Failover only; Path A/B – Failover/Failback

Windows. Default behavior: Failback
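Failback preference can also be changed from a script. The sketch below shows the general SSSU pattern; the PREFERRED_PATH keyword and its value string are assumptions from memory, so confirm the exact syntax in the HP Storage System Scripting Utility Reference before use.

# Hypothetical SSSU session; PREFERRED_PATH keyword/value are assumptions
SELECT MANAGER mgmt-server USERNAME=admin PASSWORD=secret
SELECT SYSTEM "EVA01"
SET VDISK "\Virtual Disks\vd01" PREFERRED_PATH=PATH_A_FAILOVER_FAILBACK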
When creating a virtual disk, one controller is selected to manage the virtual disk. Only this managing controller can issue I/Os to a virtual disk in response to a host read or write request. If a read I/O request arrives on the non-managing controller, the read request must be transferred to the managing controller for servicing.
The transceiver dust caps protect the transceivers from contamination. Do not discard the dust caps.

CAUTION: To avoid damage to the connectors, always install the dust covers or dust caps whenever a transceiver or a fiber cable is disconnected. Remove the dust covers or dust caps from transceivers or fiber cable connectors only when they are connected.
Powering off disk enclosures CAUTION: Be sure that the server controller is the first unit to be powered down and the last to be powered back up. Taking this precaution ensures that the system does not erroneously mark the disk drives as failed when the server is later restarted. It is recommended to perform this action with P6000 Command View (see below). IMPORTANT: If installing a hot-plug device, it is not necessary to power down the enclosure. To power off a disk enclosure: 1.
2. Ensure all power cords are connected to the controller enclosure and disk enclosures.
3. Apply power to the rack PDUs.
4. Apply power to the controller enclosure (rear panel on the enclosure). The disk enclosures will power on automatically.
5. Wait for a solid green status LED on the controller enclosure and disk enclosures (approximately five minutes), and wait (up to five minutes) for the array to complete its startup routine.
3. Select Restart on the iSCSI Controller Shutdown Options window (Figure 17 (page 46)).
Figure 18 Management module
1. Status LEDs
2. Ethernet jack
3. Reset button

Connecting through a public network
1. Initialize the P63x0 EVA or P65x0 EVA storage system using HP P6000 Command View.
2. If it is currently connected, disconnect the public network LAN cable from the back of the management module in the controller enclosure.
3. Press and hold the recessed Reset button (3, Figure 18 (page 44)) for 4 to 5 seconds.
9. Remove the LAN cable to the private network or laptop and reconnect the cable to the public network.
10. From a computer on the public network, browse to https://<new IP>:2373 and log in. The HP P6000 Control Panel GUI appears.

Connecting through a private network
1. Press and hold the recessed Reset button (3, Figure 18 (page 44)) for 4 to 5 seconds.
2. The green LED on the management module (1, Figure 18 (page 44)) blinks to indicate the configuration reset has started.
NOTE: Change your browser settings for the HP P6000 Control Panel as described in the HP P6000 Command View Installation Guide. You must have administrator privilege to change the settings in the HP P6000 Control Panel. To change the default operating mode: 1. Connect to the management module using one of the methods described in “Connecting through a public network” (page 44) or “Connecting through a private network” (page 45). 2. Log into the HP P6000 Control Panel as an HP P6000 administrator.
NOTE: For more information on using the utility, see the HP Storage System Scripting Utility Reference. See “Related documentation” (page 197).

1. Double-click the SSSU desktop icon to run the application.
2. When prompted, enter Manager (management server name or IP address), User name, and Password.
3. Enter LS SYSTEM to display the storage systems managed by the management server.
4. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
Example 1 Saving configuration data on a Windows host

1. Double-click on the SSSU desktop icon to run the application.
2. When prompted, enter Manager (management server name or IP address), User name, and Password.
3. Enter LS SYSTEM to display the storage systems managed by the management server.
4. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
5.
Figure 20 iSCSI Controller Configuration Selection window NOTE: A Restore action will reboot the module.
3 Configuring application servers Overview This chapter provides general connectivity information for all the supported operating systems. Where applicable, an OS-specific section is included to provide more information. Clustering Clustering is connecting two or more computers together so that they behave like a single computer. Clustering is used for parallel processing, load balancing, and fault tolerance.
Testing connections to the array After installing the FCAs, you can create and test connections between the host server and the array. For all operating systems, you must: • Add hosts • Create and present virtual disks • Verify virtual disks from the hosts The following sections provide information that applies to all operating systems. For OS-specific details, see the applicable operating system section. Adding hosts To add hosts using HP P6000 Command View: 1.
Creating and presenting virtual disks

To create and present virtual disks to the host server:
1. From HP P6000 Command View, create a virtual disk on the storage system.
2. Specify values for the following parameters:
• Virtual disk name
• Vraid level
• Size
3. Present the virtual disk to the host you added.
4. If applicable (AIX or OpenVMS) select a LUN number if you chose a specific LUN on the Virtual Disk Properties window.
# ioscan -fnCdisk
Class    I  H/W Path             Driver    S/W State  H/W Type    Description
========================================================================================
ba       3  0/6                  lba       CLAIMED    BUS_NEXUS   Local PCI Bus Adapter (782)
fc       2  0/6/0/0              td        CLAIMED    INTERFACE   HP Tachyon XL2 FC Mass Stor Adap
                                 /dev/td2
fcp      0  0/6/0/0.39           fcp       CLAIMED    INTERFACE   FCP Domain
ext_bus  4  0/6/0/0.39.13.0.0    fcparray  CLAIMED    INTERFACE   FCP Array Interface
target   5  0/6/0/0.39.13.0.0.0  tgt       CLAIMED    DEVICE
ctl      4  0/6/0/0.39.13.0.
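If newly presented LUNs appear in the ioscan output without device special files, the files can typically be created without a reboot. A minimal sketch using standard HP-UX tools:

# Rescan, create any missing device special files, then re-check
ioscan -fnC disk
insf -e
ioscan -funC disk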
IBM AIX Accessing IBM AIX utilities You can access IBM AIX utilities such as the Object Data Manager (ODM), on the following website: http://www.hp.com/support/downloads In the Search products box, enter MPIO, and then click AIX MPIO PCMA for HP Arrays. Select IBM AIX, and then select your software storage product.
Linux Driver failover mode If you use the INSTALL command without command options, the driver’s failover mode depends on whether a QLogic driver is already loaded in memory (listed in the output of the lsmod command). Possible driver failover mode scenarios include: • If an hp_qla2x00src driver RPM is already installed, the new driver RPM uses the failover of the previous driver package. • If there is no QLogic driver module (qla2xxx module) loaded, the driver defaults to failover mode.
# modprobe qla2400

To reboot the server, enter the reboot command.

CAUTION: If the boot device is attached to the SAN, you must reboot the host.

7. To verify which RPM versions are installed, use the rpm command with the -q option. For example:
# rpm -q hp_qla2x00src
# rpm -q fibreutils

Upgrading Linux components

If you have any installed components from a previous solution kit or driver kit, such as the qla2x00 RPM, invoke the INSTALL script with no arguments, as shown in the following example:
# ./INSTALL
# ./INSTALL -F

Compiling the driver for multiple kernels

If your system has multiple kernels installed on it, you can compile the driver for all the installed kernels by setting the INSTALLALLKERNELS environmental variable to y and exporting it by issuing the following commands:
# INSTALLALLKERNELS=y
# export INSTALLALLKERNELS

You can also use the -a option of the INSTALL script as follows:
# ./INSTALL -a
"Wrote: ...rpm". This line identifies the location of the binary RPM. 4. Copy the binary RPM to the production servers and install it using the following command: # rpm -ivh hp_qla2x00-version-revision.architecture.rpm HBA drivers For most configurations and latest version of linux distributions, native HBA drivers are the supported drivers. Native driver means the driver that is included with the OS distribution. NOTE: The term inbox driveris also sometimes used and means the same as native driver.
Console LUN ID and OS unit ID HP P6000 Command View software contains a box for the Console LUN ID on the Initialized Storage System Properties window. It is important that you set the Console LUN ID to a number other than zero (0). If the Console LUN ID is not set or is set to zero (0), the OpenVMS host will not recognize the controller pair. The Console LUN ID for a controller pair must be unique within the SAN. Table 11 (page 59) shows an example of the Console LUN ID.
6. Using the driver and device handle, enter the drvcfg -s driver_handle device_handle command to invoke the EFI Driver configuration utility. For example:
Shell> drvcfg -s 22 25
7. From the Fibre Channel Driver Configuration Utility list, select item 8 (Info) to find the WWN for that particular port.
NOTE: Restarting the host system shows any newly presented virtual disks because a hardware scan is performed as part of the startup. If you are unable to access the virtual disk, do the following: • Check the switch zoning database. • Use HP P6000 Command View to verify the host presentations. • Check the SRM console firmware on AlphaServers. • Ensure that the correct host is selected for this virtual disk and that a unique OS Unit ID is used in HP P6000 Command View.
Loading the operating system and software Follow the manufacturer’s instructions for loading the operating system (OS) and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Configuring FCAs with the Oracle SAN driver stack Oracle-branded FCAs are supported only with the Oracle SAN driver stack. The Oracle SAN driver stack is also compatible with current Emulex FCAs and QLogic FCAs.
1. Ensure that you have the latest supported version of the lpfc driver (see http://www.hp.com/storage/spock). You must sign up for an HP Passport to enable access. For more information on how to use SPOCK, see the Getting Started Guide (http://h20272.www2.hp.com/Pages/spock_overview/introduction.html).
2. Edit the following parameters in the /kernel/drv/lpfc.conf driver configuration file to set up the FCAs for a SAN infrastructure:
topology=2;
scan-down=0;
nodev-tmo=60;
linkdown-tmo=60;
3.
7. Reboot the server to implement the changes to the configuration files.
8. If LUNs have been preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after configuring the file.

NOTE: The lpfc driver is not supported for Oracle StorEdge Traffic Manager/Oracle Storage Multipathing. To configure an Emulex FCA using the Oracle SAN driver stack, see “Configuring FCAs with the Oracle SAN driver stack” (page 62).
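A brief sketch of that rediscovery step (illustrative only):

# Rebuild device nodes for the sd driver, then list disks non-interactively
devfsadm -i sd
format </dev/null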
hba1-SCSI-target-id-31-fibre-channel-port-name="50001fe10027093b";

NOTE: Replace the WWPNs in the example with the WWPNs of your array ports.

6. If the qla2300 driver is version 4.13.01 or earlier, for each LUN that users will access, add an entry to the /kernel/drv/sd.conf file:
name="sd" class="scsi" target=20 lun=1;
name="sd" class="scsi" target=21 lun=1;
name="sd" class="scsi" target=30 lun=1;
name="sd" class="scsi" target=31 lun=1;

If LUNs are preconfigured in the /kernel/drv/sd.
Configuring with Veritas Volume Manager The Dynamic Multipathing (DMP) feature of Veritas Volume Manager (VxVM) can be used for all FCAs and all drivers. EVA disk arrays are certified for VxVM support. When you install FCAs, ensure that the driver parameters are set correctly. Failure to do so can result in a loss of path failover in DMP. For information about setting FCA parameters, see “Configuring FCAs with the Oracle SAN driver stack” (page 62) and the FCA manufacturer’s instructions.
Example 4 Setting the I/O policy

# vxdmpadm getattr arrayname EVA8100 iopolicy
ENCLR_NAME     DEFAULT        CURRENT
============================================
EVA8100        Round-Robin    Round-Robin

# vxdmpadm setattr arrayname EVA8100 iopolicy=adaptive

# vxdmpadm getattr arrayname EVA8100 iopolicy
ENCLR_NAME     DEFAULT        CURRENT
============================================
EVA8100        Round-Robin    Adaptive

Configuring virtual disks from the host

The procedure used to configure the LUN path to the array depends on the FCA driver
50001fe1002709e9,5 • Emulex (lpfc)/QLogic (qla2300) drivers: ◦ You can retrieve the WWPN by checking the assignment in the driver configuration file (the easiest method, because you then know the assigned target) or by using HBAnyware/SANSurfer. ◦ You can retrieve the WWLUN ID by using HBAnyware/SANSurfer. You can also retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however, it is cumbersome and difficult to read.
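With the Oracle SAN driver stack, the luxadm utility offers another way to correlate array port WWPNs with device paths. A minimal sketch (the device path shown is illustrative):

# Probe for SAN devices, then display details for one device path
luxadm probe
luxadm display /dev/rdsk/c2t50001FE1002709F8d1s2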
Example 5 Format command

# format
Searching for disks...done
c2t50001FE1002709F8d1: configured with capacity of 1008.00MB
c2t50001FE1002709F8d2: configured with capacity of 1008.00MB
c2t50001FE1002709FCd1: configured with capacity of 1008.00MB
c2t50001FE1002709FCd2: configured with capacity of 1008.00MB
c3t50001FE1002709F9d1: configured with capacity of 1008.00MB
c3t50001FE1002709F9d2: configured with capacity of 1008.00MB
c3t50001FE1002709FDd1: configured with capacity of 1008.00MB
c3t50001FE1002709FDd2: configured with capacity of 1008.00MB
7. For each new device, use the disk command to select another disk, and then repeat 1 through 6.
8. Repeat this labeling procedure for each new device. (Use the disk command to select another disk.)
9. When you finish labeling the disks, enter quit or press Ctrl+D to exit the format utility.

For more information, see the System Administration Guide: Devices and File Systems for your operating system, available on the Oracle website: http://www.oracle.com/technetwork/indexes/documentation/index.html.
Setting the multipathing policy

You can set the multipathing policy for each LUN or logical drive on the SAN to one of the following:
• Most recently used (MRU)
• Fixed
• Round robin

To change the multipathing policy, use the VMware vSphere GUI under the Configuration tab and select Storage. Then select Devices.

Figure 22 Setting multipathing policy

Use the GUI to change policies, or you can use the following commands from the CLI:

ESX 4.x commands
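A sketch of typical ESX 4.x CLI usage follows; the device ID is illustrative and the available path selection policy (PSP) names should be verified on your build:

# List devices and their current policies, then set one device to MRU
esxcli nmp device list
esxcli nmp device setpolicy --device naa.6001438002a56f220001100000710000 --psp VMW_PSP_MRU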
You can also set the multipathing policy from the VMware Management User Interface (MUI) by clicking the Failover Paths tab in the Storage Management section and then selecting the Edit… link for each LUN whose policy you want to modify.

ESXi 5.x commands
• The # esxcli storage nmp device set --device naa.6001438002a56f220001100000710000 --psp VMW_PSP_MRU command sets device naa.6001438002a56f220001100000710000 with an MRU multipathing policy.
• The # esxcli storage nmp device set --device naa.
Verifying virtual disks from the host Use the VMware vCenter management GUI to check all devices (see figure below). HP P6000 EVA Software Plug-in for VMware VAAI The vSphere Storage API for Array Integration (VAAI) is included in VMware vSphere solutions. VAAI can be used to offload certain functions from the target VMware host to the storage array. With the tasks being performed more efficiently by the array instead of the target VMware host, performance can be greatly enhanced.
NOTE: By default, the four VAAI primitives are enabled.

NOTE: The EVA VAAI Plug-In is required with vSphere 4.1 in order to permit discovery of the EVA VAAI capability. This is not required for vSphere 5 or later.

1. Install the XCS controller software.
2. Enable the primitives from the ESX server. Enable and disable these primitives through the following advanced settings (a command sketch follows the list):
• DataMover.HardwareAcceleratedMove (full copy)
• DataMover.HardwareAcceleratedInit (block zeroing)
• VMFS3.HardwareAcceleratedLocking (hardware-assisted locking)
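One common way to query and toggle these settings from the service console or ESXi shell is esxcfg-advcfg; a minimal sketch (a value of 1 enables, 0 disables):

# Query the full-copy primitive, then enable all three settings
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -s 1 /VMFS3/HardwareAcceleratedLocking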
3. Placing the target VMware host in maintenance mode.
4. Invoking the software tool to install the HP VAAI Plug-in. Automated installation steps include:
a. Installing the HP VAAI plug-in driver (hp_vaaip_p6000) on the target VMware host.
b. Adding VIB details to the target VMware host.
c. Creating VAAI claim rules.
d. Loading and executing VAAI claim rules.
5. Restarting the target VMware host.
6. Taking the target VMware host out of maintenance mode.
4. Verify the installation:
a. Check for new HP P6000 claim rules. Using the service console, enter:
esxcli corestorage claimrule list -c VAAI

The return display will be similar to the following:
Rule Class  Rule  Class    Type    Plugin          Matches
VAAI        5001  runtime  vendor  hp_vaaip_p6000  vendor=HP model=HSV
VAAI        5001  file     vendor  hp_vaaip_p6000  vendor=HP model=HSV

b. Check for claimed storage devices.
2. Enter maintenance mode.
3. Enter a command using the following syntax:
vicfg-hostops.pl --server Host_IP_Address --username User_Name --password Account_Password -o enter

Install the VAAI Plug-in using vihostupdate.
4. Enter a command using the following syntax:
vihostupdate.pl --server Host_IP_Address --username User_Name --password Account_Password --bundle hp_vaaip_p6000_offline-bundle-xyz --install

Restart the target VMware host.
5. Enter a command using the following syntax:
vicfg-hostops.
NOTE: VAAI device status will be "Unknown" until all VAAI primitives are attempted by ESX on the device and completed successfully. Upon completion, VAAI device status will be “Supported." Installing the VAAI Plug-in using VUM NOTE: • This installation method is supported for use with VAAI Plug-in versions 1.00 and 2.00, in ESX/ESXi 4.1 environments. • Installing the plug-in using VMware Update Manager is the recommended method. Installing the VAAI Plug-in using VUM consists of two steps: 1.
4. Create a new Baseline set for this offline plug-in:
a. Select the Baselines and Groups tab.
b. Above the left pane, click Create.
c. In the New Baseline window:
• Enter a name and a description. (Example: HP P6000 Baseline and VAAI Plug-in for HP EVA)
• Select Host Extension.
• Click Next to proceed to the Extensions window.
d. In the Extensions window:
• Select HP EVA VAAI Plug-in for VMware vSphere x.x, where x.x represents the plug-in version.
NOTE: • In the Tasks & Events section, the following tasks should have a Completed status: Remediate entry, Install, and Check. • If any of the above tasks has an error, click the task to view the detail events information. Verifying VAAI status 1. 2. 3. From the vCenter Server, click the Home Navigation bar and then click Hosts and Clusters. Select the target VMware host from the list and then click the Configuration tab. Click the Storage Link under Hardware.
Uninstalling VAAI Plug-in using VMware native tools (esxupdate)
1. Enter maintenance mode.
2. Query the installed VAAI Plug-in to determine the name of the VAAI Plug-in bulletin to uninstall. Enter a command using the following syntax:
$host# esxupdate --vib-view query | grep hp-vaaip-p6000
3. Uninstall the VAAI Plug-in. Enter a command using the following syntax:
$host# esxupdate remove -b VAAI_Plug_In_Bulletin_Name --maintenancemode
4. Restart the host.
5. Exit maintenance mode.
4 Replacing array components Customer self repair (CSR) Table 16 (page 83) and Table 17 (page 84) identify hardware components that are customer replaceable. Using HP Insight Remote Support software or other diagnostic tools, a support specialist will work with you to diagnose and assess whether a replacement component is required to address a system problem. The specialist will also help you determine whether you can perform the replacement.
Figure 23 Example of typical product label 1. Spare component number Replaceable parts This product contains the replaceable parts listed in “Controller enclosure replacement parts ” (page 83) and “Disk enclosure replaceable parts ” (page 84). Parts that are available for customer self repair (CSR) are indicated as follows: ✓ Mandatory CSR where geography permits. Order the part directly from HP and repair the product yourself. On-site or return-to-depot repair is not provided under warranty.
Table 16 Controller enclosure replacement parts (continued)

Description                              Spare part number    CSR status
Array riser assembly                     461491–005           •
Array power UID                          466264–001           •
P6300 bezel assembly                     583395–001           ✓
P6500 bezel assembly                     583396–001           ✓
P63x0 bezel assembly                     676972-001           ✓
P65x0 bezel assembly                     676973-001           ✓
Y-cable, 2 m                             583399–001           •
SAS cable, SPS-CA, EXT Mini SAS, 2M      408767-001           •

Table 17 Disk enclosure replaceable parts

Description                              Spare part number    CSR status
Disk drive, 300 GB, 10K, SFF, 6
Table 17 Disk enclosure replaceable parts (continued)

Description                              Spare part number    CSR status
External mini-SAS Cable, 0.5m            408765-001           •
Rackmount kit, 1U/2U                     519318-001           •

For more information about CSR, contact your local service provider or see the CSR website:
http://www.hp.com/go/selfrepair

To determine the warranty service provided for this product, see the warranty information website:
http://www.hp.
• HP Controller Enclosure Battery Replacement Instructions • HP Controller Enclosure Cache DIMM Replacement Instructions • HP Controller Enclosure Fan Module Replacement Instructions • HP Controller Enclosure LED Display Replacement Instructions • HP Controller Enclosure Management Module Replacement Instructions • HP Controller Enclosure Midplane Replacement Instructions • HP Controller Enclosure Power Supply Replacement Instructions • HP Controller Enclosure Riser Assembly Replacement I
5 iSCSI or iSCSI/FCoE configuration rules and guidelines This chapter describes the iSCSI configuration rules and guidelines for the HP P6000 iSCSI and iSCSI/FCoE modules. iSCSI or iSCSI/FCoE module rules and supported maximums The iSCSI or iSCSI/FCoE modules are configured in a dual-controller configuration in the HP P6000. Dual-controller configurations provide for high availability with failover between iSCSI or iSCSI/FCoE modules. All configurations are supported as redundant pairs only.
Figure 24 Mixed FC and FCoE storage configuration using FC and FCoE storage targets

Figure 25 FCoE support
The following is an example of a Mixed FC and FCoE storage configuration:

Figure 26 Mixed FC and FCoE storage configuration

The following is an example of an FC and FCoE storage configuration with Cisco Fabric Extender for HP BladeSystem
NOTE: HP recommends that at least one zone be created for the FCoE WWNs from each port of the HP P6000 with the iSCSI/FCoE modules. The zone should also contain CNA WWNs. Zoning should include member WWNs from each one of the iSCSI/FCoE modules to ensure configuration of multipath redundancy. Operating system and multipath software support This section describes the iSCSI or iSCSI/FCoE module's operating system, multipath, and cluster support.
Table 18 Operating system and multipath software support

Apple Mac OS X: Multipath software: None; Clusters: None; Connectivity: iSCSI
Microsoft Windows Server 2008, 2003, Hyper-V, and 2012: Multipath software: MPIO with HP DSM or MPIO with Microsoft DSM; Clusters: MSCS; Connectivity: iSCSI, FCoE
Red Hat Linux, SUSE Linux: Multipath software: Device Mapper; Clusters: None; Connectivity: iSCSI, FCoE
Solaris: Multipath software: Solaris MPxIO; Clusters: None; Connectivity: iSCSI
VMware: Multipath software: VMware MPxIO; Clusters: None; Connectivity: iSCSI, FCoE

EVA storage systems: EVA4400/4400 with the embedded switch; EVA4000/4100/6000/6100/8000/8100; EVA6400/8
iSCSI Initiator operating system considerations (a registry sketch follows this list):
• Host mode setting – Microsoft Windows 2012, Windows 2008 or Windows 2003
• The TCPIP parameter Tcp1323Opts must be entered in the registry with a value of DWord=2 under the registry setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.
• The TimeOutValue parameter should be entered in the registry with a value of DWord=120 under the registry setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk.
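Both values can be set from an elevated Windows command prompt; a minimal sketch using reg.exe (reboot afterward so the TCP/IP change takes effect):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v Tcp1323Opts /t REG_DWORD /d 2 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 120 /f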
VMware iSCSI Initiator rules and guidelines The VMware iSCSI Initiator supports the following: • Native iSCSI software initiator in VMware ESX 4.0/3.
Set up the iSCSI Initiator

Windows

For Windows Server 2012 and Windows Server 2008, the iSCSI initiator is included with the operating system. For Windows Server 2003, you must download and install the iSCSI initiator (version 2.08 recommended).

HP recommends the following Windows HKEY_LOCAL_MACHINE registry settings:
Tcp1323Opts = "2"
TimeOutValue = "120"

NOTE: Increasing the TimeOutValue from the default of 60 to 120 will avoid initiator I/O timeouts during controller code loads and synchronizations.
1. Install the HP P6000 iSCSI/FCoE and MPX200 Multifunction Router kit. a. Start the installer by running Launch.exe; if you are using a CD-ROM, the installer should start automatically. b. Click Install iSCSI/FCoE software package (see Figure 28 (page 95) and Figure 29 (page 95)).
Figure 30 iSCSI Initiator Installation c. Click the Microsoft iSCSI Initiator icon to open the Control Panel applet. The iSCSI Initiator Properties window opens. d. Click the Discovery tab (see Figure 31 (page 96)). Figure 31 iSCSI Initiator Properties—Discovery tab e. In the Target Portals section, click Add. A dialog box opens to enter the iSCSI port IP Address. f. Click OK. The Discovery is now complete. 2.
Figure 32 iSCSI Initiator Properties—Discovery tab (Windows 2008)

a. From HP P6000 Command View, click the EVA storage system icon to start the iSCSI storage presentation. In adding a host, the iSCSI or iSCSI/FCoE modules are the target EVA storage system.

Figure 33 Add a host

b. Select the Hosts folder.
c. To create an iSCSI Initiator host, click Add host. A dialog box opens.
• Enter a name for the initiator host in the Name box.
• Select iSCSI as the Type.
• Select the initiator iSCSI qualified name (IQN) from the iSCSI node name list. Or, you can enter a port WWN.
• Select an OS from the Operating System list.
d. Create a virtual disk and present it to the host you created in Step 2.c. Note the numbers in the target IQN; these target WWNs will be referenced during Initiator login.
3. Set up the iSCSI disk on the iSCSI Initiator: a. Open the iSCSI Initiator Control Panel applet. b. Click the Targets tab and then the Refresh button to see the available targets (Figure 36 (page 99)). The status should be Inactive. Figure 36 iSCSI Initiator Properties—Targets tab c. Select the target IQN, keying off the module 1 or 2 field and the WWN field, noted in Step 2.d, and click Log On. A dialog box opens. d.
Microsoft MPIO support allows the initiator to log in to multiple sessions to the same target and aggregate the duplicate devices into a single device exposed to Windows. Each session to the target can be established using different NICs, network infrastructure, and target ports. If one session fails, another session can continue processing I/O without interruption to the application. The iSCSI target must support multiple sessions to the same target.
1. Check the box for Multipath I/O in the Add Features page.

Figure 37 Add Features page

2. Click Next and then click Install.
3. After the server reboots, add support for iSCSI devices using the MPIO applet.
Figure 38 MPIO Properties page before reboot NOTE: You must present a virtual disk to the initiator to enable the Add support for iSCSI devices checkbox. Figure 39 MPIO Properties page after reboot 4. A final reboot is required to get the devices MPIO-ed.
Installing the MPIO feature for Windows Server 2008

NOTE: Microsoft Windows 2008 includes a separate MPIO feature that requires installation for use. Microsoft Windows Server 2008 also includes the iSCSI Initiator. Download or installation is not required.

Installing the MPIO feature for Windows Server 2008:
1. Check the box for Multipath I/O in the Add Features page (Figure 37 (page 103)).

Figure 40 Add Features page

2. Click Next and then click Install.
3. After the server reboots, add support for iSCSI devices using the MPIO applet.
Figure 42 MPIO Properties page after reboot 4. A final reboot is required to get the devices MPIO-ed. Installing the MPIO feature for Windows Server 2003 For Windows Server 2003, if you are installing the initiator for the first time, check all the installation option checkboxes and then click Next to continue (Figure 43 (page 104)).
About Microsoft Windows Server 2003 scalable networking pack The Microsoft Windows Server 2003 Scalable Networking Pack (SNP) contains functionality for offloading TCP network processing to hardware. TCP Chimney is a feature that allows TCP/IP processing to be offloaded to hardware. Receive Side Scaling allows receive packet processing to scale across multiple CPUs. HP’s NC3xxx Multifunction Gigabit server adapters support TCP offload functionality using Microsoft’s Scalable Networking Pack (SNP).
Set up the iSCSI Initiator for Apple Mac OS X 1. 2. Install the ATTO iSCSI Macintosh Initiator v3.10 following the install instructions provided by the vendor. Run the Xtend SAN application to discover and configure the EVA iSCSI targets. The Xtend SAN iSCSI Initiator can discover targets either by static address or iSNS. For static address discovery: a. Select Discover Targets and then select Discover by DNS/IP (Figure 44 (page 106)). Figure 44 Discover targets b.
3. For iSNS discovery:
a. Select Initiator and then enter the iSNS name or IP address in the iSNS Address field (Figure 46 (page 107)).

Figure 46 iSNS discovery and verification

b. Test the connection from the initiator to the iSNS server by selecting Verify iSNS. If successful, select Save.
c. If necessary, working on the iSNS server, make the appropriate edits to add the Xtend SAN iSCSI Initiator to any iSNS discovery domains that include iSCSI module targets.
d. Select Discover Targets.
i. Select Status, select Network Node, and then select Login to connect to the module's target (Figure 48 (page 108)). The Network Node displays a status of Connected and the target status light turns green.
Storage setup for Apple Mac OS X 1. 2. Present LUNs using HP P6000 Command View. Verify that the EVA LUNs are presented to the Macintosh iSCSI Initiator: a. Open the Xtend SAN iSCSI application. b. Select the iSCSI or iSCSI/FCoE module target entry under the host name. c. Click the LUNs button. A list of presented EVA LUNs is displayed (Figure 49 (page 109)). Figure 49 Presented EVA LUNs NOTE: If no LUNs appear in the list, log out and then log in again to the target, or a system reboot may be required.
Figure 50 Configure initiator and targets 3. Click the Discovered Targets tab and enter your iSCSI target IP address (Figure 51 (page 110)). Figure 51 Discovered Targets tab 4. 110 Log in to the target (Figure 52 (page 111)).
Figure 52 Target login 5. Click the Connected Targets tab, and then click the Toggle Start-Up button on each target listed so the targets start automatically (Figure 53 (page 111)). Figure 53 Connected Targets tab Installing and configuring for Red Hat 5 To install and configure for Red Hat 5: NOTE: The iSCSI driver package is included but is not installed by default. Install the package iscsi—initiator—utils during or after operating system installation.
1. Use the iscsiadm command to control discovery and connectivity (a login sketch follows these steps):
# iscsiadm -m discovery -t st -p 10.6.0.33:3260
2. Edit the initiator name:
# vi /etc/iscsi/initiatorname.iscsi
3. To start the iSCSI service, use the service command:
# service iscsi start
4. Verify that the iSCSI service autostarts:
# chkconfig iscsi on
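After discovery, log the initiator in to the targets; a short open-iscsi sketch (the target IQN shown is illustrative):

# Log in to one discovered target, or to every discovered target at once
iscsiadm -m node -T iqn.2004-09.com.hp.fcgw.mez50.1.01.50014380025da538 -p 10.6.0.33:3260 --login
iscsiadm -m node --loginall=all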
applications or operating system utilities to use the standard SCSI device nodes to access iSCSI devices can result in sending SCSI commands to the wrong target or logical unit. To provide consistent naming, the iSCSI driver scans the system to determine the mapping from SCSI device nodes to iSCSI targets. The iSCSI driver creates a tree of directories and symbolic links under /dev/iscsi to make it easier to use a particular iSCSI target's logical unit.
NOTE: Because of the way Linux dynamically allocates SCSI device nodes as SCSI devices are found, the driver does not and cannot ensure that any particular SCSI device node /dev/sda, for example, always maps to the same iSCSI TargetName. The symlinks described in “Assigning device names” (page 112) are intended to provide application and fstab file persistent device mapping and must be used instead of direct references to particular SCSI device nodes.
Presenting EVA storage for Linux

To set up LUNs using HP P6000 Command View:
1. Set up LUNs using HP P6000 Command View. For procedure steps, see Step 2.
2. Set up the iSCSI drive on the iSCSI Initiator:
a. Restart the iSCSI services:
/etc/rc.d/init.d/iscsi restart
b.
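The remaining initiator-side steps follow the usual Linux block-device workflow; a sketch, assuming the new EVA LUN surfaced as /dev/sdb (confirm the actual name in /proc/scsi/scsi or dmesg):

# Identify, partition, format, and mount the newly presented LUN
cat /proc/scsi/scsi
fdisk /dev/sdb        # assumption: the LUN appeared as /dev/sdb
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt/eva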
Figure 55 Firewall Properties dialog box

d. Select the Software iSCSI check box to enable iSCSI traffic.
e. Click OK.
4. Enable the iSCSI software initiators:
a. In the VMware VI client, select the server from the inventory panel.
b. Click the Configuration tab, and then click Storage Adapters under Hardware.
c. Under iSCSI Software Adapter, choose the available software initiator.
d. Click the Properties link of the software adapter. The iSCSI Initiator Properties dialog box is displayed.
e.
Figure 57 Add Send Target Server dialog box d. e. 6. Enter the iSCSI IP address of the iSCSI or iSCSI/FCoE module. Click OK. To verify that the LUNs are presented to the VMware host, rescan for new iSCSI LUNs: a. In VMware’s VI client, select a server and click the Configuration tab. b. Choose Storage Adapters in the hardware panel and click Rescan above the Storage Adapters panel. The Rescan dialog box is displayed (see Figure 58 (page 117)). Figure 58 Rescan dialog box c. d.
MPxIO overview The Oracle multipathing software (MPxIO) provides basic failover and load-balancing capability to HP P6000, and EVA4x00/6x00/8x00 storage systems. MPxIO allows the merging of multiple SCSI layer paths, such as an iSCSI device exposing the same LUN via several different iSCSI target names. Because MPxIO is independent of transport, it can multipath a target that is visible on both iSCSI and FC ports.
2. Modify load balancing to none:
load-balance="none";
3. Modify auto-failback to disable:
auto-failback="disable";
4. Add the following lines to cover the 4x00/6x00/8x00/P6000 HP arrays:
device-type-scsi-options-list = "HP      HSV", "symmetric-option";
symmetric-option = 0x1000000;

NOTE: You must enter six spaces between HP and HSV, as shown.

Example: HP storage array settings in /kernel/drv/scsi_vhci.conf:
#
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
. . . # devices on your system. Please refer to sgen(7d) for details. # # sgen may be configured to bind to SCSI devices exporting a particular device # type, using the device-type-config-list, which is a ',' delimited list of # strings. # device-type-config-list="array_ctrl"; . . . # After configuring the device-type-config-list and/or the inquiry-config-list, # the administrator must uncomment those target/lun pairs at which there are # devices for sgen to control.
To enable iSCSI target discovery:
1. Enable SendTargets discovery:
# iscsiadm modify discovery -t enable
2. Verify the SendTargets setting is enabled:
# iscsiadm list discovery
3. The iSCSI or iSCSI/FCoE module has multiple iSCSI ports available to the Solaris iSCSI initiator. To discover the targets available, enter the following command for each iSCSI port IP address that the iSCSI initiator will access:
# iscsiadm add discovery-address 'iscsi port IP address'
4.
Target: iqn.2004-09.com.hp.fcgw.mez50.1.01.
3. mpathadm show lu ‘logical-unit’ This command lists details regarding a specific logical unit. This command can help verify symmetric mode, load balancing, and autofailback settings, as well as path and target port information. Example: #mpathadm show lu /dev/rdsk/c5t600508B4000B15A200005000038E0000d0s2 Logical Unit: /dev/rdsk/c5t600508B4000B15A200005000038E0000d0s2 mpath-support: libmpscsi_vhci.
7. Select the desired options on the Load Balance Policy menu to set the policy. Figure 59 iSCSI Initiator MPIO properties Load balancing features of Microsoft MPIO for iSCSI The features of Microsoft MPIO for iSCSI include the following: • Failover Only. No load balancing is performed. There is a single active path and the rest of the paths are standby paths. The active path is used for sending all I/O. If the active path fails, one of the standby paths is used.
Microsoft MPIO with QLogic iSCSI HBA The QLogic iSCSI HBA is supported in a multipath Windows configuration that is used in conjunction with Microsoft iSCSI Initiator Services and Microsoft MPIO. Because the iSCSI driver resides on board the QLogic iSCSI HBA, it is not necessary to install the Microsoft iSCSI Initiator. Installing the QLogic iSCSI HBA Install the QLogic iSCSI HBA hardware and software following the instructions in the QLogic installation manual.
2. Click Yes to start the general configuration wizard (see Figure 62 (page 126)). Use the Wizard to: • Choose iSCSI HBA port to configure the QLogic iSCSI HBA. • Configure HBA Port network settings. • Configure HBA Port DNS settings (optional). • Configure SLP Target Discovery settings (optional). • Configure iSNS Target Discovery settings (optional).
Figure 63 HBA Port Target Configuration 3. 4. 5. 6. Repeat Steps 1 and 2 to add each additional iSCSI or iSCSI/FCoE target iSCSI port. Click Next. To enable the changes, enter the SMS password: config. Select the Target Settings tab. Verify that the HBA state is Ready, Link Up and each target entry’s state is Session Active (Figure 64 (page 127)).
1. Follow the procedures in Step 2 to:
• Create an iSCSI host.
• Present LUNs to the iSCSI host.
2. On the iSCSI HBA tab (Figure 65 (page 128)), verify that the QLogic iSCSI HBA is connected to the iSCSI LUNs in SMS under the HBA iSCSI port.

Figure 65 HBA iSCSI port connections

Use Microsoft's iSCSI services to manage the iSCSI target login and LUN load balancing policies.

Installing the HP MPIO Full Featured DSM for EVA

Follow the steps in the Installation and Reference Guide located at:
http://h20000.
Figure 66 Example: HP MPIO DSM Manager with iSCSI devices Microsoft Windows Cluster support Microsoft Cluster Server for Windows 2003 iSCSI failover clustering is supported by the iSCSI or iSCSI/FCoE modules. For more information, see: http://www.microsoft.com/windowsserver2003/technologies/storage/ iscsi/iscsicluster.mspx Requirements • Operating system: Windows Server 2003 Enterprise, SP2, R2, x86/x64 • Firmware: minimum version—3.1.0.
Figure 67 iSCSI Persistent Reservation Setup window

3. Click Done to finish.

Each cluster is required to have its own value, and each node of a single cluster must have its own value. For example, Cluster A could have the default setting of AABBCCCCBBAA. Possible node settings:
Node 1: 1
Node 2: 2
Node 3: 3
Node 4: 4

When the HP Full Featured DSM for EVA is installed, it sets up Persistent Reservation in the registry by default. For more information on the HP DSM, see: http://h20000.www2.hp.
Setting up authentication Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol used for secure logon between the iSCSI Initiator and iSCSI target. CHAP uses a challenge-response security mechanism for verifying the identity of an initiator without revealing a secret password that is shared by the two entities. It is also referred to as a three-way handshake.
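As a conceptual sketch (not an EVA command), the response in this exchange is a one-way hash computed per RFC 1994, so the secret itself never crosses the wire; the identifier, secret, and challenge bytes below are illustrative:

# CHAP response = MD5(identifier || secret || challenge)
printf '%b' '\x01CHAPsecret01\x0a\x1b\x2c\x3d' | openssl md5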
Linux version
• CHAP is supported with the Linux open-iscsi Initiator and the iSCSI or iSCSI/FCoE modules.
• CHAP setup with the Linux iSCSI Initiator is not supported with the iSCSI or iSCSI/FCoE modules.
ATTO Macintosh CHAP restrictions
The ATTO Macintosh iSCSI Initiator does not support CHAP at this time.
Recommended CHAP policies
• The same CHAP secret should not be configured for authentication of multiple initiators or multiple targets.
Table 22 iSCSI or iSCSI/FCoE module secret settings

Source                       Module setting (example)    MS Initiator action    MS Initiator setting (example)
iSCSI Port                   N/A                         General Tab Secret     N/A
Discovered iSCSI Initiator   CHAPsecret01                Add Target Portal      CHAPsecret01
iSCSI Presented Target       N/A                         Log on to Target       CHAPsecret01

NOTE: These are examples of secret settings. Configure CHAP with settings that apply to your specific network environment.
1.
2. Enable CHAP for the Microsoft iSCSI Initiator:
a. Click Discovery.
• For manually discovering iSCSI target portals:
a. Click Add under Target Portals.
b. Enter the IP address of the iSCSI port of the iSCSI or iSCSI/FCoE module.
c. Click Advanced.
d. Select the CHAP Login Information check box.
e. Enter the CHAP secret for the iSCSI or iSCSI/FCoE module-discovered iSCSI Initiator in the Target Secret box. For example: CHAPsecret01
f. Click OK and the initiator completes Target discovery.
Enable CHAP for the Microsoft iSCSI Initiator
1. Click Discovery.
2. For manually discovering iSCSI target portals:
a. Click Add under Target Portals.
b. Enter the IP address of the iSCSI port of the iSCSI or iSCSI/FCoE module.
c. Click Advanced.
d. Select the CHAP Login Information checkbox.
e. Enter the CHAP secret for the iSCSI or iSCSI/FCoE module-discovered iSCSI Initiator in the Target Secret box, for example, CHAPsecret01.
f. Click OK and the initiator completes Target discovery.
6. Using iscsiadm, log in to the iSCSI target. For example:
[root@sanergy33 iscsi]# iscsiadm --mode node --targetname iqn.2004-09.com.hp.fcgw.mez50.1.01.50014380025da538 --login
The following is a sample iscsid.conf file for CHAP:
# *************
# CHAP Settings
# *************
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
#node.session.auth.authmethod = CHAP
node.session.auth.
1. Enable CHAP for the iSCSI or iSCSI/FCoE controller-discovered iSCSI Initiator entry. CHAP can be enabled via the CLI only. To enable CHAP for the iSCSI or iSCSI/FCoE controller-discovered iSCSI Initiator entry using the iSCSI or iSCSI/FCoE controller CLI:
a. If the iSCSI Initiator is not listed under the set chap command:
• HP Command View Option: add the initiator iqn name string via HP Command View's Add Host tab.
b.
2.
3. Enable CHAP for the Microsoft iSCSI Initiator.
a. Click the General tab.
b. Click Secret in the middle of the screen.
c. Click Reset.
d. Enter the iSCSI or iSCSI/FCoE controller iSCSI Presented Target CHAP secret. For example: hpstorageworks.
e. Click Discovery.
• For manually discovering iSCSI target portals:
a. Click Add under Target Portals.
b. Enter the IP address of the iSCSI port of the iSCSI or iSCSI/FCoE controller.
c. Click Advanced.
d. Select the CHAP Login Information check box.
1. Enable CHAP for the iSCSI or iSCSI/FCoE controller-discovered iSCSI Initiator entry. CHAP can be enabled via the CLI only. To enable CHAP for the iSCSI or iSCSI/FCoE controller-discovered iSCSI Initiator entry using the iSCSI or iSCSI/FCoE controller CLI:
a. If the iSCSI Initiator is not listed under the set chap command:
• HP Command View Option: add the initiator iqn name string via the HP Command View Add Host tab.
3. Enable CHAP for the Microsoft iSCSI Initiator.
a. Click the General tab.
b. Click Secret in the middle of the screen.
c. Click Reset.
d. Enter the iSCSI or iSCSI/FCoE controller iSCSI Presented Target CHAP secret. For example: hpstorageworks.
e. Click OK.
f. Click Discovery.
• For manually discovering iSCSI target portals:
a. Click Add under Target Portals.
• Using iSNS for Target discovery:
a. Click Add under iSNS Servers.
b. Enter the IP address of the iSNS server.
c. Click OK.
1. Enable CHAP for the iSCSI or iSCSI/FCoE controller-discovered iSCSI Initiator entry. CHAP can be enabled via the CLI only. To enable CHAP for the iSCSI or iSCSI/FCoE controller-discovered iSCSI Initiator entry using the iSCSI or iSCSI/FCoE controller CLI:
a. If the iSCSI Initiator is not listed under the set chap command:
• HP Command View Option: add the initiator iqn name string via Command View's Add Host tab.
b.
2.
4. Enable CHAP for the Microsoft iSCSI Initiator.
a. Click the General tab.
b. Click Secret in the middle of the screen.
c. Click Reset.
d. Enter the iSCSI or iSCSI/FCoE controller iSCSI Presented Target CHAP secret. For example: hpstorageworks.
e. Click OK.
f. Click Discovery.
• For manually discovering iSCSI target portals:
a. Click Add under Target Portals.
• Using iSNS for target discovery:
a. Click Add under iSNS Servers.
b. Enter the IP address of the iSNS server.
c. Click OK.
3. Set up the username and password for the initiator and portal for the discovery session. For example:
# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password
discovery.sendtargets.auth.username = iqn.1994-05.com.redhat:fc813cac13.sanergy33
#discovery.sendtargets.auth.
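After the discovery CHAP settings are saved in iscsid.conf, the discovery session can be re-run from the initiator. A minimal sketch, assuming the standard open-iscsi iscsiadm utility; the portal IP address and port are illustrative only:
# Discover targets on the module's iSCSI portal (discovery CHAP settings apply)
iscsiadm --mode discovery --type sendtargets --portal 33.33.52.96:3260
# Log in to all discovered target nodes
iscsiadm --mode node --login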
iSCSI and FCoE thin provision handling
iSCSI and FCoE presented LUNs that experience the thin provision (TP) Overcommitted state, as detected by HP P6000 Command View and illustrated in Figure 68 (page 144), will generally be write-protected until the Overcommitted state is cleared.
Figure 69 Windows 2008 initiator iSCSI-presented LUN reported as TP Overcommitted
Lists of all presented LUNs, per Virtual Port Group, are always available by navigating to the HOSTs tab and then to one of the four iSCSI HOSTs VPgroups, as illustrated in Figure 70 (page 146).
Figure 70 iSCSI Host presented LUNs list Figure 71 (page 147) shows an iSCSI LUN being re-presented.
Figure 71 iSCSI LUN re-presented to iSCSI initiator, after clearing TP Overcommitted state The normal condition is illustrated in Figure 72 (page 148).
Figure 72 Normal view of iSCSI LUN presented to iSCSI initiator
6 Single path implementation This chapter provides guidance for connecting servers with a single path host bus adapter (HBA) to the Enterprise Virtual Array (EVA) storage system with no multipath software installed. A single path HBA is defined as: • A single HBA port to a switch with no multipathing software installed • A single HBA port to a switch with multipathing software installed HBA LUNs are not shared by any other HBA in the server or in the SAN.
Because of the risks of using servers with a single path HBA, HP recommends the following actions: • Use servers with a single path HBA that are not mission-critical or highly available. • Perform frequent backups of the single path server and its storage. Supported configurations All examples detail a small homogeneous Storage Area Network (SAN) for ease of explanation. Mixing of dual and single path HBA systems in a heterogeneous SAN is supported.
Figure 73 Single path HBA server without OpenVMS 1. Network interconnection 6. SAN switch 1 2. Single HBA server (Host 1) 7. SAN switch 2 3. Dual HBA server (Host 2) 8. Fabric zone 4. Management server 9. Controller A 5. Multiple single HBA paths 10. Controller B Figure 74 Single path HBA server with OpenVMS 1. Network interconnection 6. SAN switch 1 2. Single HBA server (Host 1) 7. SAN switch 2 3. Dual HBA server (Host 2) 8. Fabric zone 4. Management server 9. Controller A 5.
HP-UX configuration Requirements • Proper switch zoning must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs. • In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller.
Figure 75 HP-UX configuration
1. Network interconnection 5. SAN switch 1
2. Single HBA server (Host 1) 6. SAN switch 2
3. Dual HBA server (Host 2) 7. Controller A
4. Management server 8. Controller B
Windows Server 2003 (32-bit), Windows Server 2008 (32-bit), and Windows Server 2012 (32-bit) configurations
Requirements
• Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
NOTE: For additional risks, see “Windows Servers” (page 165). Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is not supported on single path HBA servers. Figure 76 Windows Server 2003 (32-bit) and Windows 2008 (32–bit) configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4.
Risks
• Single path failure will result in loss of connection with the storage system.
• Single path failure may cause the server to reboot.
• Controller shutdown puts the controller in a failed state that results in loss of data accessibility and loss of host data that has not been written to storage.
NOTE: For additional risks, see “Windows Servers” (page 165).
Limitations
• HP P6000 Continuous Access is not supported with single path configurations.
becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk. • HBA must be properly configured to work in a single HBA server configuration. The user is required to: ◦ Download and extract the contents of the TAR file. HBA configuration • Host 1 is a single path HBA host. • Host 2 is a multiple HBA host with multipathing software. See Figure 78 (page 156).
OpenVMS configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them. • Single path HBA server can be in the same fabric as servers with multiple HBAs.
Limitations • HP P6000 Continuous Access is not supported with single path configurations. Figure 79 OpenVMS configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8. Controller B Xen configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
Risks • Single path failure may result in data loss or disk corruption. Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is not supported. Figure 80 Xen configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8.
HBA configuration • Host 1 is a single path HBA. • Host 2 is a dual HBA host with multipathing software. See Figure 81 (page 160). Risks • Single path failure may result in data loss or disk corruption. NOTE: For additional risks, see “Linux” (page 166). Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single HBA path at the host server is not part of a cluster, unless in a Linux High Availability Cluster.
controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk. • Linux 64-bit servers can support up to 14 single or dual path HBAs per server. Switch zoning and SSP are required to isolate the LUNs presented to each HBA from each other. HBA configuration • Host 1 is a single path HBA. • Host 2 is a dual HBA host with multipathing software.
IBM AIX configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs. • In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller.
Figure 83 IBM AIX Configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8. Controller B VMware configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them.
Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single HBA path at the host server is not part of a cluster, unless in a VMware High Availability Cluster. • Booting from the SAN is supported on single path HBA servers. Figure 84 VMware configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8.
Fault stimulus: Server path failure
Failure effect: Short term: Data transfer stops. Possible I/O errors. Long term: Job hangs, cannot umount disk, fsck failed, disk corrupted, need mkfs disk.
Fault stimulus: Storage path failure
Failure effect: Short term: Data transfer stops. Possible I/O errors. Long term: Job hangs, replace cable, I/O continues. Without cable replacement job must be aborted; disk seems error free.
to a different served path. When the single-path node crashes, only the processes executing on that node fail. In either case, no data is lost or corrupted.
Fault stimulus: Switch failure (SAN switch disabled)
Failure effect: I/O is suspended or process is terminated across this HBA until switch is back online. No data is lost or corrupted. The operating system will report the volume in a Mount Verify state until the MVTIMEOUT limit is exceeded, when it then marks the volume as Mount Verify Timeout.
Fault stimulus: Server path failure
Failure effect: Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.
Fault stimulus: Storage path failure
Failure effect: Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.
Fault stimulus: Server path failure
Failure effect: Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.
Fault stimulus: Storage path failure
Failure effect: Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered; fsck should be run on any failed drives before remounting.
7 Troubleshooting
If the disk enclosure does not initialize
IMPORTANT: After a power failure, the system automatically returns to the last-powered state (On or Off) when A/C power is restored.
1. Ensure that the power on/standby button was pressed firmly and held for approximately three seconds.
2. Verify that the power on/standby button LED is green.
3. Verify that the power source is working:
a. Verify that the power supplies are working by viewing the power supply LEDs.
Is the power on/standby button LED amber?

Answer: No
Possible reasons: System functioning properly.
Possible solutions: No action required.

Answer: Yes
Possible reasons:
• The power on/standby button has not been pressed firmly or held long enough.
• The system midplane and/or power button/LED assembly might need to be replaced.
Possible solutions:
• Firmly press the power on/standby button and hold for approximately three seconds.
• Be sure that all components are fully seated.
• Contact an authorized service provider for assistance.
Is the fan LED amber?

Answer: No
Possible reasons: Functioning properly.
Actions: No action required.

Answer: Yes
Possible reasons: Fan might not be inserted properly, might have a damaged connector, or might have failed.
Actions:
• Be sure that the fan is undamaged and is fully seated.
• Contact an authorized service provider for assistance.

Effects of a disk drive failure
When a disk drive fails, all virtual disks that are in the same array are affected.
To minimize the likelihood of fatal system errors, take these precautions when removing failed drives:
• Do not remove a degraded drive if any other drive in the array is offline (the online LED is off). In this situation, no other drive in the array can be removed without data loss.
• Exceptions:
◦ When RAID1+0 is used, drives are mirrored in pairs.
When automatic data recovery has finished, the online LED of the replacement drive stops blinking and begins to glow steadily. Failure of another drive during rebuild If a non-correctable read error occurs on another physical drive in the array during the rebuild process, the Online LED of the replacement drive stops blinking and the rebuild abnormally terminates. If this situation occurs, restart the server. The system might temporarily become operational long enough to allow recovery of unsaved data.
Table 26 Controller status LEDs (continued) Item LED 3 Indication Flashing amber indicates a controller termination, or the system is inoperative and attention is required. Solid amber indicates that the controller cannot reboot, and that the controller should be replaced. If both the solid amber and solid blue LEDs are lit, the controller has completed a warm removal procedure, and can be safely swapped.
2. In HP P6000 Command View, click the General tab and then click the Locate button. Use the Locate ON and Locate OFF buttons to control the blue LED (see Figure 87 (page 175)).
Figure 87 Locate Hardware Device
iSCSI or iSCSI/FCoE module's log data
The iSCSI or iSCSI/FCoE modules maintain logs that can be displayed or collected through the CLI. The log is persistent through reboots or power cycles. To view the log, use the CLI command show logs.
Issue: Initiator cannot log in to the iSCSI or iSCSI/FCoE module target
Solution 1: Ensure the correct iSCSI port IP address is used.
Solution 2: In HP P6000 Command View, for each iSCSI controller 01 and 02, click the IP ports tab, then expand the TCP properties under the Advanced Settings. There should be available connections; if not, choose another IP port to log in to or reduce the connections from other initiators by logging out from unused connections (see Figure 89 (page 176)).
Figure 90 Host details
Figure 91 Target tab
Issue: Windows initiators may display Reconnecting if the NIC MTU changes after a connection has logged in.
Solution: Log out of those sessions and log on again to re-establish the Connected state.
Issue: When communication between HP P6000 Command View and the iSCSI or iSCSI/FCoE module is down, use the following options:
Solution 1: Refresh using Hardware > iSCSI Devices > iSCSI Controller 01 or 02 > Refresh button.
Solution 2.
2. Enter a valid IPv4 mgmt IP address under Mgmt Port and click the Save changes button. If only the IPv6 mgmt port IP address is set, enter a valid IPv6 management IP address under Mgmt Port and click the Save changes button.
NOTE: If you configure IPv6 on any iSCSI or iSCSI/FCoE module's iSCSI port, you must also configure IPv6 on the HP P6000 Command View EVA management server.
HP P6000 Command View issues and solutions
Issue: Discovered iSCSI Controller not found with selected EVA.
Volume information mismatch across cveva and Optimize ReTrim used space
There can be a mismatch between the Vdisk allocated size and the host volume size shown by the optimizer (slab count and volume information). Space reclamation is minimal for an iSCSI LUN during file deletion. Depending on the controller load, the efficiency of space reclamation might vary and reclamation might not start immediately.
8 Error messages
This list of error messages is in order by status code value, 0 to 243.
Table 27 Error Messages
0 Successful Status
Meaning: The SCMI command completed successfully.
How to correct: No corrective action required.
1 Object Already Exists
Meaning: The object or relationship already exists.
How to correct: Delete the associated object and try the operation again.
Table 27 Error Messages (continued)
12 Invalid Parameter handle
Meaning: The supplied handle is invalid. This can indicate a user error, program error, or a storage cell in an uninitialized state. In the following cases, the message can occur because the operation is not allowed when the storage cell is in an uninitialized state.
How to correct: In the following cases, the storage cell is in an uninitialized state, but no action is required:
Table 27 Error Messages (continued)
25
Meaning: Objects in your system are in use, and their state prevents the operation you wish to perform. Several states can cause this message. Case 1: The operation cannot be performed because an association exists with a related object, or the object is in an in-progress state.
How to correct: Case 1: Either delete the associated object or resolve the in-progress state. Case 2: Report the error to product support.
Table 27 Error Messages (continued)
27 Target Object Does Not Exist
Meaning: The operation cannot be performed because the object does not exist. This can indicate a user or program error.
How to correct: Report the error to product support.
28 Timeout
Meaning: A timeout has occurred in processing the request.
How to correct: Verify the hardware connections and that communication to the device is successful.
29 Unknown Id
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
Table 27 Error Messages (continued)
45 Not DR group member
Meaning: The operation cannot be performed because the virtual disk is not a member of a Continuous Access group.
How to correct: Configure the virtual disk to be a member of a Continuous Access group and retry the request.
46 Invalid DR mode
Meaning: The operation cannot be performed because the Continuous Access group is not in the required mode.
How to correct: Configure the Continuous Access group correctly and retry the request.
Table 27 Error Messages (continued)
59 Maximum Number of Objects Exceeded
Meaning: Case 1: The maximum number of items allowed has been reached. Case 2: The maximum number of EVA hosts has been reached. Case 3: The maximum number of port WWNs has been reached.
How to correct: Case 1: If this operation is still desired, delete one or more of the items and retry the operation. Case 2: If this operation is still desired, delete one or more of the EVA hosts and retry the operation.
Table 27 Error Messages (continued)
68 Obsolete
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
69 Obsolete
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
70 Image incompatible
Meaning: The firmware image file is incompatible with the current system configuration. Version conflict in upgrade or downgrade not allowed.
How to correct: Retrieve a valid firmware image file and retry the request.
Table 27 Error Messages (continued)
80 Invalid Volume Usage
Meaning: The disk volume is already a part of a disk group.
How to correct: Resolve the condition by setting the usage to a reserved state, wait for the usage to change to this state, and retry the request.
81 Minimum Volumes In Disk Group
Meaning: The disk volume usage cannot be modified, as the minimum number of disks exist in the disk group.
How to correct: Resolve the condition by adding additional disks and retry the request.
Table 27 Error Messages (continued)
95 Unknown remote DR group
Meaning: The remote Continuous Access group specified does not exist.
How to correct: Correctly select the remote Continuous Access group and retry the request.
96 PLDMC failed
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
97 Storage system could not be locked. System busy. Try command again.
Meaning: Another process has already taken the SCMI lock on the storage system.
How to correct: Retry the request later.
Table 27 Error Messages (continued)
Snapclone Active
111 EMU Load Busy
Meaning: The operation cannot be completed while the drive enclosures are undergoing code load.
How to correct: Wait several minutes for the drive enclosure code load to finish, then retry the operation.
112 Duplicate User Name
Meaning: An existing Continuous Access group already has this user name.
How to correct: Change the user name for the new Continuous Access group or delete the existing Continuous Access group with the same name.
Table 27 Error Messages (continued)
128 OCP Error
Meaning: EVA 6400/8400 only. A generic error was detected with the OCP interface.
How to correct: Ensure other OCP is on and try again. If the problem persists, report the error to product support.
129 Mirror Temporarily Offline
Meaning: The virtual disk is not mirrored to the other controller.
130 Failsafe Mode Enabled
Meaning: Cannot perform operation because FAILSAFE is enabled on Group.
How to correct: Disable Failsafe mode on Group.
Table 27 Error Messages (continued)
146 Specified Option Is Not Yet Implemented
Meaning: An unsupported code load attempt was made.
How to correct: Code load the EVA firmware with a supported method.
147 DRM Group Is Already “Present Only”
Meaning: Data replication group is already present_only.
How to correct: Disable active-active or read-only and retry operation.
148 The Presented Unit Identifier Is Invalid
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
Table 27 Error Messages (continued)
165 Too Many Port WWNs
Meaning: The system has reached the limit of client adapters, so the command attempted cannot add another.
How to correct: Remove an adapter connection before attempting the command again.
166 Port WWN Not Found
Meaning: The port WWN supplied with the command is not correct.
How to correct: Retry the command with an accurate port WWN.
167 No Virtual Disk For Presented Unit
Meaning: The virtual disk identifier supplied with the command is not correct.
Table 27 Error Messages (continued)
182 Mixed Drive Types
Meaning: The supplied list of drives contained multiple drive types.
How to correct: Correct the list such that only one type of drive is used.
183 Already On
Meaning: An attempt to enable the OCP Locate LED failed because the LED is already enabled.
184 Already Off
Meaning: An attempt to disable the OCP Locate LED failed because the LED is already disabled.
How to correct: No corrective action required.
Table 27 Error Messages (continued)
201 No Path To DR Destination
Meaning: Attempt to create a data replication group failed because of a loss of communication with the remote site.
How to correct: Verify/re-establish communication to the remote site.
202 Nonexistent Group
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
203 Invalid Asynch Log Size
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
Table 27 Error Messages (continued)
222 Fail Not Locked
Meaning: Storage Cell Not Locked. The requestor must have a valid command lock before attempting this command.
How to correct: Retry the operation later. If the error persists, report the error to product support.
223 Fail Lock Busy
Meaning: Storage Cell Lock Busy. The requestor does not have the command lock to perform this command.
How to correct: Retry the operation later. If the error persists, report the error to product support.
Table 27 Error Messages (continued)
242 Event Not Found
Meaning: The event was not found.
How to correct: Report the error to product support.
243 Unsupported Drive
Meaning: There were not enough drives to complete the operation and some unsupported drives were detected.
How to correct: Replace the unsupported drives with supported drives and retry.
9 Support and other resources Contacting HP HP technical support For worldwide technical support information, see the HP support website: http://www.hp.
• HP Software Downloads: http://www.hp.com/support/manuals • HP Software Depot: http://www.software.hp.com • HP Single Point of Connectivity Knowledge (SPOCK): http://www.hp.com/storage/spock • HP SAN manuals: http://www.hp.com/go/sdgmanuals Typographic conventions Table 28 Document conventions Convention Element Blue text: Table 28 (page 198) Cross-reference links and e-mail addresses Blue, underlined text: http://www.hp.
parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider, or see the CSR website: http://www.hp.com/go/selfrepair Rack stability Rack stability protects personnel and equipment. WARNING! To reduce the risk of personal injury or damage to equipment: • Extend leveling jacks to the floor. • Ensure that the full weight of the rack rests on the leveling jacks.
A Regulatory compliance notices Regulatory compliance identification numbers For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number.
off and on, the user is encouraged to try to correct the interference by one or more of the following measures: • Reorient or relocate the receiving antenna. • Increase the separation between the equipment and receiver. • Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected. • Consult the dealer or an experienced radio or television technician for help.
This compliance is indicated by the following conformity marking placed on the product: This marking is valid for non-Telecom products and EU harmonized Telecom products (e.g., Bluetooth). Certificates can be obtained from http://www.hp.com/go/certificates.
Class B equipment Taiwanese notices BSMI Class A notice Taiwan battery recycle statement Turkish recycling notice Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur Vietnamese Information Technology and Communications compliance marking Taiwanese notices 203
Laser compliance notices English laser notice This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation. WARNING! Use of controls or adjustments or performance of procedures other than those specified herein or in the laser product's installation guide may result in hazardous radiation exposure.
German laser notice Italian laser notice Japanese laser notice Laser compliance notices 205
Spanish laser notice Recycling notices English recycling notice Disposal of waste equipment by users in private household in the European Union This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment.
Dutch recycling notice Inzameling van afgedankte apparatuur van particuliere huishoudens in de Europese Unie Dit symbool betekent dat het product niet mag worden gedeponeerd bij het overige huishoudelijke afval. Bescherm de gezondheid en het milieu door afgedankte apparatuur in te leveren bij een hiervoor bestemd inzamelpunt voor recycling van afgedankte elektrische en elektronische apparatuur. Neem voor meer informatie contact op met uw gemeentereinigingsdienst.
Hungarian recycling notice A hulladék anyagok megsemmisítése az Európai Unió háztartásaiban Ez a szimbólum azt jelzi, hogy a készüléket nem szabad a háztartási hulladékkal együtt kidobni. Ehelyett a leselejtezett berendezéseknek az elektromos vagy elektronikus hulladék átvételére kijelölt helyen történő beszolgáltatásával megóvja az emberi egészséget és a környezetet.További információt a helyi köztisztasági vállalattól kaphat.
Portuguese recycling notice Descarte de equipamentos usados por utilizadores domésticos na União Europeia Este símbolo indica que não deve descartar o seu produto juntamente com os outros lixos domiciliares. Ao invés disso, deve proteger a saúde humana e o meio ambiente levando o seu equipamento para descarte em um ponto de recolha destinado à reciclagem de resíduos de equipamentos eléctricos e electrónicos. Para obter mais informações, contacte o seu serviço de tratamento de resíduos domésticos.
Battery replacement notices Dutch battery notice French battery notice 210 Regulatory compliance notices
German battery notice Italian battery notice Battery replacement notices 211
Japanese battery notice Spanish battery notice 212 Regulatory compliance notices
B Non-standard rack specifications The appendix provides information on the requirements when installing the P63x0/P65x0 EVA in a non-standard rack. All the requirements must be met to ensure proper operation of the storage system. Internal component envelope EVA component mounting brackets require space to be mounted behind the vertical mounting rails. Room for the mounting of the brackets includes the width of the mounting rails and needed room for any mounting hardware, such as screws, clip nuts, etc.
Weights, dimensions and component CG measurements Cabinet CG dimensions are reported as measured from the inside bottom of the cabinet (Z), the leading edge of the vertical mounting rails (Y), and the centerline of the cabinet mounting space (X). Component CG measurements are measured from the bottom of the U space the component is to occupy (Z), the mounting surface of the mounting flanges (Y), and the centerline of the component (X).
Table 29 HP UPS models and capacities

UPS Model     Capacity (in watts)
R1500         1340
R3000         2700
R5500         4500
R12000        12000

Table 30 UPS operating time limits

Minutes of operation

R1500
Load (percent)    With standby battery    With 1 ERM    With 2 ERMs
100               5                       23            49
80                6                       32            63
50                13                      57            161
20                34                      146           290

R3000
100               5                       20
80                6.
Table 31 Operating Shock/Vibration Shock test with half sine pulses of 10 G magnitude and 10 ms duration applied in all three axes (both positive and negative directions). Sine sweep vibration from 5 Hz to 500 Hz to 5 Hz at 0.1 G peak, with 0.020” displacement limitation below 10 Hz. Sweep rate of 1 octave/minute. Test performed in all three axes. Random vibration at 0.25 G rms level with uniform spectrum in the frequency range of 10 to 500 Hz. Test performed for two minutes each in all three axes.
C Command reference This chapter lists and describes the P6000 iSCSI and iSCSI/FCoE module's CLI commands in alphabetical order. Each command description includes its syntax, keywords, notes, and examples. Command syntax The HP P6000 iSCSI or iSCSI/FCoE module's CLI command syntax uses the following format: Command keyword keyword [value] keyword [value1] [value2] The command is followed by one or more keywords. Consider the following rules and conventions: • Commands and keywords are case insensitive.
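For example, because commands and keywords are case insensitive, the following two invocations are equivalent; the port number 1 is illustrative:
MEZ50 (admin) #> set iscsi 1
MEZ50 (admin) #> SET ISCSI 1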
Admin Opens and closes an administrator (admin) session. Any command that changes the iSCSI or iSCSI/FCoE module's configuration must be entered in an Admin session. An inactive Admin session times out after 15 minutes. Authority Admin session Syntax admin start (or begin) end (or stop) cancel Keywords start (or begin) Opens the Admin session. end (or stop) Closes the Admin session. The logout, shutdown, and reset commands also end an Admin session.
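A minimal sketch of a typical Admin session, assuming the CLI prompts for the admin password when the session is opened; the password and the configuration command shown are illustrative:
MEZ50 #> admin start
Password: ******
MEZ50 (admin) #> clear logs
MEZ50 (admin) #> admin end
MEZ50 #>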
Keywords logs Clears all entries from the module's log file. stats Resets the statistic counters. Examples: The following examples show the clear commands: MEZ50 <1>(admin) #> clear logs MEZ50 <1>(admin) #> clear stats Date Displays or sets the date and time. To set the date and time, you must enter the information in the format MMDDhhmmCCYY (numeric representation of month-date-hour-minute-century-year). The new date and time takes effect immediately. Each module has its own independent date set.
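A minimal sketch of setting and then displaying the clock; the date value (June 5, 2:30 PM, 2013, entered in MMDDhhmmCCYY format) is illustrative, and the displayed output format may differ:
MEZ50 <1>(admin) #> date 060514302013
MEZ50 <1>(admin) #> date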
FRU Saves and restores the module’s configuration. Authority Admin session to restore Syntax FRU restore save Keywords restore The fru restore command requires that you first FTP the tar file containing the configuration to the module. When you issue this command, the system prompts you to enter the restore level. You can fully restore the module’s configuration (all configuration parameters and LUN mappings) or restore only the LUN mappings.
CLI command iSCSI module CLI command qualifier image list image unpack [ initiator iSCSI/FCoE module CLI command qualifier image list image unpack [ ] ] [ add | mod | rm ] [ add | mod | rm ] [ add | rm ] [ add | rm ] reset [ factory | mappings ] [ factory | mappings ] save [ capture | logs | traces ] [ capture | logs | traces ] set [ alias | chap | fc | features | iscsi | isns | mgmt | ntp | properties | snmp | system ] set alias set chap set fc [ ] set isns set mgmt set ntp set
CLI command iSCSI module CLI command qualifier iSCSI/FCoE module CLI command qualifier traceroute iSCSI Server Connectivity Command Set: ======================================== lunmask [ add | rm ] show [initiators_lunmask | lunmask ] show initiators_lunmask show lunmask History Displays a numbered list of the previously entered commands.
Example 1: MEZ50_02 (admin) #> image cleanup MEZ50_02 (admin) #> image list No images found in system. Example 2: MEZ50_02 (admin) #> image list mez50-3_0_4_1.bin Only the file name is displayed as a response to this command. The software image file is placed using ftp to the iSCSI or iSCSI/FCoE module as shown in Figure 93 (page 223).
to do so. Only valid iSCSI name characters will be accepted. Valid characters include lower-case alphabetical (a-z), numerical (0-9), colon, hyphen, and period. iSCSI Initiator Name (Max = 223 characters) [ ]iqn.1995.com.microsoft:server1 OS Type (0=Windows, 1=Linux, 2=Solaris, 3=OpenVMS, 4=VMWare, 5=Mac OS X, 6=Windows2008, 7=Windows2012, 8=Other) [Windows ] 6 All attribute values that have been changed will now be saved.
Logout Exits the command line interface and returns you to the login prompt. Authority None Syntax logout Example: MEZ50 <1>(admin) #> logout (none) login: Lunmask Maps a target LUN to an initiator, and also removes mappings. The CLI prompts you to select from a list of virtual port groups, targets, LUNs, and initiators. Authority Admin session Syntax lunmask add remove Keywords add Maps a LUN to an initiator.
12 13 Please select a LUN to present to the initiator ('q' to quit): 12 All attribute values that have been changed will now be saved.
3 4 VPGROUP_3 VPGROUP_4 Multiple VpGroups are currently 'ENABLED'.
Index ----0 Initiator ----------------iqn.1991-05.com.microsoft:perf3.sanbox.com Please select an Initiator to remove ('a' to remove all, 'q' to quit): 0 All attribute values that have been changed will now be saved. Example 4: The following shows an example of the lunmask rm command with virtual port groups.
Example: MEZ50 <1>(admin) #> passwd Press 'q' and the ENTER key to abort this command. Select password to change (0=guest, 1=admin) : 1 account OLD password : ****** account NEW password (6-128 chars) : ****** please confirm account NEW password : ****** Password has been changed. Ping Verifies the connectivity of management and GE ports. This command works with both IPv4 and IPv6. Authority Admin session Syntax ping Example 1: Ping through an iSCSI data port to another iSCSI data port.
Reply Reply Reply Reply Reply from from from from from 10.6.0.194: 10.6.0.194: 10.6.0.194: 10.6.0.194: 10.6.0.194: bytes=56 bytes=56 bytes=56 bytes=56 bytes=56 time=0.1ms time=0.1ms time=0.1ms time=0.1ms time=0.1ms Ping Statistics for 10.6.0.194: Packets: Sent = 8, Received = 8, Lost = 0 Approximate round trip times in milli-seconds: Minimum = 0.1ms, Maximum = 1.3ms, Average = 0.2ms Quit Exits the command line interface and returns you to the login prompt (same as the exit command).
Save Saves logs and traces. Authority Admin session Syntax save capture logs traces Keywords capture The save capture command creates a debug file that captures all debug dump data. After the command completes, you must FTP the debug capture file from the module. logs The save logs command creates a tar file that contains the module’s log data, storing the file in the module’s /var/ftp directory. After the command completes, you must FTP the log’s tar file from the module.
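After save logs completes, retrieve the tar file from the module's /var/ftp directory with any FTP client, using the guest ftp login shown in the FTP example in “Using the iSCSI CLI”. A minimal sketch; the IP address and file name are illustrative only:
ftp 10.6.0.193
User: ftp
Password: ftp
ftp> bin
ftp> get logs.tar
ftp> quit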
Keywords alias Assigns alias name to a presented iSCSI target. See the “set alias command” (page 232) chap Sets the CHAP secrets. See the “set CHAP command” (page 233) fc [] Sets the FC port parameters. “set FC command” (page 233) features Applies license keys to the module. See the “set features command” (page 234) iscsi [] Sets the iSCSI port parameters. See the “set iSCSI command” (page 235) isns Sets the Internet simple name service (iSNS) parameters.
Set CHAP Provides for the configuration of the challenge handshake authentication protocol (CHAP). Authority Admin session Syntax set chap Example: MEZ50 <1>(admin) #> set chap A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value. If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so. Index iSCSI Name ----- ---------0 iqn.1986-03.com.hp:fcgw.MEZ50.
Link Rate (0=Auto, 1=1Gb, 2=2Gb, 4=4Gb, 8=8GB) Frame Size (0=512B, 1=1024B, 2=2048B) Execution Throttle (Min=16, Max=65535) [Auto [2048 [256 ] ] ] All attribute values for Port 2 that have been changed will now be saved. Example 2: MEZ75 (admin) #> set fc A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value.
Enter feature key to be saved/activated: Set iSCSI Configures an iSCSI port. Authority Admin session Syntax set iscsi [] Keywords [] The iSCSI port to be configured. If not entered, all ports are selected as shown in the example. Example: MEZ50 (admin) #> set iscsi A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value.
Port Status (0=Enable, 1=Disable) Port Speed (0=Auto, 1=100Mb, 2=1Gb) MTU Size (0=Normal, 1=Jumbo, 2=Other) Window Size (Min=8192B, Max=1048576B) IPv4 Address IPv4 Subnet Mask IPv4 Gateway Address IPv4 TCP Port No. (Min=1024, Max=65535) IPv4 VLAN (0=Enable, 1=Disable) IPv6 Address 1 IPv6 Address 2 IPv6 Default Router IPv6 TCP Port No. (Min=1024, Max=65535) IPv6 VLAN (0=Enable, 1=Disable) iSCSI Header Digests (0=Enable, 1=Disable) iSCSI Data Digests (0=Enable, 1=Disable) [Enabled [Auto [Normal [32768 [0.0.
MEZ50 <1>(admin) #> set mgmt A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value. If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so. WARNING: The following command might cause a loss of connections to the MGMT port.
MEZ50 (admin) #> set properties A list of attributes with formatting and current values will follow. Enter a new value or simply press the ENTER key to accept the current value. If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the ENTER key to do so. CLI Inactivty Timer (0=Disable, 1=15min, 2=60min) CLI Prompt (Max=32 Characters) [Disabled] 0 [MEZ50 ] All attribute values that have been changed will now be saved.
------------------------------------Destination enabled (0=Enable, 1=Disable) [Disabled ] Configuring SNMP Trap Destination 8 : ------------------------------------Destination enabled (0=Enable, 1=Disable) [Disabled ] All attribute values that have been changed will now be saved. Set system Configures the module's system-wide parameters. Authority Admin session Syntax set system Example 1: MEZ50 (admin) #> set system A list of attributes with formatting and current values will follow.
------------------------Status (0=Enable, 1=Disable) [Enabled ] VpGroup Name (Max = 64 characters) [VPGROUP_1 ] All attribute values for VpGroup 1 that have been Configuring VpGroup: 2 ------------------------Status (0=Enable, 1=Disable) [Disabled ] 0 VpGroup Name (Max = 64 characters) [VPGROUP_2 ] All attribute values for VpGroup 2 that have been Configuring VpGroup: 3 ------------------------Status (0=Enable, 1=Disable) [Disabled ] 0 VpGroup Name (Max = 64 characters) [VPGROUP_3 ] All attribute values for
initiators [fc or iscsi] Displays SCSI initiator information: iSCSI or FC. See the “show initiators command” (page 244) initiators_lunmask Displays initiators and the LUNs to which they are mapped. See the “show initiators LUN mask command” (page 246) iscsi [port_num] Displays iSCSI port information and configuration. See the “show iSCSI command” (page 247) isns [port_num] Displays the module’s iSCSI name server (iSNS) configuration.
Show CHAP Displays CHAP configuration for iSCSI nodes. Authority None Syntax show chap Example: MEZ50 <1>(admin) #> show chap The following is a list of iSCSI nodes that have been configured with CHAP 'ENABLED': Type iSCSI Node -------- -----------Init iqn.1991-05.com.microsoft:server1 Show FC Displays FC port information for the specified port. If you do not specify a port, this command displays all ports.
Port ID WWNN WWPN Port ID WWNN WWPN Port ID WWNN WWPN Port ID Firmware Revision No. Frame Size Execution Throttle Connection Mode 00-00-ef (VPGROUP_1) 20:01:00:c0:dd:00:00:76 21:01:00:c0:dd:00:00:76 00-00-e8 (VPGROUP_2) 20:02:00:c0:dd:00:00:76 21:02:00:c0:dd:00:00:76 00-00-e4 (VPGROUP_3) 20:03:00:c0:dd:00:00:76 21:03:00:c0:dd:00:00:76 00-00-e2 (VPGROUP_4) 5.01.03 2048 256 Loop FC Port Port Status Port Mode Link Status Current Link Rate Programmed Link Rate WWNN WWPN Port ID Firmware Revision No.
Current Link Rate Programmed Link Rate WWNN WWPN Port ID Firmware Revision No. Frame Size Execution Throttle Connection Mode 4Gb Auto 20:00:00:c0:dd:00:01:50 21:00:00:c0:dd:00:01:50 00-00-ef 5.01.03 2048 256 Loop FC Port Port Status Link Status Current Link Rate Programmed Link Rate WWNN WWPN Port ID Firmware Revision No. Frame Size Execution Throttle Connection Mode 2 Enabled Up 4Gb Auto 20:00:00:c0:dd:00:01:51 21:00:00:c0:dd:00:01:51 00-00-ef 5.01.
OS Type Windows Initiator Name Alias IP Address Status OS Type iqn.1991-05.com.microsoft:perf3.sanbox.com Initiator Name Alias IP Address Status OS Type iqn.1995-12.com.attotech:xtendsan:sanlabmac-s09 33.33.52.17, 33.33.52.16 Logged In Windows 0.0.0.
Type OS Type FCOE Windows2008 WWNN WWPN Port ID Status Type OS Type 20:00:00:00:c9:95:b5:73 10:00:00:00:c9:95:b5:73 ef-1e-01 Logged In FCOE Windows2008 WWNN WWPN Port ID Status Type OS Type 20:00:f4:ce:46:fb:0a:4b 21:00:f4:ce:46:fb:0a:4b ef-10-01 Logged In FCOE Windows WWNN WWPN Port ID Status Type OS Type 20:00:f4:ce:46:fe:62:69 10:00:f4:ce:46:fe:62:69 ef-0e-01 Logged In FCOE Windows2008 WWNN WWPN Port ID Status Type OS Type 20:00:f4:ce:46:fe:62:6d 10:00:f4:ce:46:fe:62:6d ef-0a-01 Logged In FCOE O
Please select an Initiator from the list above ('q' to quit): Target(WWPN) -----------50:01:43:80:04:c6:89:68 50:01:43:80:04:c6:89:68 50:01:43:80:04:c6:89:68 50:01:43:80:04:c6:89:68 50:01:43:80:04:c6:89:68 50:01:43:80:04:c6:89:6c 50:01:43:80:04:c6:89:6c 50:01:43:80:04:c6:89:6c 50:01:43:80:04:c6:89:6c 50:01:43:80:04:c6:89:6c 0 (LUN/VpGroup) ------------0/VPGROUP_1 9/VPGROUP_1 10/VPGROUP_1 11/VPGROUP_1 12/VPGROUP_1 0/VPGROUP_1 9/VPGROUP_1 10/VPGROUP_1 11/VPGROUP_1 12/VPGROUP_1 Example 2: MEZ50 (admin) #> s
IPv4 Address IPv4 Subnet Mask IPv4 Gateway Address IPv4 Target TCP Port No. IPv4 VLAN IPv6 Address 1 IPv6 Address 2 IPv6 Link Local IPv6 Default Router IPv6 Target TCP Port No. IPv6 VLAN iSCSI Max First Burst iSCSI Max Burst iSCSI Header Digests iSCSI Data Digests 33.33.52.96 255.255.0.0 0.0.0.
iSCSI Header Digests iSCSI Data Digests Disabled Disabled iSCSI Port Port Status Link Status iSCSI Name Firmware Revision Current Port Speed Programmed Port Speed MTU Size Window Size MAC Address IPv4 Address IPv4 Subnet Mask IPv4 Gateway Address IPv4 Target TCP Port No. IPv4 VLAN IPv6 Address 1 IPv6 Address 2 IPv6 Link Local IPv6 Default Router IPv6 Target TCP Port No. IPv6 VLAN iSCSI Max First Burst iSCSI Max Burst iSCSI Header Digests iSCSI Data Digests GE4 Enabled Up iqn.2004-09.com.hp:fcgw.mez50.1.
Example: MEZ75 (admin) #> show logs 03/11/2011 22:18:42 UserApp 3 User has cleared the logs 03/11/2011 22:29:23 UserApp 3 qapisetpresentedtargetchapinfo_1_svc: Chap Configuration Changed 03/11/2011 22:31:22 UserApp 3 #1: qapisetfcinterfaceparams_1_svc: FC port configuration changed 03/11/2011 22:31:25 UserApp 3 #2: qapisetfcinterfaceparams_1_svc: FC port configuration changed 03/11/2011 22:31:26 UserApp 3 #3: qapisetfcinterfaceparams_1_svc: FC port configuration changed 03/11/2011 22:31:28 UserApp
Please select a LUN from the list above ('q' to quit): LUN Information ----------------WWULN LUN Number VendorId ProductId ProdRevLevel Portal Lun Size Lun State 10 60:05:08:b4:00:0f:1d:4f:00:01:50:00:00:cf:00:00 10 HP HSV340 0005 0 22528 MB Online LUN Path Information -------------------Controller Id ------------1 2 WWPN,PortId / IQN,IP --------------------------------50:01:43:80:04:c6:89:68, 00-00-aa 50:01:43:80:04:c6:89:6c, 00-00-b1 Path Status ----------Current Optimized Active Show LUNs Displays
VPGROUP_3 VPGROUP_4 0 0 Show lunmask Displays all initiators mapped to a user-specified LUN.
Buffer Pool Nic Buffer Pool Process Blocks Request Blocks Event Blocks Control Blocks 1K Buffer Pool 4K Buffer Pool Sessions 9812/9856 53427/81920 8181/8192 8181/8192 4096/4096 1024/1024 4096/4096 512/512 4096/4096 Connections: 10GE1 10GE2 2048/2048 2048/2048 Show mgmt Displays the module’s management port (10/100) configuration.
Show perf Displays the port, read, write, initiator, or target performance in bytes per second. Authority None Syntax show perf [byte | init_rbyte | init_wbyte | tgt_rbyte | tgt_wbyte ] Keywords byte Displays performance data (bytes per second) for all ports. init_rbyte Displays initiator mode read performance. init_wbyte Displays initiator mode write performance. tgt_rbyte Displays target mode read performance. tgt_wbyte Displays target mode write performance.
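For example, to display throughput for all ports and then target-mode read throughput; output varies with I/O load and is omitted here:
MEZ50 #> show perf byte
MEZ50 #> show perf tgt_rbyte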
Show presented targets Displays targets presented by the module's FC, FCoE, or iSCSI or for all. Authority None Syntax show presented targets fc iscsi Keywords fc Specifies the display of FC presented targets. iscsi Specifies the display of iSCSI presented targets. Example 1: MEZ50 (admin) #> show presented_targets Presented Target Information -----------------------------iSCSI Presented Targets ------------------------Name iqn.2004-09.com.hp:fcgw.mez50.1.01.
Port Type WWNN WWPN VPGroup FC3 FCOE WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:05:f4:ce:46:fb:0a:44 21:05:f4:ce:46:fb:0a:44 ef-09-03 FC4 FCOE WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:06:f4:ce:46:fb:0a:43 21:06:f4:ce:46:fb:0a:43 ef-0d-04 FC3 FCOE WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:06:f4:ce:46:fb:0a:44 21:06:f4:ce:46:fb:0a:44 ef-09-04 FC4 FCOE WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:09:f4:ce:46:fb:0a:43 21:09:f4:c
VPGroup WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 4 20:0b:f4:ce:46:fb:0a:44 21:0b:f4:ce:46:fb:0a:44 ef-09-06 FC4 FCOE 50:01:43:80:04:c6:89:60 50:01:43:80:04:c6:89:68 4 WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:07:f4:ce:46:fb:0a:43 21:07:f4:ce:46:fb:0a:43 ef-0d-07 FC3 FCOE WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:07:f4:ce:46:fb:0a:44 21:07:f4:ce:46:fb:0a:44 ef-09-07 FC4 FCOE WWNN WWPN Port ID Port Type WWNN WWPN VPGroup 20:0a:f4:ce:46:fb:0a:43
Port Type WWNN WWPN VPGroup FC4 FCOE 50:01:43:80:04:c6:89:60 50:01:43:80:04:c6:89:6c 4 iSCSI Presented Targets ------------------------Name iqn.2004-09.com.hp:fcgw.mez75.1.01.5001438004c68968 Alias WWNN 50:01:43:80:04:c6:89:60 WWPN 50:01:43:80:04:c6:89:68 VPGroup 1 Name iqn.2004-09.com.hp:fcgw.mez75.1.01.5001438004c6896c Alias foo2 WWNN 50:01:43:80:04:c6:89:60 WWPN 50:01:43:80:04:c6:89:6c VPGroup 1 Name iqn.2004-09.com.hp:fcgw.mez75.1.02.
MEZ75 (admin) #> show properties CLI Properties ---------------Inactivty Timer Prompt String Disabled MEZ75 Show SNMP Displays the module’s simple network management protocol (SNMP) and any configured traps. Authority None Syntax show snmp Example: MEZ75 (admin) #> show snmp SNMP Configuration -----------------Read Community Trap Community System Location System Contact Authentication traps System OID System Description public private Disabled 1.3.6.1.4.1.3873.1.
FC Port Interrupt Count Target Command Count Initiator Command Count Link Failure Count Loss of Sync Count Loss of Signal Count Primitive Sequence Error Count Invalid Transmission Word Count Invalid CRC Error Count FC3 292953354 129313203 0 0 0 0 0 0 0 FC Port Interrupt Count Target Command Count Initiator Command Count Link Failure Count Loss of Sync Count Loss of Signal Count Primitive Sequence Error Count Invalid Transmission Word Count Invalid CRC Error Count FC4 268764874 121869815 0 0 0 0 0 0 0 iS
Unexpected I/O Rcvd iSCSI Format Errors Header Digest Errors Data Digest Errors Sequence Errors IP Xmit Packets IP Xmit Byte Count IP Xmit Fragments IP Rcvd Packets IP Rcvd Byte Count IP Rcvd Fragments IP Datagram Reassembly Count IP Error Packets IP Fragment Rcvd Overlap IP Fragment Rcvd Out of Order IP Datagram Reassembly Timeouts TCP Xmit Segment Count TCP Xmit Byte Count TCP Rcvd Segment Count TCP Rcvd Byte Count TCP Persist Timer Expirations TCP Rxmit Timer Expired TCP Rcvd Duplicate Acks TCP Rcvd Pure
System Information -------------------Product Name Symbolic Name Controller Slot Target Presentation Mode Controller Lun AutoMap Target Access Control Serial Number HW Version SW Version Boot Loader Version No. of FC Ports No. of iSCSI Ports Log Level Telnet SSH FTP Temp (C) Uptime HP StorageWorks MEZ75 MEZ75-1 Left Auto Enabled Disabled PBGXEA1GLYG016 01 3.2.2.6 10.1.1.
VpGroup Information --------------------Index VpGroup Name Status WWPNs 1 VPGROUP_1 Enabled 21:00:00:c0:dd:00:00:75 21:00:00:c0:dd:00:00:76 Index VpGroup Name Status WWPNs 2 VPGROUP_2 Enabled 21:01:00:c0:dd:00:00:75 21:01:00:c0:dd:00:00:76 Index VpGroup Name Status WWPNs 3 VPGROUP_3 Enabled 21:02:00:c0:dd:00:00:75 21:02:00:c0:dd:00:00:76 Index VpGroup Name Status WWPNs 4 VPGROUP_4 Enabled 21:03:00:c0:dd:00:00:75 21:03:00:c0:dd:00:00:76 Example 2: The iSCSI module does not presently support VPgroups.
because the targets are auto-detected and the information displayed by show targets can be a helpful debugging aid. Authority Admin session Syntax target add rm Keywords rm Removes a target from the module’s target database. Example: MEZ75 (admin) #> target rm Warning: This command will cause the removal of all mappings and maskings associated with the target that is selected.
D Using the iSCSI CLI
The CLI enables you to perform a variety of iSCSI or iSCSI/FCoE module management tasks through an Ethernet or serial port connection. However, HP P6000 Command View should be the primary management tool for the iSCSI and iSCSI/FCoE modules. The CLI is a supplemental interface.
Logging on to an iSCSI or iSCSI/FCoE module
You can use either Telnet or Secure Shell (SSH) to log on to a module, or you can log on to the module through the serial port.
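A minimal sketch of a network logon, assuming the module's management port is reachable; the IP address and password are illustrative, and SSH follows the same pattern (for example, ssh admin@10.6.0.193):
telnet 10.6.0.193
(none) login: admin
Password: ******
MEZ50 #>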
System Information -------------------Product Name Symbolic Name System Mode Controller Slot Controller Lun AutoMap Target Access Control Serial Number HW Version SW Version Boot Loader Version No. of FC Ports No. of iSCSI Ports Telnet SSH Temp (C) MEZ50 (admin) #> HP StorageWorks MEZ50 MEZ50-1 iSCSI Server Connectivity Left Enabled Disabled 1808ZJ03297 01 3.0.3.9 1.1.1.
Modifying a configuration The module has the following major areas of configuration: • • • • Management port configuration requires the use of the following commands: ◦ The “set mgmt command” (page 236) ◦ The “show mgmt command” (page 253) iSCSI port configuration requires using the following commands: ◦ The “set iSCSI command” (page 235) ◦ The “show iSCSI command” (page 247) Virtual port groups configuration requires the following commands: ◦ The “set VPGroups command” (page 239) ◦ The “s
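Whichever area is being changed, the workflow is the same: open an Admin session, apply the change with the set command for that area, and confirm it with the matching show command. A minimal sketch using the management port as the example area; the sequence is illustrative:
MEZ50 #> admin start
Password: ******
MEZ50 (admin) #> set mgmt
MEZ50 (admin) #> show mgmt
MEZ50 (admin) #> admin end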
220 (none) FTP server (GNU inetutils 1.4.2) ready.
User (172.17.137.102:(none)): ftp
331 Guest login ok, type your name as password.
Password: ftp
230 Guest login ok, access restrictions apply.
ftp> bin
200 Type set to I.

NOTE: Use of the CLI fru save command does not capture all required P6000 information, and a fru restore is likely to result in HP P6000 Command View inconsistencies that prevent normal operations. Use HP P6000 Command View for all normal save and restore operations.
E Simple Network Management Protocol Simple network management protocol (SNMP) provides monitoring and trap functions for managing the module through third-party applications that support SNMP. The module firmware supports SNMP versions 1 and 2 and a QLogic management information base (MIB) (see “Management Information Base ” (page 270)). You may format traps using SNMP version 1 or 2. SNMP parameters You can set the SNMP parameters using the CLI.
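For example, SNMP settings such as the read community and trap destinations are entered through the set snmp command (page 238). A minimal sketch, assuming an open Admin session; the show snmp keyword is assumed from the set/show pattern used elsewhere in the CLI:

MEZ75 (admin) #> set snmp           Configure SNMP parameters interactively
MEZ75 (admin) #> show snmp          Display the current SNMP parameters (keyword assumed)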
Management Information Base
This section describes the QLogic management information base (MIB).
Network port table
The network port table contains a list of network ports that are operational on the module. The entries in this table include the management port (labeled MGMT) and the Gigabit Ethernet ports (labeled GE1 and GE2).
qsrNwPortTable
Syntax       SEQUENCE OF QsrNwPortEntry
Access       Not accessible
Description  Entries in this table include the management port and the iSCSI ports on the module.
qsrNwPortAddressMode
Syntax       INTEGER: 1 = Static, 2 = DHCP, 3 = Bootp, 4 = RARP
Access       Read-only
Description  Method by which the port gets its IP address.
qsrIPAddressType
Syntax       InetAddressType
Access       Read-only
Description  IP address type: ipv4 or ipv6.
qsrIPAddress
Syntax       InetAddress
Access       Read-only
Description  IP address of the port.
qsrNetMask
Syntax       InetAddress
Access       Read-only
Description  Subnet mask for this port.
qsrNwLinkRate
Syntax       QsrLinkRate
Access       Read-only
Description  Operational link rate for this port.
FC port table
This table contains a list of the Fibre Channel (FC) ports on the module. There are as many entries in this table as there are FC ports on the module.
qsrFcPortTable
Syntax       SEQUENCE OF QsrFcPortEntry
Access       Not accessible
Description  A list of the FC ports on the module. The table contains as many entries as there are FC ports on the module.
qsrFcPortNodeWwn
Syntax       PhysAddress
Access       Read-only
Description  World wide name of the node that contains this port.
qsrFcPortWwn
Syntax       PhysAddress
Access       Read-only
Description  World wide name for this port.
qsrFcPortId
Syntax       PhysAddress
Access       Read-only
Description  The interface's 24-bit FC address identifier.
qsrFcPortType
Syntax       Unsigned32
Access       Read-only
Description  Type of FC port, as indicated by the use of the appropriate value assigned by IANA.
qsrIsInitEntry
Syntax       QsrIsInitEntry
Access       Not accessible
Description  Each entry (row) contains information about a specific iSCSI initiator.
qsrIsInitStatus
Syntax       INTEGER: 1 = unknown, 2 = loggedIn, 3 = loggedOut, 4 = recovery
Access       Read-only
Description  Status of the iSCSI initiator; that is, whether or not it is logged in to the module.
qsrIsInitOsType
Syntax       SnmpAdminString
Access       Read-only
Description  The type of the iSCSI initiator's operating system.
qsrIsInitChapEnabled
Syntax       INTEGER: 0 = enabled; 2 = disabled
Access       Read-only
Description  A value indicating whether CHAP is enabled or not for this iSCSI initiator.
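With the QLogic MIB loaded, these per-initiator objects can be read with standard net-snmp tools. A minimal sketch; the MIB module name, community string, and module address are placeholders:

snmpwalk -v 2c -c public -M +. -m +QLOGIC-MIB 192.0.2.50 qsrIsInitStatus

Each row returned corresponds to one entry in the initiator table, with the status values decoded as listed above.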
(LUN table entry fields, continued)
qsrLunVPGroupid    INTEGER
qsrLunVPGroupname  SnmpAdminString
qsrLunWwuln
Syntax       PhysAddress
Access       Read-only
Description  The worldwide unique LUN name (WWULN) for the LUN.
qsrLunVendorId
Syntax       SnmpAdminString
Access       Read-only
Description  Vendor ID for the LUN.
qsrLunVPGroupname OBJECT-TYPE
Syntax       SnmpAdminString
Access       Read-only
Description  VP group name to which this LUN belongs.
VP group table
This table contains a list of virtual port groups (VPGs). There are four entries in this table at any point in time.
Access       Read-only
Description  VP group name or host group name.
qsrVPGroupWWNN
Syntax       VpGroupWwnnAndWwpn
Access       Read-only
Description  Worldwide port number (WWPN).
qsrVPGroupStatus OBJECT-TYPE
Syntax       INTEGER: 0 = enabled; 1 = disabled
Access       Read-only
Description  Status of the VP group (enabled or disabled).
Sensor table
The sensor table lists all the sensors on the module. Each table row specifies a single sensor.
qsrSensorIndex
Syntax       Unsigned32
Access       Not accessible
Description  A positive integer identifying each sensor of a given type.
qsrSensorUnits
Syntax       INTEGER: 1 = Celsius
Access       Read-only
Description  Unit of measurement for the sensor.
qsrSensorValue
Syntax       Integer32
Access       Read-only
Description  Current value of the sensor.
qsrUpperThreshold
Syntax       Integer32
Access       Read-only
Description  Upper-level threshold for this sensor.
System information objects
System information objects provide the system serial number, version numbers (hardware/software/agent), and the number of ports (FC/GE).
qsrSerialNumber
Syntax       SnmpAdminString
Access       Read-only
Description  System serial number.
qsrHwVersion
Syntax       SnmpAdminString
Access       Read-only
Description  System hardware version number.
qsrSwVersion
Syntax       SnmpAdminString
Access       Read-only
Description  System software (firmware) version number.
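As an illustration, the scalar system objects can be fetched in one request with net-snmp. The MIB module name, community string, module address, and the .0 instance suffix below are all assumptions:

snmpget -v 2c -c public -M +. -m +QLOGIC-MIB 192.0.2.50 qsrSerialNumber.0 qsrSwVersion.0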
qsrEventSeverity
Access       Accessible for notify
Description  Indicates the severity of the event. The value clear specifies that a condition that caused an earlier trap is no longer present.
qsrEventDescription
Syntax       SnmpAdminString
Access       Accessible for notify
Description  A textual description of the event that occurred.
qsrEventTimeStamp
Syntax       DateAndTime
Access       Accessible for notify
Description  Indicates when the event occurred.
FC notifications are sent for the following events:
• Fibre Channel port: down or up
The notification identifies whether the link is down or up and the port number (1–4).
Target device discovery
The Fibre Channel target device discovery notification indicates that the specified Fibre Channel target is online or offline.
VP group notifications are sent for the following events:
• Change in name of a VP group
• Enabling and disabling a VP group
Sensor notification
The sensor notification indicates that the state of the specified sensor is not normal. When the sensor returns to the normal state, this event is sent with the qsrEventSeverity object set to clear.
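To receive these notifications, a management station must listen for traps on UDP port 162 with the community string configured on the module. A minimal sketch using net-snmp's snmptrapd; the community value is a placeholder:

# /etc/snmp/snmptrapd.conf: accept and log traps sent with the placeholder community
authCommunity log public

Running snmptrapd -f -Lo then starts the receiver in the foreground and prints each trap, including the qsrEventSeverity, qsrEventDescription, and qsrEventTimeStamp objects described above.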
F iSCSI and iSCSI/FCoE module log messages
This appendix provides details about messages logged to a file. The message log is persistent because it is maintained across module power cycles and reboots. Information in Table 35 (page 284) is organized as follows:
• The ID column specifies the message identification numbers in ascending order.
• The Log Message column indicates the message text displayed in the iSCSI or iSCSI/FCoE module's CLI.
Table 35 iSCSI or iSCSI/FCoE module log messages (continued)
ID | Log Message | Module | Severity | Description
41067 | QLBA_CreateLunObject: LunObject memory unavailable | App | Error | Memory unavailable for LUN object.
41077 | QLBA_CreateInitiatorObject: Too many initiators | App | Error | Unable to create an object for initiator object; exceeded the maximum number of initiators.
41096 | QLBA_DisplayTargetOperationStatus: PCI Error, Status 0x%. | App
41353 | QLIS_LoginPduContinue: Session does not exist, invalid TSIH 0x%x | App | Error | iSCSI login rejected due to a CHAP authentication error.
41354 | QLIS_LoginPduContinue: Unexpected CHAP key detected | App | Error | iSCSI login rejected due to a CHAP key error.
42002 | QLFC_Login: Can't open connection | App | Error | Attempting login but FC connection cannot be opened.
42024 | QLFC_Logout: No active path to device. WWPN: %.2X%.2X%.2X%.2X%.2X%.2X%.2X%.2X | App | Error | Attempting logout of device for which there is no active path (WWPN not found).
42027 | QLFC_Logout: VP Index 0x%x not configured | App | Error | Logout attempted using FC VP index that has not been configured.
53584 | QLIS_LoginPduContinue: [0x%x] SES_STATE_LOGGED_IN NORMAL | App | Info | iSCSI session full feature login.
53585 | QLIS_LoginPduContinue: [0x%x] SES_STATE_LOGGED_IN DISCOVERY | App | Info | iSCSI session discovery login.
53586 | QLIS_LoginPduContinue: Initiator: %s | App | Info | iSCSI login of Initiator: %s.
53587 | QLIS_LoginPduContinue: Target: %s | App | Info | iSCSI login of Target: %s.
69653 | #%d: qlutm_init: Diagnostic failed, fail reboot | iSCSI | Fatal | iSCSI processor failed diagnostic reboot.
69654 | #%d: qlutm_init: Diagnostic failed, invalid NVRAM | iSCSI | Fatal | iSCSI processor failed NVRAM diagnostic.
69655 | #%d: qlutm_init: Diagnostic failed, invalid DRAM | iSCSI | Fatal | iSCSI processor failed DRAM diagnostic.
70563 | #%d: QLRebootTimer: Reboot failed! | iSCSI | Fatal | iSCSI driver missed iSCSI processor heartbeat. iSCSI processor rebooted.
70564 | #%d: QLRebootTimer: Reboot failed! | iSCSI | Fatal | iSCSI processor failed to complete operation before timeout.
70609 | #%d: QLRebootTimer: Reboot failed! | iSCSI | Fatal | iSCSI processor system error restart.
70610 | #%d: QLProcessSystemError: RebootHba failed | iSCSI | Fatal | iSCSI processor reboot failed.
74656 | #%d: QLReadyTimer: Adapter missed heartbeat for %d seconds. Time left %d | iSCSI | Error | Driver failed to receive a heartbeat from the iSCSI processor for the specified number of seconds.
74659 | #%d: QLReadyTimer: Adapter missed heartbeat for 0x%x seconds | iSCSI | Error | iSCSI processor (adapter) failed to provide a heartbeat for x seconds.
102422 | #%d: qlutm_init: Diagnostic failed, port 2 POST failed | FC | Fatal | FC2 processor POST failed.
102423 | #%d: qlutm_init: Failed to return diagnostic result to Bridge | FC | Fatal | FC processor failed to return diagnostic results.
102656 | #%d: QLInitializeAdapter: Reset ISP failed | FC | Fatal | FC processor failed reset.
102657 | #%d: QLInitializeAdapter: Load RISC code failed | FC | Fatal | FC processor firmware load failed.
106592 | #%d: QLIoctlRunDiag: Diagnostic loopback command failed %x % %x %x | FC | Error | FC processor failed the external loopback test.
106593 | #%d: QLIoctlDisable: Re-initialize adapter failed | FC | Error | FC processor failed to re-initialize in response to an IOCTL disable request.
106803 | #%d: QLIsrEventHandler: Link down (%x) | FC | Error | FC processor reported a link down condition.
108049 | #%d: QLVerifyMenloFw: EXECUTE_COMMAND_IOCB failed MB0 %x MB1 %x | FC | Error | FC controller reported failure status for an Execute IOCB (input/output control block) command.
108050 | #%d: QLVerifyMenloFw: EXECUTE_COMMAND_IOCB fatal error | FC | Error | FC controller reported a fatal error while processing an Execute IOCB command.
139282 | QBRPC_Initialize: GetStats Mem Allocation error | User | Error | Failed memory allocation for Get Statistics API.
139283 | QBRPC_Initialize: InitListMem Allocation error | User | Error | Failed memory allocation for Get Initiator List API.
139284 | QBRPC_Initialize: TargetList Mem Allocation error | User | Error | Failed memory allocation for Get Target List API.
151890 | #%d: qapisetiscsiinterfaceparams_1_svc: iSCSI port configuration changed | User | Info | iSCSI port configuration has changed.
151891 | #%d: qapisetisns_1_svc: iSNS configuration changed | User | Info | iSNS configuration has changed.
151892 | qapisetntpparams_1_svc: NTP configuration changed | User | Info | NTP configuration has changed.
151893 | #%d: qapisetvlanparams_1_svc: VLAN configuration changed | User | Info | VLAN configuration has changed.
152133 | sysTempMon: Power for Left PCM Plugged-in | User | Info | Left PCM is connected to AC power.
152134 | sysTempMon: Power for Left PCM Un-plugged | User | Info | Left PCM is not connected to AC power (unplugged).
152135 | sysTempMon: Power for Right PCM Plugged-in | User | Info | Right PCM is connected to AC power.
152136 | sysTempMon: Power for Right PCM Un-plugged | User | Info | Right PCM is not connected to AC power (unplugged).
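Because the log is line-oriented, output captured from the CLI can be filtered mechanically, for example to pull out only Error and Fatal entries. A minimal Python sketch, assuming each captured line carries the message ID, module, and severity in the order used by Table 35; the exact firmware layout may differ, so adjust the pattern as needed:

import re

# Assumed line layout: "<id> <module> <severity> <message text>"; illustrative only.
LOG_LINE = re.compile(
    r"^\s*(?P<id>\d+)\s+(?P<module>App|iSCSI|FC|User)\s+"
    r"(?P<severity>Info|Error|Fatal)\s+(?P<text>.+)$"
)

def errors_only(lines):
    """Yield (id, severity, text) for Error and Fatal entries."""
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("severity") in ("Error", "Fatal"):
            yield m.group("id"), m.group("severity"), m.group("text")

sample = [
    "53586 App Info QLIS_LoginPduContinue: Initiator: iqn.example",
    "41067 App Error QLBA_CreateLunObject: LunObject memory unavailable",
]
for record in errors_only(sample):
    print(record)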
Glossary This glossary defines terms used in this guide or related to this product and is not a comprehensive glossary of computer terms. Symbols and numbers 3U A unit of measurement representing three “U” spaces. “U” spacing is used to designate panel or enclosure heights. Three “U” spaces is equivalent to 5.25 inches (133 mm). See also rack-mounting unit. µm A symbol for micrometer; one millionth of a meter. For example, 50 µm is equivalent to 0.000050 m.
asynchronous Events scheduled as the result of a signal requesting the event or that which is without any specified time relation. B backplane An electronic printed circuit board that distributes data, control, power, and other signals among components within an enclosure. bad block A data block that contains a physical defect. bad block replacement A replacement routine that substitutes defect-free disk blocks for those found to have defects.
console LUN A SCSI-3 virtual object that makes a controller pair accessible by the host before any virtual disks are created. Also called a communication LUN. console LUN ID The ID that can be assigned when a host operating system requires a unique ID. The console LUN ID is assigned by the user, usually when the storage system is initialized. container Virtual disk space that is preallocated for later use as a snapclone, snapshot, or mirrorclone.
disk group A named group of disks selected from all the available disks in a disk array. One or more virtual disks can be created from a disk group. Also refers to the physical disk locations associated with a parity group. disk migration state A physical disk drive operating state. A physical disk drive can be in a stable or migration state: • Stable—The state in which the physical disk drive has no failure nor is a failure predicted.
Enclosure Services Interface See ESI. Enclosure Services Processor See ESP. Enterprise Virtual Array The Enterprise Virtual Array is a product that consists of one or more storage systems. Each storage system consists of a pair of HSV controllers and the disk drives they manage. A storage system within the Enterprise Virtual Array can be formally referred to as an Enterprise storage system, or generically referred to as the storage system. environmental monitoring unit See EMU.
fiber The optical media used to implement Fibre Channel. fiber optic cable A transmission medium designed to transmit digital signals in the form of pulses of light. Fiber optic cable is noted for its properties of electrical isolation and resistance to electrostatic contamination. fiber optics The technology where light is transmitted through glass or plastic (optical) threads (fibers) for data communication or signaling purposes.
host-side ports See host port. hot-pluggable The ability to add and remove elements or devices to a system or appliance while the appliance is running and have the operating system automatically recognize the change. hub A communications infrastructure device to which nodes on a multi-point bus or loop are physically connected. It is used to improve the manageability of physical cables. I I/O module Input/Output module.
loop See arbitrated loop.
loop ID Seven-bit values numbered contiguously from 0 to 126 decimal that represent the 127 valid AL_PA values on a loop (not all 256 hexadecimal values are allowed as AL_PA values per Fibre Channel).
loop pair A Fibre Channel attachment between a controller and physical disk drives. Physical disk drives connect to controllers through paired Fibre Channel arbitrated loops. There are two loop pairs, designated loop pair 1 and loop pair 2.
Network Storage Controller See NSC. node port A device port that can operate on the arbitrated loop topology. non-OFC (Open Fibre Control) A laser transceiver whose lower-intensity output does not require special open Fibre Channel mechanisms for eye protection. The Enterprise storage system transceivers are non-OFC compatible.
port A physical connection that allows data to pass between a host and a disk array. port-colored Pertaining to the application of the color of port or red wine to a CRU tab, lever, or handle to identify the unit as hot-pluggable. port_name A 64-bit unique identifier assigned to each Fibre Channel port. The port_name is communicated during the login and port discovery processes. power distribution module See PDM. power distribution unit See PDU.
redundant power configuration A capability of the Enterprise storage system racks and enclosures to allow continuous system operation by preventing single points of power failure. • For a rack, two AC power sources and two power conditioning units distribute primary and redundant AC power to enclosure power supplies. • For a controller or drive enclosure, two power supplies ensure that the DC power is available even when there is a failure of one supply, one AC source, or one power conditioning unit.
T
TB Terabyte. A term defining either:
• A data transfer rate.
• A measure of either storage or memory capacity of 1,099,511,627,776 (2⁴⁰) bytes.
See also TBps.
TBps Terabytes per second. A data transfer rate of 1,000,000,000,000 (10¹²) bytes per second.
TC Termination Code. An Enterprise Storage System controller 8-character hexadecimal display that defines a problem causing controller operations to halt.
Termination Code See TC.
Vraid6 Offers the features of Vraid5 while providing more protection for an additional drive failure, but uses additional physical disk space. W World Wide Name See WWN. write back caching A controller process that notifies the host that the write operation is complete when the data is written to the cache. This occurs before transferring the data to the disk. Write back caching improves response time since the write operation completes as soon as the data reaches the cache.