HP 4400 Enterprise Virtual Array User Guide

Abstract
This document describes the HP 4400 Enterprise Virtual Array (EVA4400) and provides information about operating the EVA4400. It is intended for users who install, operate, and manage EVA4400 storage systems.
© Copyright 2008, 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents
1 EVA4400 hardware...................................................................................9
Physical layout of the storage system...........................................................................................9
M6412 disk enclosures............................................................................................................10
Enclosure layout.................................................................................................................
Connecting through a public network...................................................................................34
Connecting through a private network..................................................................................35
Changing the default operating mode.......................................................................................36
Accessing the HP P6000 Control Panel through HP P6000 Command View...................................37
Saving storage system configuration data.
Verifying connectivity.....................................................................................................60
Verifying virtual disks from the host.......................................................................................60
HP EVA P6000 Software Plug-in for VMware VAAI.................................................................
HBA configuration..............................................................................................................84
Risks................................................................................................................................85
Limitations.........................................................................................................................85
Xen configuration..............................................................................................
FCC rating label..............................................................................................................110
Class A equipment......................................................................................................110
Class B equipment......................................................................................................110
Declaration of Conformity for products marked with the FCC logo, United States only...............111
Modification......................
German battery notice......................................................................................................121
Italian battery notice........................................................................................................121
Japanese battery notice....................................................................................................122
Spanish battery notice......................................................................................................
1 EVA4400 hardware
The EVA4400 contains the following hardware components:
• EVA controller enclosure—Contains power supplies, cache batteries, fans, and HSV controllers.
• Fibre Channel disk enclosure—Contains disk drives, power supplies, fans, midplane, and I/O modules.
• Fibre Channel Arbitrated Loop cables—Provide connectivity to the EVA controller enclosure and the Fibre Channel disk enclosures.
• Rack—Several free-standing racks are available.
M6412 disk enclosures
The M6412 disk enclosure contains the disk drives used for data storage; a storage system contains multiple disk enclosures. The major components of the enclosure are:
• 12-bay enclosure
• Dual-loop, Fibre Channel disk enclosure I/O modules
• Copper Fibre Channel cables
• Fibre Channel disk drives and drive blanks
• Power supplies
• Fan modules
NOTE: An EVA4400 requires a minimum of one disk shelf with eight disk drives.
Figure 4 Disk enclosure (rear view)
1. Power supply 1
2. Power supply 1 status LED
3. Fan 1
4. Enclosure product number and serial number
5. Fan 1 status LED
6. I/O module A
7. I/O module B
8. Rear UID push button
9. Enclosure status LEDs
10. Fan 2
11. Power push button
12. Power supply 2

I/O modules
Two I/O modules provide the interface between the disk enclosure and the host controllers (see Figure 5 (page 11)). For redundancy, only dual-controller, dual-loop operation is supported.
Table 2 I/O module status LEDs
Locate LED:
• Flashing blue—Remotely asserted by application client.
Module health indicator:
• Flashing green—I/O module powering up.
• Solid green—Normal operation.
• Green off—Firmware malfunction.
Fault indicator:
• Flashing amber—Warning condition (not visible when solid amber is showing).
• Solid amber—Replace FRU.
• Amber off—Normal operation.
Copper Fibre Channel cables provide performance comparable to fiber optic cables. Copper cable connectors differ from fiber optic small form-factor connectors (see Figure 7 (page 13)).
Controller enclosures The EVA4400 contains either the HSV300 or HSV300-S controller enclosure. Two interconnected controllers ensure that the failure of a controller component does not disable the system. A single controller can fully support an entire system until the defective controller, or controller component, is repaired. A single enclosure contains two controllers.
Figure 11 HSV300 controller enclosure (back view)
1. Power supply 1
2. HSV300 controller 1
3. Management module status LEDs
4. Ethernet port
5. Management module
6. HSV300 controller 2
7. Rear UID push button
8.
9. Enclosure power push button
10. Power supply 2
11. Host ports, FP1, FP2, connection to front end (host or SAN)
12. DP1-A port, back-end connection to A loop
13. DP1-B port, back-end connection to B loop
14. Manufacturing diagnostic port
15. HSV300 controller status and fault LEDs
Table 4 (page 16) describes the port LED indicators for the management module Ethernet port (callouts 3 and 4 in Figure 11 (page 15) and Figure 12 (page 15)).
Table 4 Management module Ethernet port LED indicators
Green (left), link state indicator:
• Off—No link detected.
• Solid green—Link detected.
Amber, activity indicator:
• Off—No activity.
• Blinking amber—Normal activity.
Table 7 Embedded switch management Ethernet port LED indicators
Green (right), port speed indicator:
• Off—Port speed is 10 Mb/s and 100 Mb/s.
• Solid green—No link detected.
Amber (left), link state or activity indicator:
• Solid amber—No link detected.
• Blinking amber—Link detected.
HSV300 controller status LEDs
Figure 13 (page 17) shows the location of the controller status LEDs; Table 8 (page 17) describes them.
Figure 14 Power supply
1. Power supply
2. AC input connector
3. Latch
4. Status indicator (green—Normal operation; amber—Failure or no power)
5. Handle

Fan module
Fan modules provide the cooling necessary to maintain the proper operating temperature within the controller enclosure. If one fan fails, the remaining fan is capable of cooling the enclosure.

Figure 15 Fan module pulled out
1. Green—Fan normal operation LED
2.
Figure 16 Battery module pulled out
1. Green—Normal operation LED
2. Amber—Fault LED

Each battery module provides power to the controller directly across from it in the enclosure.

Table 10 Battery status indicators
Green (status indicator):
• Solid green—Normal operation.
• Blinking—Maintenance in progress.
• Off—Amber is on or blinking, or the enclosure is powered down.
Amber (fault indicator):
• Solid amber—Battery failure; no cache hold-up. Green will be off.
The rack provides the capability for mounting standard 483 mm (19 inch) wide controller and disk enclosures.
NOTE: Racks and rack-mountable components are typically described using U measurements. U measurements are used to designate panel or enclosure heights. The U measurement is a standard of 44.45 mm (1.75 inches).
The racks provide the following:
• Unique frame and rail design—Allows fast assembly, easy mounting, and outstanding structural integrity.
NOTE: This section describes 30-A, single-phase power. You can order other voltage, amperage, and phase configurations if you have a different power infrastructure.
• NEMA L6-30R receptacle, 3-wire, 30-A, 60-Hz
• NEMA L5-30R receptacle, 3-wire, 30-A, 60-Hz
• IEC 309 receptacle, 3-wire, 30-A, 50-Hz
• The standard power configuration for any HP Enterprise Virtual Array rack is the fully redundant configuration.
PDUs Each Enterprise Virtual Array rack has either a 50- or 60-Hz, dual PDU mounted at the bottom rear of the rack. The PDU placement is back-to-back, plugs facing toward the front (Figure 17 (page 22)), with circuit breaker switches facing the back (Figure 18 (page 22)). • The standard 50-Hz PDU cable has an IEC 309, 3-wire, 30-A, 50-Hz connector. • The standard 60-Hz PDU cable has a NEMA L6-30P, 3-wire, 30-A, 60-Hz connector.
A PDU A failure:
• Disables the power distribution circuit
• Removes power from the left side of the rack
• Disables disk enclosure PS 1
• Disables controller PS 1
PDU B
PDU B connects to AC PDM B1–B4.
Rack AC power distribution The power distribution in an Enterprise Virtual Array rack is the same for all variants. The site AC input voltage is routed to the dual PDU assembly mounted in the rack lower rear. Each PDU distributes AC to a maximum of four PDMs mounted on the left and right vertical rails (see Figure 20 (page 24)). • PDMs A1 through A4 connect to receptacles A through D on PDU A.
Moving and stabilizing a rack WARNING! The physical size and weight of the rack requires a minimum of two people to move. If one person tries to move the rack, injury may occur. To ensure stability of the rack, always push on the lower half of the rack. Be especially careful when moving the rack over any bump (e.g., door sills, ramp edges, carpet edges, or elevator openings). When the rack is moved over a bump, there is a potential for it to tip over.
Figure 22 Raising a leveler foot
1. Hex nut
2. Leveler foot

2. Carefully move the rack to the installation area and position it to provide the necessary service areas (see Figure 21 (page 25)).
3. To stabilize the rack when it is in the final installation location:
1. Use a wrench to lower the foot by turning the leveler foot hex nut clockwise until the caster does not touch the floor.
2. Repeat for the other feet.
3. After lowering the feet, check the rack to ensure it is stable and level.
2 EVA4400 operation
Best practices
For useful information on managing and configuring your storage system, see the HP 4400/6400/8400 Enterprise Virtual Array configuration best practices white paper, available at:
http://h18006.www1.hp.com/storage/arraywhitepapers.html
Operating tips and information
Reserving adequate free space
To ensure efficient storage system operation, reserve some unallocated capacity, or free space, in each disk group.
The recommended setting is 16K. If this field is set to Default, you will receive the following error message:
The format operation did not complete because the cluster count is higher than expected.
Importing Windows dynamic disk volumes
If you create a snapshot, snapclone, or mirrorclone with a Windows 2003 RAID-spanned dynamic volume on the source virtual disk, and then try to import the copy to a Windows 2003 x64 (64-bit) system, it will import with Dynamic Foreign status.
1. Close the HP MPIO DSM Manager GUI.
2. Close Disk Management.
3. Stop and restart the Virtual Disk service.
4. Open Disk Management, and then rescan (or run diskpart rescan; see the sketch at the end of this section) to enumerate the LUNs.
If these steps are not successful, reboot the server.
Host port connection limit on B-series 3200 and 3800 switches
A maximum of three EVA4400 host ports are supported on a single B-series 3200 or 3800 switch running version 3.2.x. HP recommends that you connect only one storage host port on a B-series switch.
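For step 4 of the LUN enumeration procedure above, the rescan can also be run from a command prompt. A minimal sketch, assuming the diskpart utility included with Windows Server; the output lines vary by system:

    C:\> diskpart
    DISKPART> rescan
    DISKPART> list disk
    DISKPART> exit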
Failback preference setting for HSV controllers
Table 11 (page 30) describes the failback preference mode for the controllers.
Table 11 Failback preference settings
No Preference:
• At initial presentation: The units are alternately brought online to Controller 1 or to Controller 2.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there.
Table 12 Failback settings by operating system
HP-UX: Default behavior is Host follows the unit¹. Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback.
IBM AIX: Default behavior is Host follows the unit¹. Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback.
Linux: Default behavior is Host follows the unit¹. Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback.
OpenVMS: Default behavior is Host follows the unit. Supported settings: No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended).
Implicit LUN transition Implicit LUN transition automatically transfers management of a virtual disk to the array controller that receives the most read requests for that virtual disk. This improves performance by reducing the overhead incurred when servicing read I/Os on the non-managing controller. Implicit LUN transition is enabled in XCS. When creating a virtual disk, one controller is selected to manage the virtual disk.
4. Under System Shutdown, click Power Down. If you want to delay the initiation of the shutdown, enter the number of minutes in the Shutdown delay field.
The controllers complete an orderly shutdown and then power off. The disk enclosures then power off. Wait for the shutdown to complete.
Shutting down the storage system from the array controller
1. Push and hold the enclosure power button on the rear of the EVA4400 (see callout 9 in Figure 11 (page 15) or Figure 12 (page 15)).
2. Wait 4 seconds.
that the disk enclosures are powered up first; otherwise, the controller boot up process may be interrupted. • After setting this HP P6000 Control Panel feature, if you have to shut down the array, perform the following steps: 1. Use HP P6000 Command View to shut down the controllers and disk enclosures. 2. Turn off power from the rack power distribution unit (PDU). 3. Turn on power from the rack PDU. After startup of the management module, the controllers will automatically start.
IMPORTANT: At initial setup, you cannot browse to the HP P6000 Control Panel until you perform this step. 4. Do one of the following: a. Temporarily connect a LAN cable from a private network to the management module. b. Temporarily connect a laptop computer to the management module using a LAN patch cable. 5. Browse to https://192.168.0.1:2373 or https://[fd50:f2eb:a8a::7]:2373/ and log in as an HP EVA administrator.
you are running a version earlier than HP Command View EVA 9.3 on the management module, the amber LED will flash momentarily when the reset is completed. 2. Browse to https://192.168.0.1:2373 and log in as an HP EVA administrator. HP recommends that you either change or delete the default IPv4 or IPv6 addresses to avoid duplicate address detection issues on your network. The default user name is admin. No password is required. The HP P6000 control panel GUI appears.
1. Connect to the management module using one of the methods described in “Connecting through a public network” (page 34) or “Connecting through a private network” (page 35).
2. Log into the HP P6000 Control Panel as an administrator. The default username is admin and the password field is blank. For security reasons, change the password after you log in.
3. Select Administrator Options > Configure controller host ports. The HP P6000 Control Panel screen appears.
NOTE: For more information on using the HP Storage System Scripting Utility, see the HP Storage System Scripting Utility Reference. See “Documents” (page 106).
1. Double-click the SSSU desktop icon to run the application.
2. When prompted, enter Manager (management server name or IP address), User name, and Password.
3. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
4. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
Example 1 Saving configuration data on a Windows host
1. Double-click on the SSSU desktop icon to run the application.
2. When prompted, enter Manager (management server name or IP address), User name, and Password.
3. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
4. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
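To round out the example, the selected configuration is then captured to a script file. A minimal sketch of a full session, assuming SSSU's CAPTURE CONFIGURATION command; the system name, prompts, and output path are placeholders and may differ by SSSU version:

    NoSystemSelected> LS SYSTEM
    NoSystemSelected> SELECT SYSTEM EVA4400_A
    EVA4400_A> CAPTURE CONFIGURATION c:\config\eva4400_a.txt
    EVA4400_A> EXIT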
dust covers or dust caps provided by the manufacturer. These covers are removed during installation, and should be installed whenever the transceivers or cables are disconnected. The transceiver dust caps protect the transceivers from contamination. Do not discard the dust covers. CAUTION: To avoid damage to the connectors, always install the dust covers or dust caps whenever a transceiver or a fiber cable is disconnected.
3 Configuring application servers
Overview
This chapter provides general connectivity information for all the supported operating systems. Where applicable, an OS-specific section is included to provide more information.
NOTE: You can use HP P6000 SmartStart to configure Windows application servers. See the HP 4400 Enterprise Virtual Array Installation Guide or the HP P6000 SmartStart documentation for more information.
Testing connections to the EVA
After installing the FCAs, you can create and test connections between the host server and the EVA. For all operating systems, you must:
• Add hosts
• Create and present virtual disks
• Verify virtual disks from the hosts
The following sections provide information that applies to all operating systems. For OS-specific details, see the applicable operating system section.
Adding hosts
To add hosts using HP P6000 Command View:
1.
1. From HP P6000 Command View, create a virtual disk on the EVA4400.
2. Specify values for the following parameters:
• Virtual disk name
• Vraid level
• Size
3. Present the virtual disk to the host you added.
4. If applicable (OpenVMS), select a LUN number if you chose a specific LUN on the Virtual Disk Properties window.
Verifying virtual disk access from the host
To verify that the host can access the newly presented virtual disks, restart the host or scan the bus.
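The create-and-present sequence above can also be scripted with the HP Storage System Scripting Utility instead of the GUI. A minimal sketch, assuming SSSU's ADD VDISK and ADD LUN commands; the virtual disk path, size, Vraid level, LUN number, and host name are placeholders, and the exact option grammar may differ by SSSU version:

    EVA4400_A> ADD VDISK "\Virtual Disks\vd01" SIZE=10 REDUNDANCY=VRAID5
    EVA4400_A> ADD LUN 1 VDISK="\Virtual Disks\vd01" HOST="\Hosts\host1"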
The following is a sample output from an ioscan command:

# ioscan -fnCdisk
Class    I  H/W Path         Driver  S/W State  H/W Type   Description
========================================================================
ba       3  0/6              lba     CLAIMED    BUS_NEXUS  Local PCI Bus Adapter (782)
fc       2  0/6/0/0          td      CLAIMED    INTERFACE  HP Tachyon XL2 FC Mass Stor Adap
            /dev/td2
fcp      0  0/6/0/0.39       fcp     CLAIMED    INTERFACE  FCP Domain
ext_bus  4  0/6/00.39.13.0.
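When ioscan shows the new virtual disks, they can be placed under LVM control. A minimal sketch, assuming a device file of c4t0d1 and a volume group named vg01; substitute the device files ioscan reports on your host:

    # pvcreate /dev/rdsk/c4t0d1            (initialize the disk for LVM)
    # mkdir /dev/vg01
    # mknod /dev/vg01/group c 64 0x010000  (the minor number must be unique per volume group)
    # vgcreate /dev/vg01 /dev/dsk/c4t0d1   (create the volume group on the virtual disk)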
IBM AIX Accessing IBM AIX utilities You can access IBM AIX utilities such as the Object Data Manager (ODM), on the following website: http://www.hp.com/support/downloads In the Search products box, enter MPIO, and then click AIX MPIO PCMA for HP Arrays. Select IBM AIX, and then select your software storage product.
Linux
HBA drivers
For most configurations and the latest versions of Linux distributions, native HBA drivers are the supported drivers. Native driver means the driver that is included with the OS distribution.
NOTE: The term inbox driver is also sometimes used and means the same as native driver.
However, some configurations may require an out-of-box driver, which typically requires downloading and installing a driver package on the host.
It is important that you set the Console LUN ID to a number other than zero (0). If the Console LUN ID is not set or is set to zero (0), the OpenVMS host will not recognize the controller pair. The Console LUN ID for a controller pair must be unique within the SAN. Table 15 (page 47) shows an example of the Console LUN ID. You can set the OS unit ID on the Virtual Disk Properties window. The default setting is 0, which disables the ID field.
Shell> drvcfg -s 22 25
7. From the Fibre Channel Driver Configuration Utility list, select item 8 (Info) to find the WWN for that particular port. Output similar to the following appears:

Adapter Path: Acpi(PNP0002,0300)/Pci(01|01)
Adapter WWPN: 50060B00003B478A
Adapter WWNN: 50060B00003B478B
Adapter S/N:  3B478A

Scanning the bus
Enter the following command to scan the bus for the OpenVMS virtual disk:
$ MC SYSMAN IO AUTO/LOG
A listing of LUNs detected by the scan process is displayed.
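To list the devices the scan created, a minimal sketch assuming the standard DCL SHOW DEVICE command; the DGA device name shown depends on the OS unit ID you assigned:

    $ SHOW DEVICE DG
    Device       Device    Error
    Name         Status    Count
    $1$DGA1:     Online        0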
If you are unable to access the virtual disk, do the following:
• Check the switch zoning database.
• Use HP P6000 Command View to verify the host presentations.
• Check the SRM console firmware on AlphaServers.
• Ensure that the correct host is selected for this virtual disk and that a unique OS unit ID is used in HP P6000 Command View.
Configuring virtual disks from the OpenVMS host
To set up disk resources under OpenVMS, initialize and mount the virtual disk resource as follows:
1.
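A minimal sketch of the usual initialize-and-mount DCL sequence, assuming a device of $1$DGA1: and a volume label of DATA; both are placeholders for your own device and label:

    $ INITIALIZE $1$DGA1: DATA
    $ MOUNT/SYSTEM $1$DGA1: DATA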
Configuring FCAs with the Oracle SAN driver stack Oracle-branded FCAs are supported only with the Oracle SAN driver stack. The Oracle SAN driver stack is also compatible with current Emulex FCAs and QLogic FCAs. Support information is available on the Oracle website: http://www.oracle.com/technetwork/server-storage/solaris/overview/index-136292.
2. Edit the following parameters in the /kernel/drv/lpfc.conf driver configuration file to set up the FCAs for a SAN infrastructure: topology=2; scan-down=0; nodev-tmo=60; linkdown-tmo=60; 3. If using a single FCA and no multipathing, edit the following parameter to reduce the risk of data loss in case of a controller reboot: nodev-tmo=120; 4.
Configuring QLogic FCAs with the qla2300 driver See the latest Enterprise Virtual Array release notes or contact your HP representative to determine which QLogic FCAs and which driver version HP supports with the qla2300 driver. To configure QLogic FCAs with the qla2300 driver: 1. Ensure that you have the latest supported version of the qla2300 driver (see http:// www.hp.com/storage/spock). 2. You must sign up for an HP Passport to enable access.
name="sd" class="scsi" target=30 lun=1; name="sd" class="scsi" target=31 lun=1; If LUNs are preconfigured in the/kernel/drv/sd.conf file, after changing the configuration file. use the devfsadm command to perform LUN rediscovery. 7. If the qla2300 driver is version 4.15 or later, verify that the following or a similar entry is present in the /kernel/drv/sd.
from the Symantec/Veritas support site for installation on the host. This download and installation is not required for VxVM 5.0 or later. To download and install the ASL/APM from the Symantec/Veritas support website: 1. Go to http://support.veritas.com. 2. Enter Storage Foundation for UNIX/Linux in the Product Lookup box. 3. Enter EVA in the Enter keywords or phrase box, and then click the search symbol. 4. To further narrow the search, select Solaris in the Platform box and search again. 5.
Example 4 Setting the I/O policy # vxdmpadm getattr arrayname EVA4400 iopolicy ENCLR_NAME DEFAULT CURRENT ============================================ EVA44000 Round-Robin Round-Robin # vxdmpadm setattr arrayname EVA4400 iopolicy=adaptive # vxdmpadm getattr arrayname EVA4400 iopolicy ENCLR_NAME DEFAULT CURRENT ============================================ EVA44000 Round-Robin Adaptive Configuring virtual disks from the host The procedure used to configure the LUN path to the array depends on the FCA driver.
50001fe1002709e9,5 • Emulex (lpfc)/QLogic (qla2300) drivers: ◦ You can retrieve the WWPN by checking the assignment in the driver configuration file (the easiest method, because you then know the assigned target) or by using HBAnyware/SANSurfer. ◦ You can retrieve the WWLUN ID by using HBAnyware/SANSurfer. You can also retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however, it is cumbersome and difficult to read.
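On hosts running the Oracle SAN driver stack, luxadm offers another way to read these identifiers. A minimal sketch, assuming the luxadm utility and a device path taken from format output; the path shown is illustrative:

    # luxadm probe
    # luxadm display /dev/rdsk/c2t50001FE1002709F8d1s2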
Example 5 Format command

# format
Searching for disks...done

c2t50001FE1002709F8d1: configured with capacity of 1008.00MB
c2t50001FE1002709F8d2: configured with capacity of 1008.00MB
c2t50001FE1002709FCd1: configured with capacity of 1008.00MB
c2t50001FE1002709FCd2: configured with capacity of 1008.00MB
c3t50001FE1002709F9d1: configured with capacity of 1008.00MB
c3t50001FE1002709F9d2: configured with capacity of 1008.00MB
c3t50001FE1002709FDd1: configured with capacity of 1008.00MB
c3t50001FE1002709FDd2: configured with capacity of 1008.00MB
7. For each new device, use the disk command to select another disk, and then repeat steps 1 through 6.
8. Repeat this labeling procedure for each new device. (Use the disk command to select another disk.)
9. When you finish labeling the disks, enter quit or press Ctrl+D to exit the format utility.
For more information, see the System Administration Guide: Devices and File Systems for your operating system, available on the Oracle website:
http://www.oracle.com/technetwork/indexes/documentation/index.html
1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads, to a virtual floppy. For instructions on creating and using a virtual floppy, see the HP Integrated Lights-Out user guide.
2. Unzip the file.
3. Follow the instructions in the readme file to load the NVRAM configuration onto each FCA.
If you have a blade server other than a ProLiant blade server:
1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads.
2. Unzip the file.
NOTE: Each LUN can be accessed through both EVA storage controllers at the same time; however, each LUN path is optimized through one controller. To optimize performance, if the LUN multipathing policy is Fixed, all servers must use a path to the same controller. Specifying DiskMaxLUN The DiskMaxLUN setting specifies the highest-numbered LUN that can be scanned by the ESX server. • For ESX 2.5.x, the default value is 8.
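The DiskMaxLUN value can also be changed from the service console. A minimal sketch, assuming the esxcfg-advcfg utility present on ESX 3.x/4.x service consoles; the value 128 is only an example:

    # esxcfg-advcfg -g /Disk/MaxLUN      (show the current highest LUN scanned)
    # esxcfg-advcfg -s 128 /Disk/MaxLUN  (raise it to 128)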
Figure 25 Verifying virtual disks from the host HP EVA P6000 Software Plug-in for VMware VAAI The vSphere Storage API for Array Integration (VAAI) is included in VMware vSphere solutions. VAAI can be used to offload certain functions from the target VMware host to the storage array. With the tasks being performed more efficiently by the array instead of the target VMware host, performance can be greatly enhanced.
2. Enable the primitives from the ESX server. Enable and disable these primitives through the following advanced settings (they can also be set from the service console; see the sketch after the verification steps below):
• DataMover.HardwareAcceleratedMove (full copy)
• DataMover.HardwareAcceleratedInit (block zeroing)
• VMFS3.HardwareAcceleratedLocking (hardware-assisted locking)
For more information about the vSphere Storage API for Array Integration (VAAI), see the VMware documentation.
3. Install the HP EVA VAAI Plug-in.
c. Creating VAAI claim rules.
d. Loading and executing VAAI claim rules.
5. Restarting the target VMware host.
6. Taking the target VMware host out of maintenance mode.
After installing the HP VAAI Plug-in, the operating system will execute all VAAI claim rules and scan every five minutes to check for any array volumes that may have been added to the target VMware host. If new volumes are detected, they will become VAAI enabled.
4. Verify the installation:
a. Check for new HP P6000 claim rules. Using the service console, enter:
esxcli corestorage claimrule list -c VAAI
The return display will be similar to the following:

Rule Class  Rule  Class    Type    Plugin          Matches
VAAI        5001  runtime  vendor  hp_vaaip_p6000  vendor=HP model=HSV
VAAI        5001  file     vendor  hp_vaaip_p6000  vendor=HP model=HSV

b. Check for claimed storage devices.
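The advanced settings named in step 2 of this procedure can be toggled from the service console as well as from the GUI. A minimal sketch, assuming esxcfg-advcfg on ESX/ESXi 4.1; 1 enables and 0 disables each primitive:

    # esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
    # esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
    # esxcfg-advcfg -s 1 /VMFS3/HardwareAcceleratedLocking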
1. Obtain the VAAI Plug-in software package and save to a local folder on the target VMware host: a. Go to the HP Support Downloads website at http://www.hp.com/support/downloads. b. Navigate through the display to locate and then download the HP EVA P6000 Software Plug-in for VMware VAAI to a temporary folder on the server. (Example folder location: /root/vaaip) 2. Enter maintenance mode. 3. Enter a command using the following syntax: vicfg-hostops.
Installing the VAAI Plug-in using VUM NOTE: • This installation method is supported for use with VAAI Plug-in versions 1.00 and 2.00, in ESX/ESXi 4.1 environments. • HP recommends installing the plug-in using VMware Update Manager. Installing the VAAI Plug-in using VUM consists of two steps: 1. “Importing the VAAI Plug-in to the vCenter Server” (page 66) 2. “Installing the VAAI Plug-in on each ESX/ESXi host” (page 67) Importing the VAAI Plug-in to the vCenter Server 1.
4. Create a new Baseline set for this offline plug-in:
a. Select the Baselines and Groups tab.
b. Above the left pane, click Create.
c. In the New Baseline window:
• Enter a name and a description. (Example: HP P6000 Baseline and VAAI Plug-in for HP EVA)
• Select Host Extension.
• Click Next to proceed to the Extensions window.
d. In the Extensions window:
• Select HP EVA VAAI Plug-in for VMware vSphere x.x, where x.x represents the plug-in version.
NOTE:
• In the Tasks & Events section, the following tasks should have a Completed status: Remediate entry, Install, and Check.
• If any of the above tasks has an error, click the task to view the detail events information.
Verifying VAAI status
1. From the vCenter Server, click the Home Navigation bar and then click Hosts and Clusters.
2. Select the target VMware host from the list and then click the Configuration tab.
3. Click the Storage link under Hardware.
3. Uninstall the VAAI Plug-in. Enter a command using the following syntax:
$host# esxupdate remove -b VAAI_Plug_In_Bulletin_Name --maintenancemode
4. Restart the host.
5. Exit maintenance mode.
4 Replacing array components
Customer self repair
Table 18 (page 71) and Table 19 (page 72) identify hardware components that are customer replaceable. Using WEBES, ISEE, or other diagnostic tools, a support specialist will work with you to diagnose and assess whether a replacement component is required to address a system problem. The specialist will also help you determine whether you can perform the replacement.
Figure 26 Example of typical product label 1. Spare part number Replaceable parts This product contains the replaceable parts listed in “Controller enclosure replacement parts” (page 71) and “M6412–A disk enclosure replaceable parts” (page 72). Parts that are available for customer self repair (CSR) are indicated as follows: ✓ Mandatory CSR where geography permits. Order the part directly from HP and repair the product yourself. On-site or return-to-depot repair is not provided under warranty.
1 Requires XCS 09522000 or later.
Replacing the failed component
CAUTION: Components can be damaged by electrostatic discharge (ESD). Use proper anti-static protection.
• Always transport and store CRUs in an ESD protective enclosure.
• Do not remove the CRU from the ESD protective enclosure until you are ready to install it.
• Always use ESD precautions, such as a wrist strap, heel straps on conductive flooring, and an ESD protective smock when handling ESD sensitive equipment.
• HP Disk Enclosure I/O Module Replacement Instructions • HP Disk Enclosure Midplane Replacement Instructions • HP Disk Enclosure Power Supply Replacement Instructions • HP Fibre Channel Disk Drive Replacement Instructions • HP Power UID Replacement Instructions Replacing array components
5 Single path implementation
This chapter provides guidance for connecting servers with a single path HBA to the EVA storage system with no multipath software installed. A single path HBA is defined as:
• A single HBA port to a switch with no multipathing software installed
• A single HBA port to a switch with multipathing software installed, where the HBA LUNs are not shared by any other HBA in the server or in the SAN
Failover action is different depending on which single path method is employed.
Because of the risks of using servers with a single path HBA, HP recommends the following actions: • Use servers with a single path HBA that are not mission-critical or highly available. • Perform frequent backups of the single path server and its storage. Supported configurations All examples detail a small homogeneous SAN for ease of explanation. Mixing of dual and single path HBA systems in a heterogeneous SAN is supported.
Figure 27 Single path HBA server without OpenVMS 1. Network interconnection 6. SAN switch 1 2. Single HBA server (Host 1) 7. SAN switch 2 3. Dual HBA server (Host 2) 8. Fabric zone 4. Management server 9. Controller A 5. Multiple single HBA paths 10.
Figure 28 Single path HBA server with OpenVMS 1. Network interconnection 6. SAN switch 1 2. Single HBA server (Host 1) 7. SAN switch 2 3. Dual HBA server (Host 2) 8. Fabric zone 4. Management server 9. Controller A 5. Multiple single HBA paths 10. Controller B HP-UX configuration Requirements 78 • Proper switch zoning must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs.
HBA configuration
• Host 1 is a single path HBA host.
• Host 2 is a multiple HBA host with multipathing software. See Figure 29 (page 79).
Risks
• Disabled jobs hang and cannot umount disks.
• Path or controller failure may result in loss of data accessibility and loss of host data that has not been written to storage.
NOTE: For additional risks, see “HP-UX” (page 92).
Limitations
• HP P6000 Continuous Access is not supported with single-path configurations.
Windows Server 2003 (32-bit), Windows Server 2008 (32-bit), Windows Server 2012 (32-bit) configurations Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs.
Figure 30 Windows Server 2008 (32-bit), Windows Server 2003 (32-bit), and Windows 2000 configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8. Controller B Windows Server 2008 (64-bit) and Windows Server 2003 (64-bit) configurations Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
Risks • Single path failure will result in loss of connection with the storage system. • Single path failure may cause the server to reboot. • Controller shutdown puts controller in a failed state that results in loss of data accessibility and loss of host data that has not been written to storage. NOTE: For additional risks, see “Windows Servers” (page 93). Limitations • HP P6000 Continuous Access is not supported with single path configurations.
• Single path HBA server cannot share LUNs with any other HBAs. • In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller. In the case of snapclones, after the cloning process has completed and the clone becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk.
Figure 32 Oracle Solaris configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8. Controller B OpenVMS configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them.
Risks • For nonclustered nodes with a single path HBA, a path failure from the HBA to the SAN switch will result in a loss of connection with storage devices. NOTE: For additional risks, see “OpenVMS” (page 94). Limitations • HP P6000 Continuous Access is not supported with single path configurations. Figure 33 OpenVMS configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8.
HBA configuration • Host 1 is a single path HBA. • Host 2 is a dual HBA host with multipathing software. See Figure 34 (page 86). Risks • Single path failure may result in data loss or disk corruption. Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is not supported. Figure 34 Xen configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6.
Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • In the use of snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller.
Figure 35 Linux (32-bit) configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8. Controller B Linux (Itanium) configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them.
Risks • Single path failure may result in data loss or disk corruption. NOTE: For additional risks, see “Linux” (page 94). Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is supported on single path HBA servers. Figure 36 Linux (Itanium) configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7.
becomes an ordinary virtual disk, you may present that virtual disk as you would any other ordinary virtual disk. HBA configuration • Host 1 is a single path HBA host. • Host 2 is a dual HBA host with multipathing software. See Figure 37 (page 90). Risks • Single path failure may result in loss of data accessibility and loss of host data that has not been written to storage. • Controller shutdown results in loss of data accessibility and loss of host data that has not been written to storage.
VMware configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them. • Single path HBA server can be in the same fabric as servers with multiple HBAs.
Figure 38 VMware configuration 1. Network interconnection 5. SAN switch 1 2. Single HBA server (Host 1) 6. SAN switch 2 3. Dual HBA server (Host 2) 7. Controller A 4. Management server 8. Controller B Mac OS configuration For information about Mac OS connectivity, see Mac OS X Fibre Channel connectivity to the HP Enterprise Virtual Array Storage System Configuration Guide (to download, see “Documents” (page 106)).
Fault stimulus Failure effect Server path failure Short term: Data transfer stops. Possible I/O errors. Long term: Job hangs, cannot umount disk, fsck failed, disk corrupted, need mkfs disk. Storage path failure Short term: Data transfer stops. Possible I/O errors. Long term: Job hangs, replace cable, I/O continues. Without cable replacement job must be aborted; disk seems error free.
OpenVMS Fault stimulus Failure effect Server failure (host power-cycled) Nonclustered-Processes fail. Clustered—Other nodes running processes that used devices served from the single-path HBA failed over access to a different served path. When the single-path node crashes, only the processes executing on that node fail. In either case, no data is lost or corrupted. Switch failure (SAN switch disabled) I/O is suspended or process is terminated across this HBA until switch is back online.
Fault stimulus Failure effect Server path failure Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered, fsck should be run on any failed drives before remounting. Storage path failure Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered, fsck should be run on any failed drives before remounting.
Fault stimulus Failure effect Server path failure Short term: I/O suspended, possible data loss. Long term: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered, fsck should be run on any failed drives before remounting. Storage path failure Short term: I/O suspended, possible data loss. Long term: I/O halts with I/O errors, data loss.
6 Error messages
This list of error messages is in order by status code value, 0 to 100.
Table 20 Error messages
0 Successful Status: The SCMI command completed successfully. No corrective action required.
1 Object Already Exists: The object or relationship already exists. Delete the associated object and try the operation again.
12 Invalid Parameter handle: The supplied handle is invalid. This can indicate a user error, program error, or a storage cell in an uninitialized state.
Table 20 Error messages (continued) Status code value Meaning How to correct Derived unit create: Case 2: The supplied virtual disk Case 4: Resolve the delay before performing handle is already an attribute of another derived unit. the operation.
Table 20 Error messages (continued) Status code value Meaning How to correct Case 4: GROUP get name: The operation cannot be performed because the Continuous Access group does not exist. This can indicate a user or program error. 28 Timeout A timeout has occurred in processing the request. Verify the hardware connections and that communication to the device is successful. 29 Unknown ID The supplied storage cell identifier is invalid. This can Report the error to product support.
46 Invalid DR mode: The operation cannot be performed because the Continuous Access group is not in the required mode. Configure the Continuous Access group correctly and retry the request.
47 The target DR member is in full copy, operation rejected: The operation cannot be performed because at least one of the virtual disk members is in a copying state. Wait for the copying state to complete and retry the request.
Case 3: If this operation is still desired, delete one or more of the port WWNs and retry the operation.
60 Max size exceeded: Case 1: The maximum number of items already exist on the destination storage cell. Case 2: The size specified exceeds the maximum size allowed. Case 3: The presented user space exceeds the maximum size allowed. Case 4: The presented user space exceeds the maximum size allowed.
71 Bad image segment: The firmware image download process has failed because of a corrupted image segment. Verify that the firmware image is not corrupted and retry the firmware download process.
72 Image already loaded: The firmware version already exists on the device. No action required.
73 Image Write Error: The firmware image download process has failed because of a failed write operation.
82 Shutdown In Progress: The controller is currently shutting down. No action required.
83 Controller API Not Ready, Try Again Later: The device is not ready to process the request. Retry the request at a later time.
84 Is Snapshot: This is a snapshot virtual disk and cannot be a member of a Continuous Access group. No action required.
98 Error on remote storage system: While the request was being performed, an error occurred on the remote storage system. Resolve the condition and retry the request.
99: The request failed because the operation cannot be performed on a Continuous Access connection that is up. The DR operation can only be completed when the source-destination connection is down.
7 Support and other resources
Contacting HP
HP technical support
For worldwide technical support information, see the HP support website:
http://www.hp.
• HP Partner Locator: http://www.hp.com/service_locator • HP Software Downloads: http://www.hp.com/support/downloads • HP Software Depot: http://www.software.hp.com • HP Single Point of Connectivity Knowledge (SPOCK): http://www.hp.com/storage/spock • HP SAN manuals: http://www.hp.com/go/sdgmanuals • HP Support Center http://h20566.www2.hp.
Typographic conventions
Table 21 Document conventions
• Blue text (example: Table 21 (page 108)): Cross-reference links
• Blue, underlined text (example: http://www.hp.
Rack stability WARNING! To reduce the risk of personal injury or damage to equipment: • Extend leveling jacks to the floor. • Ensure that the full weight of the rack rests on the leveling jacks. • Install stabilizing feet on the rack. • In multiple-rack installations, secure racks together. • Extend only one rack component at a time. Racks may become unstable if more than one component is extended.
A Regulatory compliance notices Regulatory compliance identification numbers For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number.
off and on, the user is encouraged to try to correct the interference by one or more of the following measures: • Reorient or relocate the receiving antenna. • Increase the separation between the equipment and receiver. • Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected. • Consult the dealer or an experienced radio or television technician for help.
This compliance is indicated by the following conformity marking placed on the product: This marking is valid for non-Telecom products and EU harmonized Telecom products (e.g., Bluetooth). Certificates can be obtained from http://www.hp.com/go/certificates.
Class B equipment Taiwanese notices BSMI Class A notice Taiwan battery recycle statement Turkish recycling notice Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur Vietnamese Information Technology and Communications compliance marking Taiwanese notices 113
Laser compliance notices English laser notice This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation. WARNING! Use of controls or adjustments or performance of procedures other than those specified herein or in the laser product's installation guide may result in hazardous radiation exposure.
German laser notice Italian laser notice Japanese laser notice Laser compliance notices 115
Spanish laser notice Recycling notices English recycling notice Disposal of waste equipment by users in private household in the European Union This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment.
Dutch recycling notice Inzameling van afgedankte apparatuur van particuliere huishoudens in de Europese Unie Dit symbool betekent dat het product niet mag worden gedeponeerd bij het overige huishoudelijke afval. Bescherm de gezondheid en het milieu door afgedankte apparatuur in te leveren bij een hiervoor bestemd inzamelpunt voor recycling van afgedankte elektrische en elektronische apparatuur. Neem voor meer informatie contact op met uw gemeentereinigingsdienst.
Hungarian recycling notice A hulladék anyagok megsemmisítése az Európai Unió háztartásaiban Ez a szimbólum azt jelzi, hogy a készüléket nem szabad a háztartási hulladékkal együtt kidobni. Ehelyett a leselejtezett berendezéseknek az elektromos vagy elektronikus hulladék átvételére kijelölt helyen történő beszolgáltatásával megóvja az emberi egészséget és a környezetet.További információt a helyi köztisztasági vállalattól kaphat.
Portuguese recycling notice Descarte de equipamentos usados por utilizadores domésticos na União Europeia Este símbolo indica que não deve descartar o seu produto juntamente com os outros lixos domiciliares. Ao invés disso, deve proteger a saúde humana e o meio ambiente levando o seu equipamento para descarte em um ponto de recolha destinado à reciclagem de resíduos de equipamentos eléctricos e electrónicos. Para obter mais informações, contacte o seu serviço de tratamento de resíduos domésticos.
Battery replacement notices Dutch battery notice French battery notice 120 Regulatory compliance notices
German battery notice Italian battery notice Battery replacement notices 121
Japanese battery notice Spanish battery notice 122 Regulatory compliance notices
B Non-standard rack specifications
This appendix provides information on the requirements when installing the EVA4400 in a non-standard rack. All the requirements must be met to ensure proper operation of the storage system.
Internal component envelope
EVA component mounting brackets require space to be mounted behind the vertical mounting rails. The space needed for mounting the brackets includes the width of the mounting rails and room for any mounting hardware, such as screws and clip nuts.
Σ(d_component × W_component) = d_system CG × W_total
where d_component is the distance of interest and W is weight. The distance of a component is its CG's distance from the inside base of the cabinet. For example, if a loaded disk enclosure is to be installed into the cabinet with its bottom at 10U, the distance for the enclosure would be (10 × 1.75) + 2.7 = 20.2 inches.
Airflow and recirculation
Component airflow requirements
Component airflow must be directed from the front of the cabinet to the rear.
Table 23 UPS operating time limits (continued)
Minutes of operation, by load (percent): with standby battery / with 1 ERM / with 2 ERMs
80: 6 / 32 / 63
50: 13 / 57 / 161
20: 34 / 146 / 290
R3000
100: 5 / 20
80: 6.5 / 30
50: 12 / 45
20: 40 / 120
R5500
100: 7 / 24 / 46
80: 9 / 31 / 60
50: 19 / 61 / 106
20: 59 / 169 / 303
R12000
100: 5 / 11 / 18
80: 7 / 15 / 24
50: 14 / 28 / 41
20: 43 / 69 / 101
Shock and vibration specifications
Table 24 (page 125) lists the product operating shock and vibration specifications.
Glossary This glossary defines terms used in this guide or related to this product and is not a comprehensive glossary of computer terms. Symbols and numbers 3U A unit of measurement representing three “U” spaces. “U” spacing is used to designate panel or enclosure heights. Three “U” spaces is equivalent to 133 mm (5.25 inches). See also rack-mounting unit. µm A symbol for micrometer; one millionth of a meter. For example, 50 µm is equivalent to 0.000050 m.
B backplane An electronic printed circuit board that distributes data, control, power, and other signals among components in an enclosure. bad block A data block that contains a physical defect. bad block replacement A replacement routine that substitutes defect-free disk blocks for those found to have defects. This process takes place in the controller and is transparent to the host.
console LUN ID The ID that can be assigned when a host operating system requires a unique ID. The console LUN ID is assigned by the user, usually when the storage system is initialized. controller A hardware/firmware device that manages communications between host systems and other devices. Controllers typically differ by the type of interface to the host and provide functions beyond those the devices support.
disk migration state A physical disk drive operating state. A physical disk drive can be in a stable or migration state: • Stable—The state in which the physical disk drive has no failure nor is a failure predicted. • Migration—The state in which the disk drive is failing, or failure is predicted to be imminent. Data is then moved off the disk onto other disk drives in the same disk group.
as described in the SES SCSI-3 Enclosure Services Command Set (SES), Rev 8b, American National Standard for Information Services. Enclosure Services Interface See ESI. Enclosure Services Processor See ESP. environmental monitoring unit See EMU. error code The portion of an EMU condition report that defines a problem. ESD Electrostatic Discharge. The emission of a potentially harmful static electric voltage as a result of improper grounding. ESI Enclosure Services Interface.
fiber optics The technology where light is transmitted through glass or plastic (optical) threads (fibers) for data communication or signaling purposes. Fibre Channel A data transfer architecture designed for mass storage devices and other peripheral devices that require high bandwidth. Fibre Channel adapter See FCA. Fibre Channel drive enclosure An enclosure that provides 12-port central interconnect for Fibre Channel arbitrated loops following the ANSI Fibre Channel disk enclosure standard.
INFORMATION condition A drive enclosure EMU condition that may require action. This condition is for information purposes only and does not indicate the failure of an element. initialization A configuration step that binds the controllers together and establishes preliminary data structures on the array. Initialization also sets up the first disk group, called the default disk group, and makes the array ready for use. input/output module See I/O module. intake temperature See ambient temperature.
mirrored caching A process in which half of each controller’s write cache mirrors the companion controller’s write cache. The total memory available for cached write data is reduced by half, but the level of protection is greater. mirroring The act of creating an exact copy or image of data. MTBF Mean time between failures. The average time from start of use to first failure in a large population of identical systems, components, or devices.
PDM Power distribution module. A thermal circuit breaker-equipped power strip that distributes power from a PDU to HP Enterprise Storage System elements. PDU Power distribution unit. The rack device that distributes conditioned AC or DC power within a rack. petabyte A unit of storage capacity that is the equivalent of 250, 1,125,899,906,842,624 bytes or 1,024 terabytes.
redundancy 1. 2. Element Redundancy—The degree to which logical or physical elements are protected by having another element that can take over in case of failure. For example, each loop of a device-side loop pair normally works independently but can take over for the other in case of failure. Data Redundancy—The level to which user data is protected. Redundancy is directly proportional to cost in terms of storage usage; the greater the level of data protection, the more storage space is required.
topology An interconnection scheme that allows multiple Fibre Channel ports to communicate. Point-to-point, arbitrated loop, and switched fabric are all Fibre Channel topologies. transceiver The device that converts electrical signals to optical signals at the point where the fiber cables connect to the Fibre Channel elements such as hubs, controllers, or adapters. U UID Unit identification. uninitialized system A state in which the storage system is not ready for use.
Index A AC power distributing, 20 accessing multipathing, 41 Secure Path, 41 adding hosts, 47 adding hosts, 42 B bad image header, 102 bad image segment, 103 bad image size, 103 battery replacement notices, 120 bays locating, 10 numbering, 10 bidirectional operation, 11 C cabling controller, 19 Cache batteries failed or missing, 101 Canadian notice, 111 configuration physical layout, 9 configuring EVA, 58 configuring the ESX server, 58 connection suspended, 102 connectivity verifying, 60 connectors power
HSV Controllers defined, 9 I I/O modules bidirectional, 11 image already loaded, 103 image incompatible with configuration, 102 image too large, 102 image write error, 103 implicit LUN transition, 32 incompatible attribute, 101 invalid parameter id, 98 quorum configuration, 98 target handle, 98 target id, 98 time, 98 invalid cursor, 100 invalid state, 100 invalid status, 102 invalid target, 100 iopolicy setting, 54 P parts replaceable, 71 password mismatch, 102 PDUs, 20 physical configuration, 9 power con
T Taiwanese notices, 113 technical support HP, 106 service locator website, 106 text symbols, 108 time not set, 100 timeout, 100 transport error, 100 U universal disk drives, 13 unknown id, 100 unknown parameter handle, 100 unrecoverable media error, 100 UPS, selecting, 124 V Vdisk DR group member, 101 Vdisk DR log unit, 101 Vdisk not presented, 101 verifying virtual disks, 55 Veritas Volume Manager, 53 version not supported, 100 vgcreate, 44 virtual disks configuring, 43, 49, 55 presenting, 42 verifying,