HP StorageWorks Enterprise Virtual Array Cluster Administrator Guide This guide provides information for a storage administrator on how to manage the HP StorageWorks EVA Cluster.
Legal and notice information © Copyright 2010 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 HP EVA Cluster overview
    Hardware
        Enterprise Virtual Arrays
        Data Path Modules
    DPM-VSM zoning
    VSM-storage zoning
    VSM-VSM zoning
4 HP StorageWorks Management Infrastructure
    Quick tours
    Web service IP address (IPv4/IPv6)
    Discovery configuration settings
        Discovery interval
        Discovery URI
    Management port
    Pool monitoring
        Global mechanisms
        Individual mechanisms
        Percentage on an individual virtual disk
    Setup volume configuration
    Building basic storage pools
        Building storage pools using stripe sets
        Storage pool size considerations
    Importing the VMware datastore
    Configuration
        Fibre Channel zoning
        Storage system
    Japanese VCCI marking
    Japanese power cord statement
    Korean notices
        Class A equipment
    Romanian recycling notice
    Slovak recycling notice
    Spanish recycling notice
    Swedish recycling notice

Figures
    1 Racked EVA Cluster
    2 Install/Restore License Key screen of the Launch AutoPass window
    3 License dialog box
    4 Five zone types

Tables
    1 VSM license types
    2 License capacities
    3 Example naming convention for zone types
    4 Example naming convention for device port types
    5 Troubleshooting Perfmon
1 HP EVA Cluster overview The Enterprise Virtual Array (EVA) Cluster is rack installed at the factory and set up on site by HP Services. Two EVA6400s or EVA8400s are bundled with SAN Virtualization Services Platform (SVSP) components, software, and licenses, to allow quick deployment into a storage area network. Figure 1 shows a configuration of EVAs with the maximum number of drive shelves available with an optional expansion cabinet, as well as SVSP devices and Fibre Channel switches.
1. HP Command View server
2. VSM server
3. Data Path Module
4. Fibre Channel switch
5. Ethernet switch
6. EVA
7. Expansion cabinet (optional)

Figure 1 Racked EVA Cluster

Hardware
This section describes the HP EVA Cluster hardware components.
Enterprise Virtual Arrays
The two rack-mounted EVA6400/8400s consist of the following:
• HSV controllers—These contain power supplies, cache batteries, fans, and an operator control panel (OCP).
• Fibre Channel disk enclosures—Contains up to 12 disk drives, power supplies, fans, midplane, and I/O modules. • Fibre Channel Arbitrated Loop Cables—Provides connectivity to the HSV controllers and the Fibre Channel disk enclosures. For information on the EVAs, go to http://www.hp.com/support/manuals. In the Storage section, click Disk Storage Systems, and in the EVA Disk Arrays section, click HP StorageWorks 6400/8400 Enterprise Virtual Array.
• Data migration • Local replication (point-in-time copies, snapshots, and snapclones) • Asynchronous remote replication A command line interface (CLI) can be used with the VSM application to write scripts for automated processes. For information on using the CLI, see the HP StorageWorks SAN Virtualization Services Platform Manager Command Line Interface User Guide. This and other SVSP documentation can be obtained by going to http://www.hp.com/support/manuals.
on a certain amount of capacity. The basic unit of software license capacity is 1 TB. Most operations use license capacity. For example, configuring a back-end LU as a member of a storage pool deducts the BELU's disk capacity from your volume management licensed capacity.
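The deduction described above can be illustrated with simple whole-TB bookkeeping. The values below are illustrative only, not taken from a real license:

```shell
# Sketch of licensed-capacity bookkeeping, in whole TB.
# Values are illustrative, not from a real license.
licensed_tb=10
used_tb=0

# Configuring a 2 TB back-end LU as a pool member deducts its
# capacity from the volume management licensed capacity.
used_tb=$((used_tb + 2))
available_tb=$((licensed_tb - used_tb))
echo "$available_tb TB available"   # prints "8 TB available"
```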
3. Click Install/Restore License Key. Figure 2 Install/Restore License Key screen of the Launch AutoPass window . 4. In the File path field, enter the path name of the license key file. Alternatively, click Browse to search for the file. 5. Click View file contents. The properties of the license key file appear in the License Contents table. 6. In the Select column of the License Contents table, select the check box of the license you want to install. 7. Click Install.
The following table describes the capacities listed in the License dialog box. Each capacity features its total amount, the amount used, and the amount available. Table 2 License capacities Property Description Basic capacity The amount of licensed capacity allotted for basic operations (for example, the maximum size of all pools). See Table 1 on page 17. BC capacity The amount of licensed capacity allotted for local replication (for example, the size of all parents).
2 Adding servers to the HP EVA Cluster The EVA Cluster begins with a starter kit of cluster components, virtualization software (Volume Manager, Business Copy, Continuous Access and Thin Provisioning), two EVAs (either EVA6400s or EVA8400s), a pair of Fibre Channel switches, an Ethernet switch, and management servers. The EVA Cluster is designed to be factory configured and tested so that the EVA Cluster can be easily installed into an existing SAN.
3. Uninstall SVSP MPIO from the system. For example:
# installp -u devices.fcp.disk.HP.svsp.mpio.rte
Verify that it has been uninstalled properly. For example:
# lslpp -l devices.fcp.disk.HP.svsp.mpio.rte
The above command should return no output.
4. Install the TL/ML (follow the instructions provided by IBM).
5. Reboot the server.
6. Install the HP SVSP MPIO kit. For installation instructions, refer to the AIX SVSP MPIO installation instructions.
7.
6. Uninstall the SVSP MPIO from the system. For example:
# installp -u devices.fcp.disk.HP.svsp.mpio.rte
Verify that it has been uninstalled properly. For example:
# lslpp -l devices.fcp.disk.HP.svsp.mpio.rte
The above command should return no output.
NOTE: Do not reboot the server.
7. Install the HP SVSP MPIO kit. For installation instructions, refer to the AIX SVSP MPIO installation instructions.
8. Reboot the server.
9. Bring in all the paths and run the cfgmgr command.
10.
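The uninstall-and-verify pattern used in the steps above can be sketched as shell helpers. This is a sketch only: installp and lslpp are AIX commands, and the helper function names are illustrative, not part of the SVSP kit.

```shell
# Sketch of the SVSP MPIO uninstall/verify sequence (AIX).
# installp/lslpp are AIX commands; the package name is from this guide.
PKG="devices.fcp.disk.HP.svsp.mpio.rte"

# Succeeds (returns 0) only when lslpp reports no trace of the package.
verify_uninstalled() {
    [ -z "$(lslpp -l "$1" 2>/dev/null)" ]
}

# Uninstall, then confirm nothing is left behind.
uninstall_svsp_mpio() {
    installp -u "$1" || return 1
    verify_uninstalled "$1"
}
```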
2. Partition alignment:
• Align the partition with diskpart.exe for Windows 2000 or 2003, non SP1:
a. Download diskpart.exe from the Windows 2000 kit and place the executable file in the Windows system path.
b. Click Disk Management, and in the lower right hand side pane, note the disk number of the drive to be partitioned.
c. Open a command prompt.
d. Type diskpart -s for the disk number.
e. Answer y to both questions for yes.
f. Enter 128 for the starting offset (128 = 64K).
g. Enter the desired partition size in MB.
5. Align a partition with diskpart for Windows Server 2003.
a. Open a command prompt, type diskpart.exe, and press Enter.
b. Type list disk. Note the disk number on which you want to create a partition.
c. Type select disk (disk number).
d. Type create partition primary align=64.
e. Type assign letter (the drive letter). Or type assign mount (the path of an empty directory to mount the drive).
f. Type exit to exit diskpart.
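The interactive diskpart steps above can also be captured in a script file and run unattended with diskpart /s. A sketch follows; the disk number, partition size, and drive letter are placeholders for your environment:

```shell
# Generate a diskpart script equivalent to the interactive steps above.
# DISK_NUM, SIZE_MB, and DRIVE are placeholders for your environment.
DISK_NUM=1
SIZE_MB=10240
DRIVE=E

cat > align_partition.txt <<EOF
select disk $DISK_NUM
create partition primary align=64 size=$SIZE_MB
assign letter=$DRIVE
EOF

# On the Windows server, run:  diskpart /s align_partition.txt
```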
3. # cd /tmp/hpmpio
4. # gunzip devices.fcp.disk.HP.svsp.mpio.rte.1.0.0.1.aix.5.2.bff.gz
5. # smitty install_latest
COMMAND STATUS
Command: OK    stdout: yes    stderr: no
Before command completion, additional instructions may appear below.
[TOP] geninstall -I "a -cgNQqwX -J" -Z -d . -f File 2>&1
File: devices.fcp.disk.HP.hsv.mpio.rte 1.0.2.0
I: devices.fcp.disk.HP.hsv.mpio.rte 1.0.1.0

HP-UX multipathing
HP-UX 11iv2
NOTE: Secure Path requires a right-to-use license per server.
1. Install the failover QLogic multipath driver.
2. Before mapping any LUNs to the host, open the SANsurfer GUI and choose the first port.
3. Select the persistent binding tab, select the check box under "Bind All", choose a target ID for each of the mapped targets, and select Save.
4. Perform a similar binding on the other HBA port.
5. Map the LUNs from the VSM to this host and scan for the virtual disks at the host using the hp_rescan -a utility.
6.
1. Enable MPxIO:
a. The MPxIO driver is installed and disabled by default on Solaris 10 SPARC servers for Fibre Channel devices. To enable MPxIO, type # stmsboot -e. A reboot is required.
b. The MPxIO driver is installed and enabled by default on Solaris 10 x86-based servers. To verify, open /kernel/drv/fp.conf and check for the line mpxio-disable="no";. Ensure it is set to "no" (MPxIO enabled).
2. Disable AutoFailback:
a. Open the /kernel/drv/scsi_vhci.conf file in a text editor.
b.
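The fp.conf verification in step 1b can be scripted. This is a sketch: the sample file below is created only for illustration, and on a real Solaris server you would point the check at /kernel/drv/fp.conf.

```shell
# Sketch: verify MPxIO is enabled (Solaris 10 x86). On a real server,
# pass /kernel/drv/fp.conf instead of the sample file built here.
mpxio_enabled() {
    grep -q 'mpxio-disable="no";' "$1"
}

# Illustrative fp.conf fragment:
cat > fp.conf.sample <<'EOF'
mpxio-disable="no";
EOF

mpxio_enabled fp.conf.sample && echo "MPxIO enabled"
```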
QLogic HBAs with Windows To enable persistent binding with QLogic HBAs:
1. Launch SANsurfer. This application can be downloaded from hp.com or qlogic.com.
2. Connect to Localhost. The utility displays all QLogic HBAs recognized in the system.
3. Select the port on the HBA to be enabled for persistent binding.
4. Select the Target Persistent Binding tab.
5. Bind the WWPN to a Target ID.
6. Click Save, and enter the password on the security check popup.
4. Bind to the WWPN by clicking on a Target WWPN, and then clicking the Add Binding button. Either accept the default entries or change them as appropriate and click OK.
5. Restart the server to activate the changes.
Presenting SVSP virtual disks to servers
The HBAs of a server need to be defined so that the DPM can customize its interface to the operating system of the server that will be using the virtual disk.
2. Right-click and select New. 3. Follow the prompts. 4. Present to a previously defined UDH. 5. On the server (UDH), discover the new LUN. Defining hosts in SVSP This section describes how to attach SVSP virtual disks to servers by operating system. AIX servers Not available at the time of publication. HP-UX servers 1. Run ioscan on all HP-UX hosts. 2. Create HP-UX user-defined hosts (UDHs) from absent HBAs. 3. Right-click on HP-UX UDHs and set the host to offline. 4.
7. Follow the VMware instructions to create a DATASTORE and to make the newly discovered virtual disk visible to a guest operating system. Windows servers Use Disk Manager to discover and initialize the new devices.
3 Zoning Zoning is a critical part of the configuration process for HP SVSP since it can directly impact the capacity, stability, and performance of the overall system. Failure to implement a correct zoning configuration can lead to a nonfunctioning configuration or one that operates in a reduced state with respect to capacity, performance, and high availability. Zoning overview Any given device port on the SAN can communicate with every other device port when zoning is disabled.
• Use zoning objects, often called aliases, on the switch as zone members. Zoning objects allow you to create logical representations on the switch of physical devices and ports in the SAN. These objects can be modified or removed as the physical topology changes and are easier to manage. • Follow a logical naming convention for zoning objects and zones that is readable and can be understood by anyone with knowledge of the HP SVSP.
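A readable naming convention like the one recommended above can be generated mechanically from alias names. The Z_&lt;initiator&gt;__&lt;target&gt; pattern and the alias names below are hypothetical; substitute your site's convention (see Table 3 and Table 4):

```shell
# Sketch: compose single-initiator zone names from zoning-object aliases.
# The Z_<initiator>__<target> pattern and alias names are hypothetical;
# use your site's convention from Tables 3 and 4.
make_zone_name() {
    printf 'Z_%s__%s\n' "$1" "$2"
}

make_zone_name "HOST01_HBA1_P1" "DPM1_FE_Q1_P1"
# prints Z_HOST01_HBA1_P1__DPM1_FE_Q1_P1
```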
NOTE: Since a VSM port can act as either a target or initiator, having ports from the same VSM in a single zone can lead to unpredictable behavior and should be avoided. HP recommends all VSM-related zones have at most one port from each VSM. • Each zone contains a single initiator device but may contain multiple target devices. A target device can be represented by multiple ports but an initiator device is represented by only a single port.
Figure 4 Five zone types .
NOTE: • The zoning templates shown in this section refer to a single domain with DPM pair configurations that have two or four licensed quads on the DPM. The rules and guidelines described in this section can be applied to other configurations with multiple domains or DPM pairs with a different number of licensed quads. • All examples involving storage related zones in this section will use the HP EVA and HP Command View EVA.
Figure 7 Single VSM dual-port configuration . Figure 8 Host server dual-port configuration . DPM-host zoning A DPM-Host front-end zone is used to give a host access to the virtual disks created in HP SVSP and presented through the DPM front-end target ports. A front-end path between the DPM and the host consists of a single host port and a single DPM front-end port.
• Use one operating system in each individual DPM-Host zone. This can be done by implementing only single initiator port zones as previously discussed in the zoning guidelines. Figure 9 illustrates zoning between a single server with two dual-port HBAs and the first two quads of a DPM pair with the recommended limit of eight front-end paths.
Figure 11 Zoning between two servers with two HBAs and two quads of a DPM pair . DPM-storage zoning A DPM-Storage back-end zone is used to give the DPM access to the back-end storage used to create virtual disks managed by HP SVSP. A back-end path between the DPM and back-end storage consists of a single DPM back-end initiator port and a single port on a back-end storage device controller.
Figure 12 Zoning between 2 dual-port controllers and first quad of each DPM . Figure 13 illustrates full zoning between a dual, quad-port controller back-end storage device and the first two quads of each DPM. Figure 13 Zoning between 2 quad-port controllers and two quads of each DPM . If greater control over the available paths to each LUN is required (for example, to improve load balancing), use a combination of more restrictive zoning and LUN presentation to the DPM from the back-end storage device.
Using the active non-optimized paths for reads to a LUN results in internal proxy I/Os between storage controllers and should be avoided. The DPM has a back-end multipath driver with properties similar to a basic multipath driver. A path table is constructed with all back-end paths available for each LUN. The status of each path, and whether it is active/active optimized or passive/active non-optimized, is based on information provided by the back-end storage device.
Figure 14 Zoning between VSMs and first quad of a DPM pair . VSM-storage zoning A VSM-Storage back-end zone is used to give the VSM access to the ports of the back-end storage device to manage the storage being virtualized by SVSP, and facilitate data mover functions involving the “soft path” such as mirroring, local snapshots, and remote replication. A back-end path between the VSM and the storage device consists of a single port on the VSM and a storage device port.
VSM-VSM zoning The VSM-VSM zone allows the VSMs in an HP SVSP configuration to communicate with each other over the SAN in order to determine VSM connectivity state and manage failover behavior between the VSMs. This special purpose zone is not classified as a front-end or back-end zone since it does not involve any storage devices or hosts. In this type of zone, a VSM is not strictly classified as a target or initiator.
4 HP StorageWorks Management Infrastructure The HP StorageWorks Management Infrastructure is installed with the HP Command View SVSP client. HP Command View SVSP is the GUI that supports the HP EVA Cluster. The Management Infrastructure software provides storage-related security features and user interface capabilities.
Configuration interface – registry page quick tour The Registry page allows you to view registry entries.
Security interface – Management Group page quick tour The Management Group page allows you to view key characteristics of a Management Group, change authenticator states, and open the Move Machine wizard. 1. Management Group 2. Actions 3. Authenticating OS security domains 4.
Management Infrastructure concepts Discovery All machines with Management Infrastructure software that are on the same LAN can automatically discover and communicate with each other. To do this, the Management Infrastructure discovery component on each machine stores information about its web service API and other functions in a local Management Infrastructure registry.
Local service port, page 66 Available OS security domains, page 66 Management Group management service port, page 67 User interface integration (SPoG and trees) The Management Infrastructure user interface integration function allows multiple Management Infrastructure capable user interfaces to be displayed in a single browser-based interface.
Configuration settings and service startup When Management Infrastructure software is first installed on a machine, the default settings are applied and there is no Management Infrastructure configuration file. When you make and save the first change using the configuration interface, Management Infrastructure software creates a configuration file and writes the changes to the file. All subsequent configuration changes are written to the configuration file.
When browsing to a Management Infrastructure interface, if there is no trusted certificate authority in the Management Infrastructure environment to attest to the certificate, then connection to Management Group member machines is blocked. This condition can be resolved by installing the Management Group self-signed certificate in the browser as a trusted certificate authority. See “Management Group security certificate installation overview” on page 54.
Machines with Management Infrastructure software on a LAN The HP Management Infrastructure software on SVR01 and SVR07 was automatically installed as part of the installation of server-based HP Command View EVA. The HP Management Infrastructure software on STOR06, EVA02, and EVA05 was factory installed. As part of their installation, each machine would be a member of its own Management Group. Thus, there would initially be five Management Groups, as shown below.
Reorganized into fewer Management Groups Or, assume that you would like all of the machines to participate in single sign-on. You could make any four of the five machines members of another machine's Management Group, or you could create a new Management Group and make the five machines members of the new group, as shown below. Reorganized into one Management Group Management Groups are created when: • A Management Infrastructure capable application is initially installed on a server.
• In Management Groups that include multiple machines, configure more than one machine as an OS security domain authenticator. This practice prevents losing single sign-on functionality for the Management Group should an authenticator machine become unavailable. Management Group names Management Group naming guidelines: • Names must be unique in a given Management Infrastructure environment. • Names can only include alphabetical and numeric characters, underscores _ and dashes -.
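The character rule above can be checked before a group is created. A sketch follows; uniqueness must still be verified against the existing Management Groups in your environment:

```shell
# Sketch: validate a Management Group name against the stated rule
# (alphanumeric characters, underscores, and dashes only).
# Uniqueness within the environment must be checked separately.
valid_mg_name() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]+$'
}

valid_mg_name "Prod-Group_01" && echo "valid"
valid_mg_name "Bad Name!" || echo "invalid"
```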
When browsing to a Management Infrastructure interface, if there is no trusted certificate authority to attest to the certificate, then connection to the machine is blocked. This condition is indicated by an error message on the login dialog box. If this occurs, the certificate for the Management Group can be installed in the browser as a trusted certificate authority. After installing the certificate and refreshing the browser, the connection will no longer be blocked.
6. Click Install Certificate. The Certificate Import wizard opens.
a. Click Next.
b. Select Place all certificates in the following store and click Browse.
c. Select Trusted Root Certification Authorities.
d. Click Next, then Finish, then Yes. The certificate for the Management Group is installed in the browser.
7. Close the dialog boxes and refresh the browser. After the refresh, the connection error should no longer be displayed.
• When browsing from a server which is running Windows Server 2008, the server's IE Enhanced Security must be turned off. Procedure 1. Browse to a Management Group member machine. A Secure Connection Failed dialog box opens. 2. Click Or you can add an exception. a. Click Add Exception. The Add Security Exception page opens. b. Click Get Certificate. c. Click Confirm Security Exception. 3. The login dialog box opens and a connection error is displayed. 4.
Using the configuration interface Best practices • Avoid simultaneous configuration sessions for a given machine. Although Management Infrastructure software supports simultaneous browser sessions, communication errors can result when multiple sessions simultaneously attempt to configure the same machine. Example. Assume that two administrators simultaneously have sessions running to make changes for machine A.
1. Browse to the Management Infrastructure configuration interface for the machine and log in. The Configuration page opens. 2. Expand the General panel. 3. In the Web Service IP Address box, enter the desired IP address. 4. Click Save Changes. Wait until the change is saved. 5. After the change is saved, click Restart Service. The Management Infrastructure software will bind to the specified IP address.
• Viewing a Management Infrastructure interface requires a supported browser and Flash Player plug-in. Supported browsers and Flash Players are listed in the HP StorageWorks Enterprise Virtual Array Compatibility Reference. • HP recommends using qualified user names. See “Log in user names” on page 50. • The Management Infrastructure web server port number shown in the example is the default, 2374. If the port number has been changed, you must enter the new port.
Viewing configuration guidelines Management Infrastructure configuration guidelines appear in the: • Management Infrastructure configuration online help • Management Infrastructure administrator guide Also, the user interface includes proactive assistance for most fields. For example, in the Discovery Interval, you can delete the displayed value, type an x, then mouse-over the warning icon to see the guideline. Default value example Interactive assistance example Viewing the configuration for a machine 1.
General configuration settings Audit file max age This general setting establishes the number of calendar days that Management Infrastructure audit files are retained. The files are deleted the day after the max age is reached. • The default is 10 days. • If you change the setting, it must be in the range of 1 to 365 days. • Typical use. To increase how long audit files are retained. This setting is used mostly by HP support personnel.
Logging level This general setting specifies the level of detail that is recorded in a Management Infrastructure log file. • The default is 1 (least detail). • If you change the setting, it must be in the range of 1 to 4 (most detail). • Typical use. To change the amount of detail being recorded about the Management Infrastructure service. Increasing the detail is helpful when troubleshooting. This setting is used mostly by HP support personnel.
Discovery configuration settings Discovery interval This discovery setting establishes how often Management Infrastructure software performs discoveries in a Management Infrastructure network.
• The default is 600 seconds (10 minutes).
• If you change the setting, it must be in the range of 1 to 3600 seconds (1 hour).
• Typical use. To optimize performance relative to the size of a Management Infrastructure network.
• Considerations. A short interval increases network traffic.
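A proposed discovery interval can be range-checked against the documented 1 to 3600 second limits before it is applied. A minimal sketch:

```shell
# Sketch: check a proposed discovery interval against the documented
# range of 1 to 3600 seconds before applying it.
valid_interval() {
    case "$1" in (*[!0-9]*|'') return 1;; esac   # must be a whole number
    [ "$1" -ge 1 ] && [ "$1" -le 3600 ]
}

valid_interval 600 && echo "ok"        # the default, 10 minutes
valid_interval 7200 || echo "out of range"
```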
Non-local registry entry time-out This discovery setting establishes how long Management Infrastructure software waits before it removes non-local entries from its registry. The entries are removed if they are not updated during the time-out period.
• The default is 60 seconds (1 minute).
• If you change the setting, it must be in the range of 1 to 3600 seconds (1 hour).
• Typical use. Used in conjunction with a change in the Registry Table Update interval.
• Considerations.
Security configuration settings The following topics describe configuration settings for the Management Infrastructure security function. See also “Security integration” on page 48. Available OS security domains This security setting establishes an administrator-specified list of OS security domains that Management Infrastructure software can use for authentication. • By default, this setting is empty. • If you specify a security domain, it can be any legal domain name (up to 255 characters).
Management Group communication service port This security setting establishes the port for the Management Group communication web service. • The default is 0 (zero), which allows Management Infrastructure software to assign the port number. • If you specify a port number, it must be in the range of 1024 to 65535. • Typical use. To accommodate environments where corporate policy or network infrastructures (firewalls, proxies, etc.) require that specific ports be used.
interval causes Management Infrastructure software to check for trees less often, which decreases network traffic but also decreases interface responsiveness. SPoG port This security setting establishes the port for the Management Infrastructure SPoG web service. • The default is 0 (zero), which allows Management Infrastructure software to assign the port number. • If you specify a port number, it must be in the range of 1024 to 65535. • Typical use.
Tree integrator port This security setting establishes the port for the Management Infrastructure tree integrator web service. • The default is 0 (zero), which allows Management Infrastructure software to assign the port number. • If you specify a port number, it must be in the range of 1024 to 65535. • Typical use. To accommodate environments where corporate policy or network infrastructures (firewalls, proxies, etc.) require that specific ports be used.
4. Click Move Machine. The Move Machine wizard opens. 5. Click Next. 6. On the Select Destination Management Group page, select New Management Group, enter the name for the new group, then click Next. 7. Follow the instructions in the wizard pages, then click Finish to create the new group. Deleting a Management Group You cannot use the Move Machine wizard to directly delete a Management Group. Instead, you delete a group by removing all member machines from the group.
• If the machine that you choose is the only member of the existing Management Group, then the wizard will delete the existing group. 1. Identify the target machine to remove from a Management Group. 2. Browse to security interface on any member machine in the target machine's Management Group. 3. Select the machine. Management Infrastructure software will determine if the machine's membership can be changed. If yes, the Move Machine button is enabled. 4. Click Move Machine.
Navigation methods and key combinations are as follows:
Common navigation:
• Click (activate) a selected element: Spacebar
• Move forward through settings, choices, or buttons: Tab
• Move backward through settings, choices, or buttons: Shift+Tab
• Select a choice (radio button): Up and down arrows
Drop down list navigation:
• Close a drop down list: Ctrl + up arrow
• Move through a list and highlight an item: Up and down arrows
• Open a drop down list: Ctrl + down arrow
• Select a highlighted list item: Enter
• Message: Unable to communicate with security component on the local machine. Verify local Management Infrastructure security component is started and configured properly. Verify SSL certificates are loaded properly. Resolution: Verify that the local Management Infrastructure security component is started and configured properly. Verify that all SSL certificates are correctly loaded. • Message: Invalid OS security domain credentials for destination Management Group.
5 Monitoring the SVSP domain This chapter describes how to set up monitoring for an SVSP domain using administrative tools. Array workload concentration SVSP relies on the back-end arrays to handle the I/O workload. The volume management capabilities permit focusing the workload of multiple front-end virtual disks onto one back-end virtual disk. The DPMs can unintentionally concentrate front-end I/O workload from multiple front-end hosts and front-end paths down a single back-end path.
HP Command View SVSP GUI For best maintenance performance, open the HP Command View SVSP GUI on a daily basis and look for changes in object status. Object status should be Normal (for logical objects like volumes or pools) or Present (for physical devices like disk drives or HBAs). Figure 17 shows statuses in which the logical objects have a status of Normal and the physical devices have a status of Present. Alternatively, you can use a search to check for statuses that are not Normal or not Present.
HP Command View SVSP Event Log

You can review the HP Command View SVSP Event Log for this information:
• Critical events
• Errors
• Warnings
• Information

An event log can be viewed for objects by going to that object on the navigation pane. Select the object and, when the Properties window appears, select the Event Log tab.

Alerts automated notification

HP Command View SVSP can be set up to provide e-mail notification of events that occur within the domain.
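Where a large number of events accumulates, the severity categories above lend themselves to simple scripted triage. The sketch below is illustrative only: the record layout is an assumption, not the actual Command View SVSP export format.

```python
# Hypothetical sketch: filter exported event-log records by severity.
# The dict-based record format is an assumption for illustration; the
# actual Command View SVSP export format may differ.
SEVERITY_ORDER = {"Information": 0, "Warning": 1, "Error": 2, "Critical": 3}

def filter_events(events, minimum="Warning"):
    """Return events at or above the given severity, most severe first."""
    threshold = SEVERITY_ORDER[minimum]
    kept = [e for e in events if SEVERITY_ORDER[e["severity"]] >= threshold]
    return sorted(kept, key=lambda e: SEVERITY_ORDER[e["severity"]], reverse=True)

events = [
    {"severity": "Information", "text": "Pool expanded"},
    {"severity": "Critical", "text": "Back-end LU missing"},
    {"severity": "Warning", "text": "Pool 85% full"},
]
print(filter_events(events))
```

A filter like this keeps daily review focused on Warning and above while still preserving informational records for later audit.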
Setting up Perfmon Perfmon should be run from a remote server so as not to interfere with VSM performance. Install the application on the server, and then log in and launch the perfmon.msc file. Perfmon can then be set up to run automatically. These procedures are for Windows Server 2003, but the concepts are the same for Windows Server 2008, although the display is different. 1. Click Performance Logs and Alerts in the left sidebar. 2. Double-click Counter Logs. 3.
5. In the log settings window, click Add Counters. 6. In the drop-down box under select counters from computer, choose or enter the IP address of the VSM server that is to be monitored. Add any counters you want to monitor. 7. Click Close. 8. In the Interval field, select the time interval for data to be sampled. You can start with 15 seconds, but you may need to occasionally use 3 seconds for more precise data. 9. In the Run As: field, enter the user name and password needed to access the VSM.
Using Perfmon counters to log Perfmon has many counters available, but your data becomes harder to monitor if you have to sort through too much. To learn about a counter, select it, and then click the Explain button. Choose the category from the Performance object drop-down menu. Some counters with similar purposes (for example, Processor: % Processor Time and System: Processor Queue Length) are in different categories.
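Counter logs saved in CSV format can be post-processed outside of Perfmon. A minimal sketch, assuming a CSV export whose header row carries the counter names (match the column string to your own log's header):

```python
# Hypothetical sketch: scan a Perfmon counter log saved as CSV and flag
# samples where the processor queue length suggests VSM server overload.
# The column name and the threshold are assumptions for illustration.
import csv, io

def flag_overload(csv_text, column, threshold=2.0):
    """Return the sample values in `column` that exceed the threshold."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [float(r[column]) for r in rows if float(r[column]) > threshold]

sample = """time,System: Processor Queue Length
10:00:00,1.0
10:00:15,5.0
10:00:30,0.5
"""
print(flag_overload(sample, "System: Processor Queue Length"))  # [5.0]
```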
Troubleshooting Perfmon Table 5 describes potential Perfmon problems and possible corrective actions.

Table 5 Troubleshooting Perfmon

Problem: Perfmon log does not start or is not working
Corrective action:
• Check that the correct username and password are used for the VSM server.
• Check that the time period is correct. For example, you may have chosen 6 days instead of 6 hours.

Problem: Cannot change Perfmon settings
Corrective action: Ensure that Apply is selected.
monitored objects (Pages/sec, Avg. Disk Queue Length and % Processor time). They can be removed by using the Delete key or clicking the delete icon (X). Add the dedicated performance objects of the VSM agent. Click the '+' icon (or use Ctrl+I) to launch the Add counters interface. On the Performance object drop-down list, locate 'Sync. Mirror Group Performance' and 'Sync. Mirror Job Performance'.
The following table describes the available counters and what they measure.

Counter name | Explanation
Sync.Mirror Job Average Read Response Time (microSec) | Measures the average time it takes to complete a read for a mirror job
Sync.
Recommendations The VSM servers may be set up with Integrated Lights-Out (iLO) to allow for remote monitoring. This requires two additional IP addresses. Information on iLO can be found on HP.com or at http://h18013.www1.hp.com/products/servers/management/iloadv2/index.html?jumpid=reg_R1002_USEN. Monitoring DPM performance You can use the Diagnostics panel in the DPM Management GUI to monitor the performance of a DPM. Use a web browser to access the GUI and log in with a user name and password.
Monitoring license use To monitor license use, routinely check the License dialog box with the HP Command View SVSP GUI. Monitoring capacity utilization To monitor pool utilization, use the HP Command View SVSP GUI as described in the HP StorageWorks Command View SVSP User Guide. Monitoring event logs Use the HP Command View SVSP GUI to view and configure event logs. See the HP StorageWorks Command View SVSP User Guide.
• Emergency (2% free)
• Delete all PiTs (starting from the oldest)

Individual mechanisms

Percentage on an individual virtual disk

When free capacity drops below the threshold:
• Notification occurs every five minutes
• PiT creation is stopped
• User is not permitted to create new volumes
• Expansion thresholds for thin volumes, PiTs, and snapshots are reduced to 1 GB per expansion request.
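The escalation behavior above can be summarized as a simple threshold function. This is an illustrative sketch: only the 2% emergency level comes from the text, and the other cut-off is an assumed placeholder.

```python
# Illustrative sketch of the free-capacity escalation described above.
# Only the 2% emergency level is documented; the 10% warning level here
# is an assumption used purely for illustration.
def pool_action(free_pct, emergency_pct=2.0):
    """Map a pool's free-space percentage to a hypothetical escalation step."""
    if free_pct <= emergency_pct:
        return "emergency: delete PiTs starting from the oldest"
    if free_pct <= 10.0:  # assumed warning level, not from the guide
        return "stop PiT creation; block new volumes; 1 GB expansions"
    return "normal"

print(pool_action(1.5))
print(pool_action(50.0))
```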
6 Installing the VSM command line interface The Virtualization Services Manager (VSM) command line interface (CLI) provides scripting capabilities that you can use to automate creation and modification of VSM objects or entities. You may see references to the VSM CLI on some menu screens as SANAPI. The VSM CLI package is separate from the VSM software and the DPM image.
Solaris 9 and 10 operating systems

Install this package: VSMCLI.5.1.29a.0.pkg

Windows 2003/2008 operating systems

Run one of these executables:
• VSM CLI – 5.1.29.0.exe
• VSM CLI X86 – 5.1.29.0.exe
• VSM CLI IA64 – 5.1.29.0.exe

Install locations

The installation programs will install the CLI set of commands in the following locations:
• AIX: /usr/lpp/svmdd.
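Because the VSM CLI is intended for scripting, it is commonly driven from a wrapper. The sketch below shows one hedged approach; the command name `svm` and its arguments are placeholders, not documented CLI syntax, so substitute the actual command names from the CLI user guide.

```python
# Hypothetical wrapper for scripting VSM CLI (SANAPI) commands. The
# executable name "svm" is a placeholder, not documented syntax.
import subprocess

def run_vsm_cli(args, cli="svm"):
    """Run one VSM CLI command and return its stdout, raising on failure."""
    result = subprocess.run([cli, *args], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout

# Example with placeholder arguments (commented out because the real
# command names come from the VSM CLI user guide):
# run_vsm_cli(["create", "vdisk", "--name", "vd01", "--size", "100GB"])
```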
7 Removing devices from the domain This chapter provides checklists of the steps required to delete objects or devices from the domain. See the referenced material for the exact steps needed to perform the indicated action. Deleting or reusing capacity In general, the process of deleting virtual disks is the reverse of the process used to create and present those same virtual disks. 1.
5. Delete the pool and any associated stripe sets. Deleting back-end LUs 1. Follow the Deleting or reusing capacity procedure above to first identify all affected virtual disks. 2. Delete the PiTs and snapshots associated with those virtual disks. 3. Using the HP Command View SVSP GUI, unpresent the virtual disks from the servers. 4. Using the GUI, delete the virtual disks.
9. Remove all DPM-to-array and VSM server-to-array zone sets. 10. Turn off power to the array and detach it from the SAN. Deleting hosts From a VSM perspective, the only requirement for deleting a host is that the host be in an absent status. This status can be achieved by powering the host down. Once the host is deleted from the GUI, VSM automatically removes the permissions for that host on all the objects that it used.
8 Boot from SVSP devices This chapter outlines the process for booting from the SAN with the various operating systems supported by the SAN Virtualization Services Platform (SVSP). Please see the http://h18006.www1.hp.com/storage/networking/bootsan.html website for a link to detailed boot from SAN documentation, where application notes are available for each operating system. Boot from SAN with AIX Not currently supported as of the publication date for this document.
7. Continue installing the OS on the new root disk. If previous network settings are not being reused, configure the network settings when prompted during the OS install and setup. 8. If previous network settings are being reused, wait until the OS installation has been completed. Log in as root and use the settings recorded from the original /etc/rc.config.d/netconf file to configure the LAN interfaces for the newly installed OS. 9. Ping a known IP address to confirm network connectivity.
2. Map a LUN to the virtual machine as described in the chapter titled “Creating Virtual Machines.” Specifically, follow the instructions in the section titled “Mapping a SAN LUN.” NOTE: The ESX Server has two methods of presenting SAN storage to virtual machines: • With disk files, a virtual machine can use part of a VMFS-formatted virtual disk on a presented SAN LUN as its storage drive.
14. When a Windows cannot verify the digital signature for this file message appears on the Windows Boot Manager screen, press Enter, followed by F8. Choose Disable Driver Signature Enforcement. (To prevent repeating these actions, you can run the following command from a command prompt: Bcdedit.exe -set TESTSIGNING ON). 15. Verify that the host appears in the GUI host list. 16. Add the second host HBA to the original zone from step 5. The host should now see all required front-end ports of both DPMs. 17.
9 Microsoft Volume Shadow Copy Service The Volume Shadow Copy Service (VSS) captures and copies stable images for backup on running systems, particularly servers, without unduly degrading the performance and stability of the services they provide. The VSS solution is designed to enable developers to create services (writers) that can be effectively backed up by any vendor's backup application using VSS (requesters).
• Unified interface to VSS. VSS abstracts the shadow copy mechanisms within a common interface while enabling a hardware vendor to add and manage the unique features of its own providers. Any backup application (requester) and any writer should be able to run on any disk storage system that supports the VSS interface. • Multivolume backup. VSS supports shadow copy sets, which are collections of shadow copies, across multiple types of disk volumes from multiple vendors.
1. Run the SVSP VSS installation file. You can find the SVSP VSS installation file on the VSM installation CD, or you can download the file from the web. From the VSM installation CD, click Browse to SVSP VSS Provider on the main menu. For your type of installation (ia64, x64, or x86), select the SVSPVssProviderSetup file. The Welcome screen appears. 2. Click Next. The Select Installation Folder window appears.
3. Click Next. The Confirm Installation window appears. 4. If you want to make changes to your installation, click Back until you arrive at the window where you can make the change. If you are satisfied with your installation choices, click Next to start the installation. After the SVSP VSS hardware provider is installed, the Installation Complete window appears. 5. Click Close to exit the installation wizard.
6. Make sure that the SVSP VSS hardware provider is recognized by VSS by opening a DOS command prompt window (click Start > Run, and type cmd), typing vssadmin list providers, and pressing Enter. The information returned by the vssadmin list providers command is similar to the information shown in Figure 18. Figure 18 SVSP VSS hardware provider in a DOS window. Make sure that the SVSP VSS hardware provider appears in the list. 7.
8. To add a user that the SVSP VSS hardware provider will use for interfacing with the VSMs, type SaHwConfig AddUser and press Enter. Adding a user succeeds if these conditions are met: • The server can access a SAN CLI virtual disk from the domain with which you are trying to connect. • The user exists in that domain.
7. In the DOS command prompt window on the host server, type vshadow.exe -p m: and press Enter. This command creates a persistent shadow copy on drive m:. The drive label is the letter that you gave to the new drive in step 6. The shadow copy is a read-only point-in-time replica of the original volume contents. A persistent shadow copy remains in the system until you, or the backup application, initiates an explicit command to delete the shadow copy.
Figure 20 Results of the vshadow.exe -p m: command in the DOS command prompt window. Figure 21 shows an example of the hierarchical snapshot structure that is created on the VSM. Both the PiT name and the snapshot name are included. Figure 21 Hierarchical snapshot structure.
8. In the DOS command prompt window on the server, mount the view by typing vshadow.exe -el={Snapshot ID},k:\ and pressing Enter. The vshadow.exe -el={Snapshot ID},k:\ command mounts the view to the host server with the specified mount point. In this example the mount point was selected to be the drive letter k:, but the mount point can be any one of these:
• Drive letter
• Directory share
• Network share
9. Open the newly mounted snapshot.
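When scripting the snapshot-and-mount sequence above, the snapshot ID printed by vshadow.exe can be captured programmatically instead of copied by hand. The output format in this sketch is an assumption based on typical vshadow text; verify it against your version's actual output.

```python
# Hypothetical sketch: pull the snapshot ID out of captured vshadow.exe
# output so the mount command can be built automatically. The "SNAPSHOT
# ID = {...}" line format is an assumption, not guaranteed output.
import re

def extract_snapshot_id(vshadow_output):
    """Return the first GUID-style snapshot ID found, or None."""
    m = re.search(r"SNAPSHOT ID = (\{[0-9a-fA-F-]+\})", vshadow_output)
    return m.group(1) if m else None

out = "* SNAPSHOT ID = {12345678-1234-1234-1234-1234567890ab} ..."
sid = extract_snapshot_id(out)
print(f"vshadow.exe -el={sid},k:\\")
```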
• Backup server
• Backup client or clients
• Backup media servers
The backup server runs the backup software and manages the backup process by communicating with the backup agents and the media servers. Backup clients negotiate with the applications and prepare the data for backup. The media servers take the data that the backup agent prepared and write the data to tapes or disks.
Figure 23 Example of a disk drive acting as a media server. Figure 24 shows the attributes of the backup policy. Note that this policy is configured to perform snapshot backups.
Figure 24 Backup policy attributes. Figure 25 shows that VSS was selected as the snapshot method for use. VSS was selected through the Advanced Snapshot Options... button shown in Figure 24. Figure 25 VSS selected as the snapshot method. VSS deployment with VSM virtual disk groups To reference multiple VSM virtual disks as a single entity, you must place the VSM virtual disks in a virtual disk group (VDG).
on all VDG members. VDGs are often used to encapsulate data files and log files of the same database into one entity. From a server perspective, the data files and the log files reside on two separate drives. From a backup and recovery perspective, the data files and the log files are two components of a single entity. A backup snapshot must be synchronously captured on both the data drive and the log drive.
10 Site failover recovery with asynchronous mirrors The asynchronous mirror decision table When using an asynchronous mirror group pair, some actions and properties require that you specify either the source or destination. See the following tables: creating and deleting, adding and deleting virtual disks, editing (setting) properties, and controlling. Creating and deleting Task Create an asynchronous mirror group pair. Delete an asynchronous mirror group or pair.
Editing (setting) properties

Task | Async mirror group to specify | Result on source async mirror group | Result on destination async mirror group
Edit (general) an asynchronous mirror group. | Either | Properties are changed. | Properties are changed.
Auto suspend on links down mode for an asynchronous mirror group pair. | Source | Auto suspend on links down is disabled or enabled. | Auto suspend on links down is disabled or enabled.
Comment for an asynchronous mirror group.
Task | Async mirror group to specify | Result on source async mirror group | Result on destination async mirror group
Resume remote replication in an asynchronous mirror group pair. | Source | Remote replication from the source is allowed. If applicable, begins log merging or full copy from the source. | Remote replication to the destination is allowed.
Revert an asynchronous mirror group pair to its home configuration.
Suspend remote replication in an asynchronous mirror group pair.
11. Each SVSP domain now sees the VSM servers of the other SVSP domain with a degraded status because the FC HBAs that were previously used to connect the SVSP domains are no longer used. Delete the FC HBAs that were previously used to connect the SVSP domains from the HBA lists on both SVSP domains. You can access the HBA list from the HBA node in the tree.
7. Wait until the PiT you created is copied to the destination. 8. Suspend the group. 9. Split the group. 10. Log in to the DR site's SVSP domain. 11. Assign the host permission to use the mirrored virtual disk. 12. Merge the mirrored virtual disk without enabling rollback. Specify the name of the original virtual disk on the main site as the destination. VSM creates an async mirror group, mirroring from the DR site to the main site. To fail back from the disaster recovery site to the main site: 1.
1. Connect to the main site's SVSP domain and prepare the virtual disk for a merge, as follows: a. Verify that the virtual disk exists. b. Detach the task. c. Remove host presentations from the virtual disk. d. Delete any snapshots on the virtual disk. The virtual disk is now ready to become the destination virtual disk of a new group created by merging the current production virtual disk on the DR site. 2.
4. Assign the host permission to use the recovery virtual disks with HP Command View SVSP. a. Select the specific DR element that you want to recover, and click Vdisks > Presentation > Present to assign permission to a host to use the DR element. The host will then use the most recent PiT available on that DR element. There is a chance, however, that the application will not be able to use the PiT as it is.
3. Perform a controlled failback of each virtual disk to the new main site, as follows: a. Plan a downtime window for the application, based on the organization’s needs and any data that was not yet mirrored. b. At the scheduled time, shut down the application, which is currently using a virtual disk on the DR site. c. Unmount the virtual disk on the host. d. Connect to the DR SVSP domain. e. Remove the host permission from the virtual disk.
11 Configuration best practices SAN topology The SAN configuration for the EVA Cluster contains four fabrics while the standard SVSP configuration contains only two fabrics. This allows the EVA Cluster to be directly plugged into the customer SAN, and enables the back-end components to be preconfigured in the factory. This simplification will enable many debugging, troubleshooting, and performance features in the future.
SAN switches All switches on a fabric must be from the same vendor. It is permissible for one fabric to contain switches from one vendor and the other fabric to contain switches from a different vendor. Switches are not supported in vendor neutral roles (or interoperability mode). High bandwidth devices (such as tape backup servers and storage arrays) often use the same SAN switches as the EVA Cluster components.
Setup volumes might be spread across different arrays for additional redundancy, but remember that all writes are mirrored, and therefore the slowest performing volume will determine when the write is acknowledged. HP does not recommend placing all three volumes on a single array; if that is all that is available, create only two setup volumes and use different pools if the array has that option.
from the EVA to the DPMs. In that case, 16 back-end volumes would be the recommended minimum number of volumes for the pool, while 32 back-end volumes would be even better. • Larger numbers of volumes for a concatenated pool have the benefit described above of providing more opportunities to distribute the workload across the multiple array ports; however, there are trade-offs involved. A maximum of 1024 back-end volumes is supported per domain.
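The sizing guidance above amounts to a volume-per-path calculation. A minimal illustration (the per-path multiplier is an assumption used to reproduce the 16/32 example, not a documented formula):

```python
# Illustrative arithmetic for the pool-sizing guidance above: at least
# one back-end volume per array path, and roughly twice that for better
# workload distribution. The multiplier of 2 is an assumption chosen to
# match the 16-minimum / 32-preferred example in the text.
def recommended_volumes(array_paths, volumes_per_path=2):
    """Return (minimum, preferred) back-end volume counts for a pool."""
    minimum = array_paths                      # one volume per path
    preferred = array_paths * volumes_per_path # more spread across ports
    return minimum, preferred

print(recommended_volumes(16))  # (16, 32)
```

Whatever multiplier is used, the total across all pools must stay within the 1024 back-end volumes supported per domain.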
Storage pool size considerations When comparing small pools to large pools, the large pools have an advantage. Because there are fewer of them, they are easier to manage, and because free space in the same pool is used for snapshots, asynchronous mirroring, and thin provisioning, there is less likelihood of stranded capacity. Small pools, however, may allow the administrator to better partition the storage for various user groups, or to have a pool per back-end array to ease troubleshooting.
12 Backup and restore This chapter describes how to back up and restore the VSM configuration database and the DPM configuration information. Backing up and restoring the VSM configuration The active VSM runs an automatic backup of the setup configuration at predefined intervals and places it in the C:\Program Files\Hewlett-Packard\SVSP\Core\Backup directory. The default backup interval is every 60 minutes. You can define when the backup occurs through HP Command View SVSP.
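Because the active VSM writes a backup on a fixed interval, a simple freshness check on the backup directory can catch a stalled backup job. A hypothetical sketch: the directory path follows the text, but the check itself is not an HP-provided tool.

```python
# Hypothetical sketch: verify the newest VSM setup backup is not older
# than the expected interval (60 minutes by default, per the text).
import os, time

def newest_backup_age_minutes(backup_dir):
    """Return the age in minutes of the newest file, or None if empty."""
    files = [os.path.join(backup_dir, f) for f in os.listdir(backup_dir)]
    if not files:
        return None
    newest = max(os.path.getmtime(f) for f in files)
    return (time.time() - newest) / 60.0

# Example (run on the VSM server itself):
# age = newest_backup_age_minutes(
#     r"C:\Program Files\Hewlett-Packard\SVSP\Core\Backup")
# A result well above 60 minutes suggests the automatic backup stalled.
```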
CAUTION: Possible loss of data access—You can safely restore the VSM setup database from backup only if the system does not have PiTs. If PiTs exist, whether created by users or by applications, the metadata for the PiTs is in the setup backup. The metadata in the setup backup might be invalid and can result in the loss of data access if restored. You are given the option to restore the setup database when VSM is started in safe mode.
1. Log in to the DPM as admin. 2. To upload the saved configuration file that you want to use, type this command and press Enter: load config <filename> where <filename> is the name of the configuration file that you saved using the save config command. The configuration file is retrieved from /common/images/configs. 3. Reboot the DPM. 4. Make sure that the configuration from the backup configuration file is the configuration that you want to use.
13 Basic maintenance and troubleshooting This chapter describes how to solve problems you might encounter after installing and configuring the HP StorageWorks SAN Virtualization Services Platform. Diagnostic tools HP Command View EVA and the Array Configuration Utility (ACU) for the MSA will report hardware and configuration problems after storage has been presented to the HP StorageWorks SAN Virtualization Services Platform domain.
Problem: The DPM powers on but does not boot
Corrective action: Check the DPM status LEDs. An amber LED may show as solid or blinking.
• A solid amber indicates the DPM failed to complete the boot up process.
• A blinking amber indicates the DPM detected a chassis failure or impending chassis failure (such as a fan or power supply).
In either case, contact HP Services.

Problem: The VSM does not become active on startup.
Corrective action:
• Check if the second VSM is active.
• Check the status of VSM service in the VSM monitor.
Problem: Cannot create a new virtual disk.
Corrective action:
• Verify that there is adequate free space in the pool.
• Verify that the pool is in a normal status. Missing EVA or MSA virtual disks can cause a volume creation failure.
• Verify the presented capacity is available.
• Verify that the license capacity has not been exceeded.
• Check the Disabled Operations tab on the pool for a potential cause. Check the Disabled Operations tab on the virtual disk.
Presentation problems

Table 9 Presentation problems

Problem: Back-end LUNs cannot be seen, even after a rescan using the GUI.
Corrective action: Verify that the correct preferred path is configured with HP Command View EVA or the ACU for each LUN that is exposed. If so, reboot the VSM server.
Administrative problems

Table 10 Administrative problems

Problem: Cannot remember the administrator account password.
Corrective action: Report the problem to HP support. The setup database will have to be modified to reset the password to a known value.

Problem: Cannot remember a non-administrative account password.
Corrective action: Log in with the administrator user name and password and reset the user password to a known value.
VSM server LUN masking To verify a proper LUN masking configuration on the VSM server, open the VSM management interface and go to the back-end LUs. Make sure that VSM can see all the back-end LUs provisioned to the VSM HBAs. For each back-end LU, verify that the number of paths is correct. To check the settings of the second VSM, failover the passive VSM server, and then repeat the process.
14 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
1. Add the VSM SaSnap icon to the desktop. a. Click the Windows Start button. b. Highlight All Programs. c. Highlight SVSP. d. Right-click on SVSP SaSnap, highlight Send To, and select Desktop. A VSM SaSnap icon appears on the desktop. 2. Launch VSM SaSnap by clicking the icon created in the previous step. 3. If using HP Continuous Access SVSP: a. Right-click in the blank field under the Name section, and select Add New. Enter the IP address, and an appropriate user name and password. b.
c. Select Full and click OK. d. Repeat these steps for all other VSMs at the second site. NOTE: If you are not using HP Continuous Access SVSP, you still have to enter the administrator user name and password for the second VSM at the local site. The user name must have administrative privileges for the VSM. 4. Ensure that the check box is selected next to the names of the Local SVSP. 5. Check the box next to the DPMSnap under Local SVSP. 6. Click the + button next to parameters and select full.
8. Click the ... button to set the output path. NOTE: The SaSnap process can cause the local drive to run out of free space over time as files accumulate. Consider putting SaSnap files onto another partition, such as the backup partition. The status window shows the log collection progress. When the process is complete, the Abort button changes to a Start button.
9. Upload the collected log files to HP support. a. Open Windows Explorer. b. Navigate to the output directory selected during the VSM SaSnap process. c. Contact your local support center, and get the appropriate FTP site to use for uploading the SaSnap files. d. Upload the files to the site. E-mail the pointer to HP Support and send a copy of the message to SVSPHealthCheck@hp.com.
Related information The following documents and websites provide related information:
• HP StorageWorks Command View EVA User Guide
• HP StorageWorks Command View EVA Release Notes
• HP StorageWorks Command View SVSP User Guide
• HP StorageWorks SAN Virtualization Services Platform Data Path Module User Guide
• HP StorageWorks SAN Virtualization Services Platform Manager Command Line Interface User Guide
• HP StorageWorks SAN Virtualization Services Platform Best Practices Guide
• HP StorageWorks SAN Vi
Convention | Element
Monospace, italic text | Code variables; command variables
Monospace, bold text | Emphasized monospace text

WARNING! Indicates that failure to follow directions could result in bodily harm or death.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
HP product documentation survey Are you the person who installs, maintains, or uses this HP storage product? If so, we would like to know more about your experience using the product documentation. If not, please pass this notice to the person who is responsible for these activities. Our goal is to provide you with documentation that makes our storage hardware and software products easy to install, operate, and maintain.
A Using VSM with firewalls To protect your system against unauthorized access from outside your network, enable Windows Firewall. However, a number of ports must be opened to allow SVSP to communicate properly. The HP Command View SVSP GUI, the SVSP product, and the VMAs use the ports listed below.
2. On the General tab, verify that the firewall is On (enabled). 3. Click the Exceptions tab. 4. Select File and Printer Sharing and click the Edit button. 5. Check the box to enable UDP 137. While UDP 137 is highlighted, click the Change scope button. 6. Select Any computer (including those on the Internet) and click OK. 7. Ensure the check box to the left of File and Printer Sharing is selected. 8. Select the Remote Desktop check box. 9. Click Add Port... The Add a Port window appears.
10. Enter a name and port number for the entries below. NOTE: The VSM Status Monitor is already displayed by default. 11. Click OK.
Windows 2008 The HP Command View SVSP GUI, the SVSP product, and the VMAs use the ports listed below.
2. Select Go to Windows Firewall. Ensure that Windows Firewall is turned on. 3. Select Windows Firewall Properties. The Windows Firewall with Advanced Security screen appears.
4. On each of the Domain Profile, Private Profile, and Public Profile tabs, select Settings > Customize, and ensure that under Firewall settings, Display a Notification is set to Yes (default). 5. Set Inbound Rules from the Windows Firewall with Advanced Security page to create the new rules that open the ports.
6. Select the Advanced tab and ensure that All Profiles is selected. 7. Repeat the above steps to open the inbound ports, then open the Outbound Rules under the Windows Firewall with Advanced Security screen and open the same ports with the same settings. In addition, the SVSP must be added to the Exceptions tab in the Windows Firewall Settings. 1. Go to Control Panel > Windows Firewall Settings. 2. Click on the Exceptions tab. 3. Select Add program. 4. Add the SVSP Monitor and SaSnap.
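Once the firewall rules are in place, reachability of the opened ports can be spot-checked from a management station. A hedged sketch; the ports in the commented example are placeholders, so use the ports enumerated for your Windows version above.

```python
# Hypothetical sketch: probe whether a TCP port answers on a VSM server
# after the firewall rules are configured. The example ports below are
# placeholders -- substitute the SVSP ports listed for your Windows version.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example placeholder check:
# for p in (3389,):  # e.g. Remote Desktop; replace with your SVSP ports
#     print(p, port_open("vsm-server.example.com", p))
```

Note that UDP services (such as port 137 for File and Printer Sharing) cannot be verified with a TCP connect test; this sketch covers TCP ports only.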
B Adding arrays to the EVA Cluster Adding a new array The following guidelines must be observed when adding arrays to the domain: • An array must be attached to both fabrics. • Back-end zones are created as described in Chapter 3 on page 33. Two sets are needed and are defined as follows: • Array to DPMs • Array to VSM servers • If adding the array also involves using a new DPM quad, add the new DPM quad to the VSM server zones and verify that the DPM ports are licensed.
Adding EVAs When using HP Command View EVA to create back-end LUNs on the EVA that will be presented to the SAN Virtualization Services Platform domain (both DPM and VSM servers), or when presenting existing EVA virtual disks to the domain (for data import) the following presentation rules apply: • Each DPM quad and each VSM server must be defined as hosts and include all ports. • Create a number of LUNs with at least one LUN per path to the controller.
2. Click the Manage option, click Create a vdisk from the drop-down menu, and then select Automatic Virtual Disk Creation (Policy-based). The following screen appears. 3. Enter a virtual disk name, tolerance level, size of virtual disk, and number of volumes. Click Create virtual disk. The following screen is displayed.
4. Click Create New Virtual Disk and a processing message appears, as shown on the following screen. 5. After the virtual disk and volumes are created successfully, the volumes can be discovered as back-end LUs with HP Command View SVSP as shown below. 6. Create a storage pool using the MSA back-end LUs. Use this pool to create SVSP virtual disks based on your requirements. Adding HP XP arrays Define each DPM and VSM as a host. The host mode should be set to Windows, host mode 0C.
Adding non-HP branded arrays The general process is: 1. Create a LUN using the array's management software with properties similar to those of an EVA LUN, for example, failover with autofailback. Create a number of LUNs with at least one LUN per path to the controller. 2. Present the LUN to at least one and not more than two quads per DPM and both VSM servers of the SVSP domain, consistent with the array-to-DPM zoning.
C Deploying VMware ESX Server with SVSP For current information regarding VMware and SVSP, see the HP StorageWorks SAN Virtualization Services Platform release notes. To ensure proper deployment, the following sections must be followed in order. HP recommends that you test this deployment in a test environment before using it in a production environment. The ESXi 4.0 Configuration Guide is available at http://www.vmware.com/pdf/vsphere4/r40/vsp_40_esxi_server_config.pdf.
Deployment steps Before actually configuring the environment it is very important to carefully plan the environment and the deployment steps after taking all the requirements into consideration. The deployment steps include configuring of all the storage components that provide storage services for the VMware environment: • Fibre Channel zoning—Configure the appropriate SAN zoning. • Storage systems—Configure the LUNs and LUN masking.
5. Assign permissions to the other servers in the cluster. There is one complication on imported VMware disks: If the LUN is a Raw Device Mapped (RDM) LUN, you must remove and re-map the imported RDM LUN to the virtual machine (VM) configuration. This is done on a VM-by-VM basis. 1. Before importing the LUN, check each VM for RDM LUNs and record the back-end LU number and the VM LU number. 2. Shut down the VM and remove the RDM LUN mappings. 3. Import the LUNs. 4. Re-create the RDM LUN mappings.
3. Follow the HP StorageWorks Command View SVSP User Guide for instructions on how to create a virtual disk from that storage pool. 4. Follow the HP StorageWorks Command View SVSP User Guide for instructions on how to configure a UDH for the VMware server (choose VMware for the OS type). 5. Configure the SCSI personality (Hosts > Personalities > Show).
1. Using the VMware VI client GUI, choose the ESX server, select the Configuration tab, and then click the Advanced Settings link. In the left menu, choose Disk.
• Disk.UseDeviceReset—Make sure this setting is set to 0. This setting forces VMware not to send a target reset to the DPM port when initiating a failover, allowing the failover to be done on a more granular, per-LUN basis (see Disk.UseLunReset below).
• Disk.UseLunReset—Make sure this setting is set to 1.
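On classic ESX hosts with a service console, these advanced settings can typically also be applied from the command line with esxcfg-advcfg. The sketch below only assembles the command strings; the command name and option paths are assumptions based on the classic ESX console and should be verified against your ESX release before use.

```python
# Assemble the esxcfg-advcfg invocations for the two failover settings
# described above. Command name and option paths are assumptions; verify
# them against your ESX version before running anything on a host.

FAILOVER_SETTINGS = {
    "Disk/UseDeviceReset": 0,  # do not send a target reset on failover
    "Disk/UseLunReset": 1,     # fail over on a per-LUN basis instead
}

def advcfg_commands(settings):
    """Return one 'esxcfg-advcfg -s <value> /<option>' line per setting."""
    return [f"esxcfg-advcfg -s {value} /{option}"
            for option, value in sorted(settings.items())]
```

Keeping the two settings in one table makes it easy to audit every ESX server in the cluster against the same required values.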
4. In the Storage Adapters window, choose the QLA/LP HBA and then select Rescan. Make sure Scan for New Storage Devices and Scan for New VMFS Volumes are checked in the Rescan window. In the Details window you should see targets, and paths within each target, for every VSM virtual disk.
5. Using the VMware VI client GUI, choose the ESX server, select the Configuration tab, and then click Storage.
6. Select Add Storage and follow the VMware wizard to create a datastore.
NOTE: At this time, the only supported multipath policy is Most Recently Used (default).

VMware storage administration best practices

Rescan SAN operations

HP recommends that whenever a change is made to the front-side zone, a "Rescan SAN" operation is performed on all ESX servers. This is particularly important after recovery of a path failure or when a DPM is replaced. If "Rescan SAN" is not performed, the ESX server may not know about new available paths and will operate in single-path mode.
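The best practice above is easy to miss on one host when the cluster is large. A minimal sketch of a per-host rescan plan follows; the host names and the esxcfg-rescan/vmhba adapter names are illustrative assumptions, not values from this guide.

```python
# Sketch of the "Rescan SAN" best practice: after a front-side zoning
# change, plan a rescan on every adapter of every ESX server so each
# host discovers the new paths. Host and adapter names are assumptions.

def rescan_plan(esx_hosts, adapters=("vmhba1", "vmhba2")):
    """Return (host, command) pairs covering every adapter on every host."""
    return [(host, f"esxcfg-rescan {hba}")
            for host in esx_hosts
            for hba in adapters]
```

Walking the full plan, rather than rescanning only the host that reported the path failure, is what prevents a server from silently operating in single-path mode.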
• Install the HP SVSP VSS hardware provider within the Windows OS running on the virtual machine. For more information on the SVSP VSS hardware provider, see "Installing the SVSP VSS hardware provider on the host server" on page 98.

Creating a synchronized snapshot of the virtual machine

• Verify the VMware VSS hardware provider is installed as part of VMware Tools. If you upgrade from update 1 to update 2, a manual installation is needed.
4. While the server is rebooting, at the BIOS level, verify that the SVSP virtual disk was recognized by the HBA BIOS.
5. In the ESX install wizard, verify that the installation will be done to the VSM virtual disk. You can recognize the VSM virtual disks because they have HP in their names.

VMware issues

VMware and large I/Os

When setting up a VMware server, change the default Disk.DiskMaxIOSize to 1 MB or less. This can be done using the following steps:
1.
2. Select the Disk configuration option, scroll down to the Disk.DiskMaxIOSize option, and change the value in the field to 1024.
3. Apply the changes and reboot the ESX server.

Using Windows Guests on VMware with VSS

The DPM VSS hardware provider installed on a Windows virtual machine properly passes the request to create a VSS snapshot to the VSM, and the VSM responds properly to this request by creating the PiT and the snapshot and assigning it back.
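The Disk.DiskMaxIOSize guidance in "VMware and large I/Os" above boils down to one unit conversion: the setting is expressed in KB, so a 1 MB ceiling means a value of at most 1024. A small illustrative helper (not a VMware API) makes that explicit:

```python
# Illustrative helper for the Disk.DiskMaxIOSize guidance above:
# the setting is in KB, and the recommended ceiling of 1 MB is 1024 KB.

MAX_IO_KB = 1024  # 1 MB expressed in KB, the recommended ceiling

def clamp_disk_max_io(requested_kb):
    """Clamp a requested Disk.DiskMaxIOSize value to the 1 MB ceiling."""
    if requested_kb < 1:
        raise ValueError("Disk.DiskMaxIOSize must be positive")
    return min(requested_kb, MAX_IO_KB)
```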
D Configuration worksheets Use these worksheets to document the names, IP addresses, and other important information for your SAN Virtualization Services Platform configuration.
E Specifications This appendix contains the specifications for the HP StorageWorks SAN Virtualization Services Platform Data Path Module (DPM) and the HP StorageWorks SAN Virtualization Services Platform Virtualization Services Manager (VSM) Server (v1).
Device management

Feature              Description
Access               Serial port, SSH, telnet, web browser, SOAP/XML, SNMP interfaces
Interfaces           • 10/100/1000 Ethernet RJ-45 for management (optional)
                     • 1 serial DB-9 RS232 for configuration and basic management
Supported protocols  ssh, telnet, ftp, http, SNMP, NTP, and net syslog

Mechanical

Characteristic       Value
Dimensions           17 in. (W) x 1.75 in. (H) x 26 in. (D)
Enclosure            1U rack-mountable
Weight               10.
Regulatory

The Data Path Module has the following certifications:
• UL
• CE
• cUL
• FCC
• TUV

VSM server

Environmental

Specification                       Value
Temperature range1                  Operating: 10°C to 35°C (50°F to 95°F)
                                    Shipping: –40°C to 70°C (–40°F to 158°F)
Maximum wet bulb temperature        28°C (82.4°F)
Relative humidity (noncondensing)2  Operating: 10% to 90%
                                    Non-operating: 5% to 95%

1 All temperature ratings shown are for sea level. An altitude derating of 1°C per 300 m (1.
Specification               Value
Input requirement:
Rated input voltage         100 VAC to 240 VAC
Rated input frequency       50 Hz to 60 Hz
Rated input current         7.1 A (at 120 VAC); 3.5 A (at 240 VAC)
Rated input power           852 W
BTUs per hour               2910 (at 120 VAC); 2870 (at 240 VAC)
Power supply output:
Rated steady-state power    700 W

Characteristics

Component                   Characteristic
Processor                   Dual-Core Intel Xeon 5130 2.
F Regulatory compliance notices This section contains regulatory notices for the HP ______________________. Regulatory compliance identification numbers For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information.
of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at personal expense. Class B equipment This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation.
Class B equipment This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations. Cet appareil numérique de la class B respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Japanese power cord statement

Korean notices

Class A equipment

Class B equipment

Taiwanese notices

BSMI Class A notice
Taiwan battery recycle statement

Turkish recycling notice

Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur
Laser compliance notices English laser notice This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation. WARNING! Use of controls or adjustments or performance of procedures other than those specified herein or in the laser product's installation guide may result in hazardous radiation exposure.
French laser notice

German laser notice

Italian laser notice
Japanese laser notice

Spanish laser notice

Recycling notices

English recycling notice

Disposal of waste equipment by users in private households in the European Union

This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment.
Bulgarian recycling notice Този символ върху продукта или опаковката му показва, че продуктът не трябва да се изхвърля заедно с другите битови отпадъци. Вместо това, трябва да предпазите човешкото здраве и околната среда, като предадете отпадъчното оборудване в предназначен за събирането му пункт за рециклиране на неизползваемо електрическо и електронно борудване. За допълнителна информация се свържете с фирмата по чистота, чиито услуги използвате.
Estonian recycling notice Äravisatavate seadmete likvideerimine Euroopa Liidu eramajapidamistes See märk näitab, et seadet ei tohi visata olmeprügi hulka. Inimeste tervise ja keskkonna säästmise nimel tuleb äravisatav toode tuua elektriliste ja elektrooniliste seadmete käitlemisega egelevasse kogumispunkti. Küsimuste korral pöörduge kohaliku prügikäitlusettevõtte poole.
Greek recycling notice

Αυτό το σύμβολο σημαίνει ότι δεν πρέπει να απορρίψετε το προϊόν με τα λοιπά οικιακά απορρίμματα. Αντίθετα, πρέπει να προστατέψετε την ανθρώπινη υγεία και το περιβάλλον παραδίδοντας τον άχρηστο εξοπλισμό σας σε εξουσιοδοτημένο σημείο συλλογής για την ανακύκλωση άχρηστου ηλεκτρικού και ηλεκτρονικού εξοπλισμού. Για περισσότερες πληροφορίες, επικοινωνήστε με την υπηρεσία απόρριψης απορριμμάτων της περιοχής σας.
Lithuanian recycling notice

Nolietotu iekārtu iznīcināšanas noteikumi lietotājiem Eiropas Savienības privātajās mājsaimniecībās

Šis simbols norāda, ka ierīci nedrīkst utilizēt kopā ar citiem mājsaimniecības atkritumiem. Jums jārūpējas par cilvēku veselības un vides aizsardzību, nododot lietoto aprīkojumu otrreizējai pārstrādei īpašā lietotu elektrisko un elektronisko ierīču savākšanas punktā. Lai iegūtu plašāku informāciju, lūdzu, sazinieties ar savu mājsaimniecības atkritumu likvidēšanas dienestu.
Slovak recycling notice

Likvidácia vyradených zariadení používateľmi v domácnostiach v Európskej únii

Tento symbol znamená, že tento produkt sa nemá likvidovať s ostatným domovým odpadom. Namiesto toho by ste mali chrániť ľudské zdravie a životné prostredie odovzdaním odpadového zariadenia na zbernom mieste, ktoré je určené na recykláciu odpadových elektrických a elektronických zariadení. Ďalšie informácie získate od spoločnosti zaoberajúcej sa likvidáciou domového odpadu.
Bulgarian recycling notice

Czech recycling notice

Danish recycling notice
Dutch recycling notice

Estonian recycling notice

Finnish recycling notice
French recycling notice

German recycling notice

Greek recycling notice
Hungarian recycling notice

Italian recycling notice

Latvian recycling notice
Lithuanian recycling notice

Polish recycling notice

Portuguese recycling notice
Romanian recycling notice

Slovak recycling notice

Spanish recycling notice
Swedish recycling notice

Battery replacement notices

Dutch battery notice
French battery notice

German battery notice
Italian battery notice

Japanese battery notice
Spanish battery notice
Glossary

This glossary defines acronyms and terms used with the SVSP solution.

access path: A specific series of physical connections through which a device is recognized by another device.
active boot set: The boot set used to supply system software in a running system. Applies to the DPM. See also boot set.
active path: A path that is currently available for use. See also passive path and in use path.
Business Copy SVSP: An HP StorageWorks product that works with SAN storage systems to provide local replication capabilities within the SVSP domain, providing local point-in-time (PiT) copies of data, using snapshots of data, based on changes to virtual disks.
CLI: Command line interface. The Data Path Module provides a CLI through the local administrative console (serial port console), telnet, or SSH.
front-side path: A path between the host (host bus adapter) and the Data Path Module.
Group: In VSM, a virtual container that defines one or more elements for a data moving task. See also VDG.
HBA: See host bus adapter.
host: In VSM, every server that uses VSM virtual disks. Servers that run as VSM servers are also considered hosts.
host bus adapter: A device that provides input/output (I/O) processing and physical connectivity between a server and a storage system.
migration: A VSM service that migrates virtual disks from one storage pool to another while the host application remains online.
mirror: A VSM service that mirrors virtual disks synchronously and asynchronously. See also asynchronous mirroring and synchronous mirroring.
mirroring: The creation and continuous updating of one or more redundant copies of data, usually for the sake of fault or disaster recovery.
OpenVMS Unit ID: Abbreviated as OUID.
SAN: Storage Area Network. A network specifically dedicated to the task of transporting data between storage systems and servers. SANs are traditionally connected over FC networks but have also been built using iSCSI technology.
secondary path: For an active/passive device, the set of paths that are passive by default. See also active/passive RAID, passive path, and primary path.
target port: A Fibre Channel port capable of presenting one or more SCSI LUNs to servers. A target is also known as the destination of a server's I/O request.
task: In VSM, a process that carries out a data moving task on a group.
temporary virtual disk: A virtual disk created when a PiT is created on another virtual disk. The temporary virtual disk holds any modifications redirected from the original virtual disk after the PiT is created.
VSS freeze: A period of time during the shadow copy creation process when all services (writers) have flushed their writes to the volumes and are not initiating additional writes.
VSS thaw: The completion of a VSS shadow copy freeze.
WWNN: World Wide Node Name. The globally unique identifier for a system containing Fibre Channel ports. A WWN is a 64-bit value, typically represented as a string of 16 hexadecimal digits.
WWPN: World Wide Port Name.
Index A adding array, 151 EVAs, 152 MSAs, 152 new back-end LUs, 155 servers, 21 administrative problems, 133 array adding, 151 non-HP branded, 155 retiring, 90 array workload concentration, 75 asynchronous mirrors decision table, 111 B back-end LUs, 151, 152 back-end LU deleting, 90 backup, DPM configuration, VSM configuration, battery replacement notices, 192 best practices Fibre Channel links, 120 SAN switches, 120 SAN topology, 119 virtualized environments, 120 boot from SAN HP-UX, 93 Linux, 94 VMware,
E L Emulex HBAs, multipathing, 29 European Union notice, 175 EVAs adding, 152 presentation problems, 132 laser compliance notices, 178 licenses capacities, 19 entering, 16 key file, 17 types, 17 Linux boot from SAN, 94 defining host, 31 multipath, 26 F failover DR site after problem, 115 main site lost, 116 fault isolation, 129 Federal Communications Commission notice, 173 Fibre Channel links best practices, 120 firewalls, 143 H health check commands, submitting, 139 help obtaining, 135 high availabili
P Perfmon function, 80 set up, 78 troubleshooting, 81 Performance Monitor, 81 persistent binding Emulex HBAs, 29 QLogic HBAs, 29 presentation problems, 132 VSM LUNs to servers, 30 Q QLogic HBAs, multipathing, 29 R rack stability warning, 141 recycling notices, 180 regulatory compliance Canadian notice, 174 European Union notice, 175 identification numbers, 173 Japanese notices, 175 Korean notices, 176 laser, 178 recycling notices, 180 Taiwanese notices, 176 related documentation, 140 restore DPM configura
VSM CLI host package, 87 VSM CLI virtual disk, 87 VSM management software monitoring setup volume, 81 VSM server specifications, 171 VSS on virtual machine, 163 W warning rack stability, 141 websites HP , HP Subscriber's Choice for Business, 135 product manuals, 140 Windows boot from SAN, 95 defining host, 32 Emulex HBAs, 29 multipath, 28 QLogic HBAs, 29 VSS on virtual machine, 163 Z zoning VMware, 159