Veritas Volume Manager 5.0.1 Release Notes
HP-UX 11i v3
HP Part Number: 5900-0033
Published: November 2009
Edition: 1
© Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Table of Contents
1 Veritas Volume Manager 5.0.1 Release Notes
    Product Description
    New Features in This Release
    System Requirements
List of Tables
1-1 Required and Recommended Patches
1-2 License Versus VxVM Feature Availability
1-3 Features Enabled by Full VxVM
1-4 Features Enabled by HP Serviceguard Storage Management Licenses
1 Veritas Volume Manager 5.0.1 Release Notes
This chapter discusses new features, licenses, system requirements, compatibility with previous releases, and known problems with Veritas Volume Manager 5.0.1, which is supported on systems running HP-UX 11i v3.
# vxdmpadm settune dmp_cache_open=on
For more information on the vxdmpadm command, see the vxdmpadm(1M) manpage.
IMPORTANT: Ensure that the dmp_cache_open tunable is turned on during dynamic reconfiguration operations.
• Enhancements to the Dynamic Multipathing Feature
This release provides a number of enhancements to the DMP features of VxVM. These enhancements simplify administration and improve the display of detailed information about the connected storage.
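As a sketch, the tunable can be checked before it is changed. The gettune keyword is assumed to be available in this release; see the vxdmpadm(1M) manpage for the authoritative syntax.

```shell
# Display the current and default values of the dmp_cache_open tunable
vxdmpadm gettune dmp_cache_open

# Turn the tunable on before starting dynamic reconfiguration operations
vxdmpadm settune dmp_cache_open=on
```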
vxdmpadm command enables you to sort the output based on path name, DMP node name, enclosure name, or host controller name.
— Enhanced I/O Statistics
The following enhancements have been made to the I/O statistics:
◦ Queued and Erroneous I/O Counts
The vxdmpadm iostat show command now provides options to display queued I/O counts (using the -q option) and erroneous I/O counts (using the -e option). These options are applicable to the DMP node, path, and controller.
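A minimal sketch of the new statistics options follows. The option placement before the iostat keyword, and the DMP node and controller names, are illustrative assumptions; substitute names from your own configuration.

```shell
# Display queued I/O counts for a DMP node (node name is a placeholder)
vxdmpadm -q iostat show dmpnodename=c2t0d0

# Display erroneous I/O counts for all paths on a controller
# (controller name is a placeholder)
vxdmpadm -e iostat show ctlr=c2
```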
— New Log File Location for DMP Events The new log file for DMP events is the /var/adm/vx/dmpevents.log file. For backward compatibility, the previous log file, /etc/vx/dmpevents.log, is symbolically linked to the /var/adm/vx/dmpevents.log file. — Extended Device Attributes Displayed in the vxdisk list Command The vxdisk list command now displays extended device attributes, such as hardware mirrors, for certain arrays.
Additionally, the vxdmpadm setattr arraytype array_type command sets the attribute for all array types derived from the given array_type.
— Support for ALUA JBOD Devices
The Device Discovery Layer (DDL) now provides improved support for Just a Bunch of Disks (JBOD) devices, including Asymmetric Logical Unit Access (ALUA) JBOD devices. DMP now provides immediate basic support for any ALUA-compliant array; however, full support still requires an Array Support Library (ASL).
• Campus Cluster Enhancements
The campus cluster feature provides the capability of mirroring volumes across sites, with hosts connected to storage at all sites through a Fibre Channel network. In this release, the following enhancements have been made to the campus cluster feature:
— Site Tagging of Disks or Enclosures
The following enhancements to the vxdisk command related to site tagging are now available:
◦ Site tagging operations on multiple disks or enclosures are now supported.
Software Requirements
• OS Version
HP-UX 11i v3 0903 OEUR (or later)
• Patches Required
Table 1-1 lists the required and recommended patches for installing VxVM 5.0.1. It lists each patch and indicates whether the patch, or a superseding patch, is already included in a given OEUR. If the OEUR does not contain a particular patch, you must download and install that patch separately.
NOTE: Ensure that the HP-UX 11i v3 March 2009 OEUR release is installed on the system.
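To verify what is already installed before fetching patches, the standard HP-UX Software Distributor commands can be used. This is a sketch only; the patch ID and the grep pattern for the operating environment bundle are placeholders, not values from Table 1-1.

```shell
# Check whether a specific patch (or a superseding patch) is installed;
# the patch ID below is a placeholder - substitute an ID from Table 1-1
swlist -l patch | grep PHKL_

# Inspect the installed operating environment bundles to confirm the
# OEUR level (bundle naming is an assumption; adjust the pattern as needed)
swlist -l bundle | grep -i HPUX11i
```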
Table 1-2 License Versus VxVM Feature Availability

License | Feature Availability by Product
Base    | Concatenation, spanning, rootability and root disk mirroring, multiple disk groups, striping, mirroring, VEA, coexistence with native volume manager
Full    | Base features plus volume resizing, DRL logging for mirrors, striping plus mirroring, mirroring plus striping, RAID-5, RAID-5 logging, hot sparing, hot-relocation, online relayout, Storage Expert, Device Discovery Layer, DMP
HP-UX Serviceguard Storage B
Table 1-4 Features Enabled by HP Serviceguard Storage Management Licenses (continued)

Feature | T2771DB | T2773DB | T2774DB | T2775DB | T2777DB
Smartmove (volume resynchronization) | Yes | Yes | Yes | Yes | Yes
Thin Provisioning | Yes | Yes | Yes | Yes | Yes
Dynamic Multipathing (DMP): balance I/O across multiple paths between the server and the storage array to improve performance and availability; active/passive (A/P) failover for root disk | Yes | Yes | Yes | Yes | Yes
Improved usability for I/O statistics – More vi
Table 1-4 Features Enabled by HP Serviceguard Storage Management Licenses (continued)

Feature | T2771DB | T2773DB | T2774DB | T2775DB | T2777DB
Import cloned LUN on the same host as the original LUN (used with ShadowCopies, BCVs, and so on) | No | No | Yes | Yes | Yes
Dynamic Storage Tiering: allows the administrator to identify and move infrequently used files online to less expensive storage, transparently to users and applications | No
Table 1-4 Features Enabled by HP Serviceguard Storage Management Licenses (continued)

Feature | T2771DB | T2773DB | T2774DB | T2775DB | T2777DB
ODM, Quick I/O (QIO), Concurrent I/O (CIO) | Yes | Yes | Yes | Yes | Yes
Cluster File System | No | No | No | Yes | Yes

1 HP-UX client support.
2 Defaults to standalone support of Oracle database features.
* If used with a file system, the technical dependencies are that ODM is used for the I/O and that VxVM (Full) or higher (SMO Standard to SF RAC) is present.
Limitations of VxVM 5.0.1 on HP-UX 11i v3
The following limitations exist for VxVM 5.0.1 on HP-UX 11i v3:
• Limitation of the automatic site reattachment feature
Site storage does not reattach automatically if it is disconnected from, and then reconnected to, a CVM slave node while the master node remains connected to that site's storage.
• Manually installing the VRTSvxvm patch requires a reboot
After you manually install the VRTSvxvm patch, you must reboot the system.
• Volume relayout is not supported for site-confined volumes or for site-consistent volumes in this release.
• The vxvol command cannot be used to set site consistency on a volume unless sites and site consistency have first been set up for the disk group.
• VxVM does not currently support RAID-5 volumes in cluster-shareable disk groups.
Known Problems and Workarounds
The following are the known problems and workarounds for VxVM 5.0.1 on HP-UX 11i v3:
NOTE: For information on the Known Problems and Workarounds in VxVM 5.
Running the vxdisk scandisks command before the disk group (DG) deport operation triggers a DMP reconfiguration that updates the DMP database so that the disk is accessible through active paths.
• Problem: I/O failures result in the disk failing flag being set
In some DMP failover scenarios, I/O retries cause the disk failing flag to be set even though nothing is wrong with the disks themselves.
Workaround: Clear the failing flag using the vxedit command.
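The workaround above can be sketched as follows. The disk group name (mydg) and disk media name (disk01) are placeholders for your own configuration.

```shell
# Confirm that only the failing flag is set on the disk
# (look for "failing" in the flags field of the output)
vxdisk -g mydg list disk01

# Clear the failing flag with vxedit
vxedit -g mydg set failing=off disk01
```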
# vxdisk rm disk_access_name # vxdisk define disk_access_name HP recommends that you use the vxdisksetup command to initialize a disk for use by VxVM. This method does not create persistent disk access records. To initialize a disk for regular use, run the following command: # /etc/vx/bin/vxdisksetup -I disk_access_name In some cases, issues can occur with persistent disk access records.
NOTE: You must execute these steps for all agile DSFs in the /dev/rdisk/ directory. If new LUNs or DSFs are added to the host at runtime on the HP-UX host (that is, Storage Foundation is already configured and then new LUNs are added), you must execute the following steps separately for the newly added LUNs. 1. Determine if the attribute is already disabled, that is, if alua_enabled is set to false. The following scsimgr command displays the alua_enabled attribute and its persistence.
# scsimgr -p get_attr all_lun -a device_file -a alua_enabled
— Disabling ALUA at the esdisk driver level
This method sets the alua_enabled attribute at the esdisk driver level, thereby disabling or enabling ALUA, based on the attribute value, for all LUNs bound to the esdisk driver. The following procedure explains how to set, save, and display the current and default settings for the alua_enabled attribute, and how to disable ALUA persistently at the esdisk driver level: 1.
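As a sketch of the two scopes described above, scsimgr can operate on a single LUN or on the esdisk driver as a whole. The device special file and the /escsi/esdisk driver-scope path are illustrative assumptions; verify them against the scsimgr(1M) manpage on your system.

```shell
# Per-LUN: display the alua_enabled attribute for one device
# (device special file is a placeholder)
scsimgr get_attr -D /dev/rdisk/disk4 -a alua_enabled

# Per-LUN: disable ALUA persistently for that device
scsimgr save_attr -D /dev/rdisk/disk4 -a alua_enabled=false

# Driver level: display the attribute for all LUNs bound to esdisk
scsimgr get_attr -N /escsi/esdisk -a alua_enabled

# Driver level: disable ALUA persistently for all esdisk LUNs
scsimgr save_attr -N /escsi/esdisk -a alua_enabled=false
```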
of time and consume considerable resources. To minimize the frequency of having to perform a site reattachment operation, HP recommends that you use the vxdmpadm settune command to configure a value smaller than 60 seconds for dmp_health_time, and a value larger than 300 seconds for dmp_path_age. — Domain controller mode in CVM clusters The slave nodes in a CVM cluster only have access to I/O objects.
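The dmp_health_time and dmp_path_age recommendation above can be sketched with vxdmpadm settune; the specific values below are examples only, chosen to satisfy the "smaller than 60 seconds" and "larger than 300 seconds" guidance, not mandated settings.

```shell
# Lower dmp_health_time below its 60-second default (example value)
vxdmpadm settune dmp_health_time=50

# Raise dmp_path_age above 300 seconds (example value)
vxdmpadm settune dmp_path_age=360

# Confirm the new settings
vxdmpadm gettune dmp_health_time
vxdmpadm gettune dmp_path_age
```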
The subsequent slave join will succeed.
◦ If the node join is required before the connectivity problem is resolved, the master role must be failed over to a node that has connectivity to all shared disk groups. The subsequent slave join (with the new master) will succeed.
— Problem: Deport operation on a shared disk group fails
With all primary paths inaccessible, the deport operation on a shared disk group fails to clear the PGR keys because the DMP database is not up-to-date.
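Combining this problem with the scandisks workaround described earlier, the deport sequence can be sketched as follows. The shared disk group name is a placeholder.

```shell
# Refresh the DMP database so active paths are recorded correctly
vxdisk scandisks

# Then deport the shared disk group (name is a placeholder)
vxdg deport shared_dg
```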
code. It will be ignored. WARNING: The file '/usr/conf/mod/dmpaaa.1' does not contain valid kernel code. It will be ignored. WARNING: The file '/usr/conf/mod/dmpap.1' does not contain valid kernel code. It will be ignored. WARNING: The file '/usr/conf/mod/dmpapf.1' does not contain valid kernel code. It will be ignored. WARNING: The file '/usr/conf/mod/dmpapg.1' does not contain valid kernel code. It will be ignored. WARNING: The file '/usr/conf/mod/dmphdsalua.1' does not contain valid kernel code.
Even though this error message is displayed, the clone operation completes in the background. A rescan in the GUI displays the appropriate objects.
Workaround
In the SOFTWARE\VERITAS\VRTSobc\pal33\Agents\__defaults section of the /etc/vx/isis/Registry file, add the following key:
[REG_INT] "USE_RT_TIMEOUT" = 0;
• The dbed_clonedb -o restartdb command fails after the database group switches to a second node with spfile.