
# scsimgr -p get_attr all_lun -a device_file -a alua_enabled
— Disabling ALUA at the esdisk driver level
This method sets the alua_enabled attribute at the esdisk driver level, thereby
disabling or enabling ALUA based on the attribute value for all LUNs bound to the
esdisk driver.
The following procedure explains how to set, save, and display the current and default
settings for the alua_enabled attribute and also to disable the ALUA persistently at
the esdisk driver level:
1. Disable ALUA at the esdisk driver:
# scsimgr set_attr -N "/escsi/esdisk" -a alua_enabled=0
Value of attribute alua_enabled set successfully
2. Make this attribute persistent across host reboots and save it:
# scsimgr save_attr -N "/escsi/esdisk" -a alua_enabled=0
Value of attribute alua_enabled saved successfully
NOTE: You do not need to reboot the host for these changes to take effect. However,
do not run I/O while making these changes.
3. Use get_attr to check or display the attribute changes:
# scsimgr get_attr -N "/escsi/esdisk" -a alua_enabled
SCSI ATTRIBUTES FOR SETTABLE ATTRIBUTE SCOPE : "/escsi/esdisk"
name = alua_enabled
current = false
default = true
saved = false
4. Check the alua_enabled attribute for a specific LUN:
# scsimgr get_attr -D /dev/rdisk/disk460 -a alua_enabled
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk460
name = alua_enabled
current = false
default = false
saved =
For more information on scsimgr, see the scsimgr(1M) man page.
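To re-enable ALUA at the esdisk driver level later, the same scsimgr operations can be
reversed. The following is a minimal sketch, assuming the attribute takes the same 0/1
values used in the steps above:
# scsimgr set_attr -N "/escsi/esdisk" -a alua_enabled=1
# scsimgr save_attr -N "/escsi/esdisk" -a alua_enabled=1
# scsimgr get_attr -N "/escsi/esdisk" -a alua_enabled
The get_attr output should then report the current and saved values as true.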
The following are cluster issues in this release of Veritas Volume Manager:
Cluster Volume Manager (CVM) behavior when the disk group failure policy is set to leave
If the master node loses access to all copies of the logs, the behavior depends on the
disk group failure policy. If the disk group failure policy is set to leave, the master node
panics so that a different node that has access to the disk group can become the master.
If the detach policy is set to global, the master node panics immediately. If the detach
policy is set to local, the panic is deferred until an event requires an update to the
kernel log, for example, after all slave I/O stops.
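The disk group failure policy and the detach policy referred to above are disk group
attributes that can be displayed and changed with the vxdg command. The following is a
minimal sketch, assuming a shared disk group with the hypothetical name shareddg:
# vxdg list shareddg
# vxdg -g shareddg set dgfailpolicy=leave
# vxdg -g shareddg set diskdetpolicy=local
For a shared disk group, the vxdg list output typically includes the current detach and
failure policy settings.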
Handling intermittently failing paths in a campus cluster
In remote mirror configurations, a site is reattached when its disks come back online.
Recovery is then initiated for the plexes of a volume that are configured at that site.
Depending on the configuration, recovery of the plexes can take a considerable amount