HP-UX vPars and Integrity VM V6.3 Release Notes (762790-001, July 2014) (Edition: 1.6)
Table Of Contents
- HP-UX vPars and Integrity VM V6.3 Release Notes
- Contents
- HP secure development lifecycle
- 1 Introduction
- 2 Installing or upgrading to HP-UX vPars and Integrity VM V6.3
- 3 New functionality and changes from earlier versions
- 3.1 New features and enhancements
- 3.1.1 Enhanced capability for emulated platform NVRAM (Non Volatile RAM)
- 3.1.2 Increased resources for Integrity VM guests
- 3.1.3 Dynamic addition of I/O devices
- 3.1.4 PCI OLR support on Superdome 2 VSPs
- 3.1.5 AVIO Networking improvements
- 3.1.6 AVIO Storage improvements
- 3.1.7 Greater flexibility for online VM migration
- 3.1.8 Improvements to Virtual Server Management
- 3.1.9 Improvements to VSP resource management
- 3.2 Changes from previous versions
- 4 Known problems, limitations, and workarounds
- 4.1 CPU/vCPU
- 4.2 Memory
- 4.3 Networking
- 4.3.1 Cannot remove a VLAN-based vNIC if the VLAN has been removed
- 4.3.2 hpvmhwmgmt might add ports in link aggregates into the DIO pool
- 4.3.3 DIO limitations
- 4.3.4 Known issues or limitations with DIO support for 10GigEthr-02 (iexgbe)
- 4.3.5 DIO-capable functions might become inconsistent with information in vPar or VM device database
- 4.3.6 When DIO device is assigned or removed from the DIO pool, error messages appear multiple times
- 4.4 Storage
- 4.4.1 Presenting a Logical Volume created on iSCSI devices as AVIO backing store to a guest not supported
- 4.4.2 Size change operations on a SLVM volume based backing store do not get reflected in the vPar or VM
- 4.4.3 The hpvmdevinfo command may not list the correct host to guest device mapping for legacy AVIO backing stores
- 4.4.4 Probe of NPIV HBAs for Fibre Channel targets may timeout
- 4.4.5 NPIV LUNs not shown by default invocation of ioscan
- 4.4.6 The interrupt balancing daemon must not be enabled in vPars and Integrity VM guests
- 4.4.7 Online addition of DMP device as backing store is not supported
- 4.5 VM <-> vPar conversion
- 4.6 Migration, Suspend, and Resume operations on Integrity VM guests
- 4.6.1 Use of -F with hpvmmigrate on a suspended VM can cause VM to be not runnable on both source and target
- 4.6.2 Copy of a vPar or VM might be left in runnable state if migration fails
- 4.6.3 Interrupt migration of vNICs during Online guest migration can result in network disconnectivity
- 4.6.4 Physical NIC link state change during hpvmsuspend to hpvmresume may result in vNIC being in down state
- 4.6.5 Offline migration of a guest with multiple DIO resources might succeed with errors if the DIO devices are added under the same label
- 4.7 User interface—CLI
- 4.8 Known system crashes, panics, hangs and MCAs
- 5 HP-UX vPars and Integrity VM support policy
- 5.1 Support duration
- 5.2 VSP firmware requirements
- 5.3 VSP server and OS support
- 5.4 HP-UX version support for vPar and Integrity VM guests
- 5.5 Storage device support for vPar and Integrity VM guests
- 5.6 Network device support for vPar and Integrity VM guests
- 5.7 Support for migration of vPars and Integrity VMs
- 6 Support and other resources
- 7 Documentation feedback
# hpvmmigrate -P guest -h host2 -o
.
.
.
hpvmmigrate: Frozen phase (step 23) - progress 21%
Target: (protocol low) header read timeout, 30 seconds
Target: could not receive message header
On the target VM Host machine, the syslog contains the following warning message:
vmunix: HVSD: HPVM online migration warning: NPIV probe took too long, 79 seconds
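To confirm that this is the condition you are hitting, you can search the syslog on the target VM Host for that warning. A minimal sketch, assuming the default HP-UX syslog file location:
# grep "NPIV probe took too long" /var/adm/syslog/syslog.log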
Workaround
To solve this problem:
1. On the target VM Host, run the ioscan command to measure how long each FC HBA takes
to probe. The ms_scan_time column gives the probe time.
# ioscan -P ms_scan_time -C fc
Class     I  H/W Path         ms_scan_time
============================================
fc        0  44/0/0/0/0/0/0   0 min  7 sec   7 ms
fc        1  44/0/0/2/0/0/0   0 min  6 sec 213 ms
fc        2  44/0/0/2/0/0/1   0 min  4 sec   1 ms
2. Verify that the probe time of each individual FC HBA is under 10 seconds.
3. If an individual FC HBA takes more than 10 seconds to probe, check your FC switch and zone
settings to see why the probe time is so high.
4. Increase the online migration timeout value (ogmo) using the hpvmmodify command. For
example, if the syslog reports a warning that the NPIV probe time is 79 seconds, increase the
timeout value several seconds beyond that, to around 90 seconds (90000 msec):
# hpvmmodify -P guest1 -x tunables=ogmo=90000
# hpvmmodify -P guest1 -x migrate_frozen_phase_timeout=90
The default timeout value for ogmo is 30000 msec (30 seconds). The default timeout value for
migrate_frozen_phase_timeout is 60 seconds.
Alternatively, you can reduce the NPIV probe time by reducing the number of NPIV HBAs
assigned to the guest.
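If you choose to reduce the number of NPIV HBAs, you can first review which HBA resources are currently assigned to the guest. A minimal sketch, assuming a guest named guest1 and using the verbose output of hpvmstatus:
# hpvmstatus -V -P guest1
Check the storage interface entries in the output for the NPIV (hba) devices configured for the guest.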
4.4.5 NPIV LUNs not shown by default invocation of ioscan
An ioscan issued from within a vPar or VM guest does not display any LUNs behind an NPIV HBA
unless the -N option is specified. When executed without the -N option, the ioscan command
displays only devices that use the legacy device file format. NPIV LUNs use the agile device file
format, so the -N option must be specified for ioscan to display them in the output.
Workaround
Use the -N option with the ioscan command to view devices behind NPIV HBAs.
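For example, to display the disk LUNs visible through an NPIV HBA in the agile view from within the guest (the -f, -n, and -C disk options beyond -N are illustrative and can be adjusted):
# ioscan -fnN -C disk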
4.4.6 The interrupt balancing daemon must not be enabled in vPars and Integrity
VM guests
Frequent interrupt migration on a vPar or VM guest can lead to storage LUNs going offline. This
can occur indirectly when dynamic CPU migration takes place frequently while the interrupt
balancing daemon is enabled.
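This document does not name the daemon's control interface. As a rough check, assuming the intctl(1M) utility is available in the guest and that running it without options lists the current interrupt assignments, you can compare its output over time to see whether interrupts are being migrated frequently:
# intctl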