HP LeftHand SAN Solutions Support Document: Service Notes, VSA 7.0
Legal Notices Warranty The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice. Restricted Rights Legend Confidential computer software.
Current Limitations in This Release Installation and Upgrades Post-Install Qualification Window Doesn’t Appear After Completing An Upgrade (6543) Scenario In rare cases, when performing an upgrade, backout, or a patch that requires a reboot on storage modules running any 6.x.x release, you may see a message in the Install Status Window indicating that the storage module is being rebooted, but nothing else happens.
Centralized Management Console Fails To Install On Linux (3177) Scenario When downloading the installer for the Console from the vendor’s FTP site, the FTP program reports that the download completed successfully. However, when you run the installer, you receive an error message indicating that a Java error occurred and the installation cannot continue. This occurs because some FTP programs may not download the complete installation package.
Workaround Do one of the following: • Let the upgrade finish before moving the storage module into a management group. • Move the storage module into a management group first, and then perform the upgrade. Upgrade Post-Qualification May Grab Focus Every 20 Seconds (2754) Scenario During a software upgrade, the Console may come to the front of other windows open on the desktop and may also grab keyboard focus. Workaround Click in a different window to re-establish focus elsewhere.
SAN/iQ Software Upgrade On A Management Group With Mixed Version Storage Modules May Not Upgrade The Management Group Version (7698) Scenario In a management group with mixed SAN/iQ software versions, the management group database version does not get upgraded. For example, if the current management group version is 6.5, and some storage modules are upgraded to 6.6 while others are upgraded to 7.0, the management group version remains at 6.5.
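In other words, the management group version tracks the oldest SAN/iQ version present among its storage modules. This is only an illustrative sketch of that rule, not SAN/iQ's actual logic; the version strings come from the example above:

```shell
# Illustrative only: the effective management group version is the
# minimum of the member versions under version-number ordering.
printf '6.5\n6.6\n7.0\n' | sort -V | head -n 1
# -> 6.5
```

Upgrading the remaining 6.5 and 6.6 storage modules to 7.0 allows the management group version to advance.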
Finding Storage Modules on Network Windows Firewall Prevents Storage Module Discovery In The Centralized Management Console (5855) Scenario After upgrading the SAN/iQ software, the Centralized Management Console fails to discover storage modules. Workaround 1 Determine whether Windows Firewall is running. 2 If Windows Firewall is running, disable it.
Repair Storage Module Procedure Generates “Will Cause Restripe” Warning Message (7433) Scenario When adding the repaired module to the cluster during the Repair Storage Module procedure, a warning message is displayed that may give the impression that the entire cluster will be restriped. This is not the case. Only the repaired NSM will be restriped. Workaround Ignore the warning; just dismiss it.
When Replacing or Reseating A Power Supply, The Console May Report Improper Power Supply Status [NSM 160] (2997, 3532, 7060) Scenarios • Replacing a power supply may cause both power supplies to show “Missing” in the Console. • If the AC power cord is plugged into the power supply during installation, the Console may report “missing” for one or both power supplies even though they are both installed and working properly.
Boot Flash1 Status Changes After Changing RAID Configuration On NSM 150 (5498) Scenario Changing the RAID configuration on an NSM 150 causes the Boot Flash1 status to change to Inactive for about 2 minutes. The status then changes to Updating, and then back to Normal. Explanation This status change is due to the system processing the RAID reconfiguration. If you use the factory default RAID configuration, you never see this alert.
2 Remove the disk from the drive bay and insert the replacement disk.
3 Wait for the RAID status to show “rebuilding.”
4 Click the Power Disk On button. Even if the drive appears to be on and everything appears normal, this enables drive monitoring functions for that drive.
Reseating A Disk
1 Power off the disk in the Console.
2 Power off the IBM x3650 in the Console.
3 Reseat the disk in the drive bay.
4 Manually power the IBM x3650 back on.
5 Wait for the RAID status to show “rebuilding.”
Intermediate Disk Status Reporting • When a disk is powered on or inserted in a drive, certain intermediate states may be reported. For example, if a drive is added to a degraded RAID 5 array, the status may temporarily read Normal before correctly changing to Degraded and then to Rebuilding. Swapping One Or More Disks Across Controllers Causes Data Loss [NSM 260] (3342) If the storage module powers up with one or more drives foreign to the configuration of a controller, data corruption occurs.
After Reboot, Lower Capacity Disk Status Is Shown As On And Secured In An IBM x3650 That Has Higher Capacity Disks (6740) Scenario You insert a lower capacity disk in an IBM x3650 with higher capacity disks and reboot it. In the Console, the physical drive status appears as Active, and RAID status appears as Degraded. You will not be able to power off the lower capacity disk to replace it with the higher capacity one.
When Powering Off A Mirrored Disk And RAID Is Rebuilding, The Mirrored Disk Is Not Powered Off [NSM 150] (7368) Scenario If you try to power off the mirrored disk when RAID is rebuilding, it is not powered off. However, no message appears to inform you that your request for power off has been denied. The Disk Setup panel indicates that the drive is still active, confirming that the disk has not been powered down. Workaround Wait for the RAID to rebuild and then power off the disk.
Why RAID May Go Off If A Foreign Drive Is Inserted Prior To Powering Up The Storage Module [NSM 260] (3341) Scenario If the storage module powers up with a drive that does not belong to the RAID configuration, data corruption may occur, causing RAID to go off and preventing the storage module from coming online. Replacing the original drive may not result in RAID going to normal. Data may be lost on this storage module in this case. Workaround Never replace a drive when the storage module is off.
Workaround Increase the minimum rebuild rate to a value of 10 or greater. The following guidelines describe the effects of the RAID rebuild rates. • Setting the rate high is good for rebuilding RAID quickly and protecting data; however, it will slow down user access. • Setting the rate low maintains user access to data during the rebuild.
Single Drive Error [NSM 160, NSM 260] (6502) Scenario A drive may become unavailable, causing the RAID status to go Degraded or Off, depending on the RAID configuration. Workarounds The following three options should be tried, in order. If one does not fix the problem, try the next one. • Reseat the drive using the instructions in the User Manual or the Online Help. If the drive does not start rebuilding, and the drive status shows Inactive in the Disk Setup tab, select the drive and click Add to RAID.
When creating a NIC bond on a storage module, the flow control setting reported for the NIC bond does not reflect each underlying NIC. A NIC bond may indicate that flow control is “enabled” for one NIC and “disabled” for the second. Workaround Set flow control using the following guidelines: • Do not change flow control settings after the bond is created. • The flow control setting on a disabled physical NIC interface cannot be changed. • The flow control setting reported on the NIC bond itself is meaningless.
Table 1 lists the expected flow control settings for various types of NIC bonds. The flow control setting should remain the same after you create any bond. If you check the NIC bond and find that the settings are not the same, delete the bond and reset the flow control settings to ensure that they are the same.
Workaround Assign “public” adapters, intended for servicing users, to a subnet distinct from that of the storage adapters. Time On The VSA Is Out Of Sync With The Time On The ESX Server (8101) Scenario You may notice a difference between the actual time and the time displayed in the Console for the Virtual SAN Appliance (VSA). Solution Using the VMware VI Client, configure ESX to synchronize the system clock with NTP (see the ESX configuration documentation).
Battery Capacity Test Timing Changed [NSM 160] (7040) Scenario If you upgrade an NSM 160 from release 6.6.x to 7.0, the battery capacity test runs every week instead of once every four weeks. Workaround After the upgrade, use the Console to manually change the BBU Capacity Test monitored variable frequency to four weeks. Select an NSM 160 storage module > Alerts > Alert Setup > Edit Monitored Variable. Change the Schedule Week field to Every Four Weeks.
Clusters Cannot Create A Cluster Using VSA And Any Storage Module Running SAN/iQ Software Release 6.6 (7874) Explanation SAN/iQ software release 7.0 is the first release that supports mixing RAID levels in a cluster. The VSA runs virtual RAID which is new in the 7.0 release. Therefore, the VSA cannot be added to any cluster with storage modules running release 6.6 or earlier.
Workaround Wait to edit the volume in the Console until the snapshot schedule has deleted all snapshots created before the volume's autogrow value was changed. If you need to edit the volume immediately, delete all snapshots first; if the cluster space constraint remains, only then change the volume's autogrow value.
Workaround 1 Convert the volume back to a remote volume. 2 Convert it back to a primary volume, but with full provisioning. 3 Edit the volume and make it thin provisioned. MS Cluster Failovers When Migrating A Large Number Of Volumes Concurrently (7485) Scenario You may experience delayed write failures and cluster failovers, and the client servers become unresponsive. When migrating volumes from one cluster to another, multiple disk groups fail.
Workaround Bring the storage module back online and check the replication level, changing it if necessary. Replication proceeds from where it left off when the storage module went down. In A Cluster With A Virtual IP Address, Cannot Mount Volume Using Storage Module IP As A Discovery Address (7369) Scenario If a cluster has a virtual IP address, and that IP address is not used for discovery in the iSCSI initiator, you cannot mount a volume from that cluster using the storage module’s physical IP address.
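On Linux hosts using the sfnet linux-iscsi initiator, the discovery address is configured in /etc/iscsi.conf. The fragment below is a sketch under that assumption (verify the directive name against your initiator's documentation); the virtual IP address shown is hypothetical:

```
# /etc/iscsi.conf -- sketch for the sfnet linux-iscsi initiator.
# Point discovery at the cluster's virtual IP, not at a storage
# module's physical IP; 10.0.0.50 is a hypothetical VIP.
DiscoveryAddress=10.0.0.50:3260
```

Windows iSCSI initiator users should likewise enter the cluster virtual IP as the target portal, per the scenario above.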
Snapshot Schedules Do Not Adjust For Daylight Saving Time (4383, 4913) Scenario When snapshot schedules are created under Standard Time, the schedules continue to execute at the originally scheduled Standard Time, even after the storage modules begin operating under Daylight Saving Time. For example, if a schedule is configured under Standard Time to run at 2:00 PM, the schedule initially runs at 2:00 PM Standard Time.
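The drift can be reproduced with GNU date: a schedule created for 2:00 PM US Eastern Standard Time is effectively pinned to 19:00 UTC, which maps to 3:00 PM local time once Daylight Saving Time is in effect. The America/New_York time zone and the dates below are illustrative assumptions, not values from the product:

```shell
# A 2:00 PM EST schedule is pinned to 19:00 UTC; the local wall-clock
# time of that same instant shifts by an hour under DST.
TZ=America/New_York date -d '2007-01-15 19:00 UTC' '+%H:%M %Z'   # 14:00 EST
TZ=America/New_York date -d '2007-07-15 19:00 UTC' '+%H:%M %Z'   # 15:00 EDT
```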
Volume Not Added To Volume List Appears In iSCSI Initiator (4215) Scenario You create a cluster and configure the cluster to use iSNS. You then create a volume but do not add the volume to a volume list. The volume appears as a target in the iSCSI initiator. However, if you attempt to log on to this target, you receive an Authorization Failure message. This is a function of iSNS discovery.
Workaround To designate enough bandwidth for I/O to the management group, reduce the local bandwidth used for Remote Copy. 1 Log in to the remote management group. 2 On the Edit Remote Bandwidth dialog window, reduce the local bandwidth setting. iSCSI Two-Way CHAP Can Be Done Using One-Way CHAP Password (7370) Scenario For one-way CHAP, you have one password and use Outgoing Authentication.
Workaround • See the LeftHand Networks document at this URL: https://www.lefthandnetworks.com/member_area/ dl_file.php?fid=1037 • Also, see the section entitled “Running automatic start services on iSCSI disks” in the Microsoft iSCSI Initiator Users Guide for more details.
Workaround Open the Windows Disk Management console and assign a new drive letter to the volume. The volume should then appear in the directory structure. Linux-iSCSI Initiator Cannot Reboot When SAN/iQ Volume is Unavailable (3346) Scenario The iSCSI Device Manager hangs when network problems prevent it from communicating with a storage module. Because the default time-out for the Linux-iSCSI initiator is infinite, the initiator cannot reboot when it is unable to access the iSCSI volume on the storage module.
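One possible mitigation, assuming the sfnet linux-iscsi initiator, is to set a finite connection-failure timeout in /etc/iscsi.conf so the initiator errors out instead of blocking forever. The parameter name should be verified against your initiator's documentation, and the timeout value here is an arbitrary example:

```
# /etc/iscsi.conf -- sketch only; ConnFailTimeout is the sfnet
# linux-iscsi parameter name and 30 seconds is an arbitrary value.
# With a finite timeout, pending I/O fails with an error instead of
# hanging, allowing the host to complete shutdown and reboot.
ConnFailTimeout=30
```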
Red Hat: Changing Authentication Type Causes Existing iSCSI Devices To Be Renamed (3668) Scenario You configured an authentication group for iSCSI access. You then changed the access configuration, either to require CHAP or to remove or change CHAP requirements. After the change, the existing iSCSI devices are renamed and cannot be remounted. Workaround To change the authentication type of any volume (LVM or otherwise), follow these steps: 1 Unmount volumes and stop iSCSI services.
# /etc/init.d/iscsi stop
# /etc/init.d/iscsi start
# vgchange -a y vgiSCSI
# mount /dev/vgiSCSI/lvol0 /iSCSI
After Power Cycle, Load Balancing Does Not Distribute Requests Properly From A Microsoft Cluster (3993) Scenario A storage module is powered off and then powered on, and another storage module in the SAN/iQ cluster handles all the connections to the volumes connected to that cluster. When the storage module is powered on again, load balancing does not redirect I/O to that storage module.
An Extra Microsoft iSCSI Session Is Created In The Console After Rebooting The Host (5023) Scenario An extra iSCSI session is created in the Console after rebooting the host for the volume which is mounted with “Automatically restore this connection when the system boots” selected. Explanation This is a Microsoft issue in which different session IDs (iSCSI ISIDs) are used for the same host-volume pair, depending on how the session was established.
Workaround For 1-way CHAP, use the Initiator Secret from the Console Authentication Group as the QLogic Target Secret. For 2-way CHAP, first use the Initiator Secret from the Console Authentication Group as the QLogic Target Secret. Next, add the Target Secret from the Console Authentication Group as the QLogic Initiator Secret. Using QLogic HBA And Solaris 10, I/O Can Only Be Done On One Volume (5269) Explanation The QLogic HBA is not supported with Solaris 10 and the HP LeftHand SAN.
was saved in the Unit-1 configuration backup file. That search never completes because the IP address on Unit-2 has changed and is now the IP address of Unit-1. Note: Restoring multiple storage modules from a single backup file causes an IP address conflict. Workaround Before restoring a backed-up storage module configuration file, make certain that the new storage module is configured with the IP address of the original storage module.
Workaround To finish updating the IP address using the Console: 1 Log in to the storage module with the new IP address. 2 On the storage module, navigate to the TCP/IP Network category. 3 On the Communication tab, select Communications Tasks and click Update Communications List. This synchronizes the IP addresses of all managers.
Dell Open Manage Secure Port Server Unable To Install Or Load Console With Dell's Secure Port Server Service Started (909) Scenario Using Windows on a Dell Server with the Dell OpenManage Secure Port Server service, you cannot properly install or start the Console. Workaround Stop the Dell OpenManage Secure Port Server service when installing or running the Console.
Red Hat Enterprise Linux On A RHEL Cluster With A Volume In Use, A Network Outage Longer Than 45 Seconds Results In The Volume Not Automatically Remounting [NSM 150] (6545) Workaround
1 Deactivate the volume that was being used when the node failed on all other nodes in the cluster. Example:
[root@rac8] # umount /mnt/home1
[root@rac8] # vgchange -an home1
2 Restart the cluster services on the failed node.
3 Reactivate the VolumeGroup on the other RHCS nodes.
You log back in to the management group to start a virtual manager. Now, the Console cyclically logs in to a storage module where the manager is no longer running. The Start Virtual Manager menu item for the storage module is not displayed because the global database is not available to the Console. There is no way to start the virtual manager to recover quorum. Workaround 1 Log out of all the storage modules. 2 Log in to the storage module that has a manager running. 3 Log out of the management group.