HP StoreEasy 3000 Storage Administrator Guide

This document describes how to install, configure, and maintain all models of HP StoreEasy 3000 Storage and is intended for system administrators. For the latest version of this guide, go to http://www.hp.com/support/StoreEasy3000Manuals.
© Copyright 2012, 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 HP StoreEasy 3000 Storage ... 8
    Features ... 8
    Hardware components ... 8
    HP StoreEasy 38x0 Gateway Storage hardware components ...
    Using storage elements ... 32
    Clustered server elements ... 32
    Network adapter teaming ... 32
    Management tools ...
    Comparing administrative (hidden) and standard shares ... 56
    Managing shares ... 56
    File Server Resource Manager ... 56
    Quota management ...
    Creating physical disk resources ... 70
    Creating file share resources ... 71
    Creating NFS share resources ... 71
    Shadow copies in a cluster ...
Warranty information ... 96
Glossary ... 98
Index ...
1 HP StoreEasy 3000 Storage The HP StoreEasy 3000 Storage products enable simplified file and application storage. These products reduce your cost of ownership by simplifying management, increasing resource utilization, centralizing growth, and protecting data. NOTE: The HP StoreEasy 3000 Storage Administrator Guide provides information on all models within the StoreEasy 3000 Storage product family. The product name is listed generically where the same information is applicable to different models.
Figure 2 HP StoreEasy 38x0 Gateway Storage front panel LEDs and buttons

Item 1: NIC status LED
    Off = No network link
    Solid green = Link to network
    Flashing green = Network activity
Item 2: System health LED
    Green = Normal
    Flashing amber = System degraded
    Flashing red = System critical
    To identify components in degraded or critical state, see “Systems Insight Display LED combinations” (page 13)
Item 3: UID LED and button
    Solid blue = Activated
    Flashing blue = System being remotely managed
    Off = Deactivated
Figure 3 HP StoreEasy 38x0 Gateway Storage rear panel components

1. PCIe slots 1–3 (top to bottom)
2. PCIe slots 4–6 (top to bottom)
3. Power supply 1 (PS1)
4. PS1 power connector
5. PS2 power connector
6. Power supply 2 (PS2)
7. USB connectors (4)
8. Video connector
9. iLO connector
10. Serial connector
11.
Item 4: NIC activity LED
    Green = Activity exists
    Flashing green = Activity exists
    Off = No activity exists
Item 5: NIC link LED
    Green = Link exists
    Off = No link exists

HP StoreEasy 38x0 Gateway Storage Blade hardware components

The following figures show components and LEDs located on the front and rear panels of the HP StoreEasy 38x0 Gateway Storage Blade.

Figure 5 HP StoreEasy 38x0 Gateway Storage Blade front panel components

1. Hard drive bay 1
2. Server blade release button
3.
(Status definitions, continued)
    Flashing green = System is waiting to power on; Power On/Standby button is pressed.
    Solid amber = System is in standby; Power On/Standby Button service is initialized.
    Off, and the Health Status LED bar is off = The system has no power.
    Off, and the Health Status LED bar is flashing green = The Power On/Standby Button service is being initialized.
Item 4: Drive status LED
    Off = Removing the drive does not cause a logical drive to fail.
    Solid green = The drive is a member of one or more logical drives.
    Flashing green = The drive is rebuilding or performing a RAID migration, stripe size migration, capacity expansion, or logical drive extension, or is erasing.
    Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
Table 1 Systems Insight Display LEDs and internal health LED combinations (continued)

(Status, continued from the previous row)
    • Redundant power supply fault
    • Power supply mismatch at POST or power supply mismatch through hot-plug addition

Power cap (off): Health LED —; System power LED Amber; Status: Standby
Power cap (green): Health LED —; System power LED Flashing green; Status: Waiting for power
Power cap (green): Health LED —; System power LED Green; Status: Power is available.
2 Installing and configuring the storage system Setup overview The HP StoreEasy 3840 Storage comes preinstalled with the Microsoft Windows Storage Server 2012 R2 Standard Edition operating system with Microsoft iSCSI Software Target and a Microsoft Cluster Service (MSCS) license included. Verify the kit contents Remove the contents, ensuring that you have all of the following components. If components are missing, contact HP technical support.
For 38x0 Gateway Storage systems, install the rail kit and insert and secure the storage system into the rack by following the HP Rack Rail Kit Installation Instructions. For 38x0 Gateway Storage Blade systems, install the server blade by following the procedures documented in the Quick Start Guide provided for your model. Connect to the storage system Use either the direct attach or remote management method to connect to the storage system.
2. After installation completes and the server (or servers if deploying a cluster) reboots, you are automatically logged on as the local administrator. The default password is HPinvent!. You are not prompted to change the password. If you are deploying a cluster, continue to work only with the server on which you used the Setup Windows Wizard.
3. Enter Get-StorageProvider to verify the registration of the SMI-S provider. If the registration is successful, the SMI-S provider is displayed as registered on the system.

Multi-Path I/O configuration

The Multi-Path I/O configuration option opens the MPIO properties applet. You must have a volume (LUN) presented to the gateway before you can claim it using the MPIO properties applet. Using Control Panel, select the DSM that matches your storage array. The required DSM is provided by your storage vendor.
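The following is a minimal PowerShell sketch of these verification steps; the array management address is a hypothetical placeholder, and the exact provider details and claimed hardware IDs depend on your storage vendor.

    # Register the array's SMI-S provider (the URI is a placeholder for your array's SMI-S/CIM endpoint)
    Register-SmisProvider -ConnectionUri https://<array management address>:5989
    # Confirm that the provider now appears as registered
    Get-StorageProvider
    # List the hardware IDs currently claimed by the Microsoft DSM (vendor DSMs appear in the MPIO properties applet)
    Get-MSDSMSupportedHW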
Supported arrays include, but are not limited to, the following:
• HP 3PAR StoreServ Storage
• HP EVA P6000 Storage
• HP XP P9000 Storage
• HP StoreVirtual 4000 Storage
• HP P2000 G3 MSA Array Systems

NOTE: For instructions on how to connect the HP StoreEasy 3000 Storage to an HP 3PAR StoreServ system, see the HP 3PAR Windows Server 2012 and Windows Server 2008 Implementation Guide, which is available at: http://www.hp.
features such as snapshots (volume shadow copies), data deduplication, directory quotas, and much more. NOTE: Microsoft Storage Spaces are not supported with StoreEasy products. All storage provisioning for HP StoreEasy 3000 Storage is done on the particular array used for storage. Consult the documentation for your particular array to perform the necessary tasks involved in presenting LUNs to the HP StoreEasy 3000 Storage.
Adding a node to an existing cluster

A cluster can consist of up to eight nodes. A dedicated network switch is recommended for connecting the cluster heartbeat when more than two nodes are used. Alternatively, a dedicated VLAN on an existing switch can be used.
1. Add the new node to the same domain as the other nodes before adding the new node to the cluster.
2. Ensure that the shared storage is connected to the new node and that shared LUNs are presented or exported to the new node.
3.
10. Verify that the cluster resources can perform failover:
a. Under Navigate in the main viewing pane, click Roles. Verify that a file server is listed (if not, create one). Right-click the file server name and select Move. There are two move options: Best Possible Node and Select Node.
b. Click Select Node. The Move Clustered Role window opens. Select the newly added node to move the resource to and click OK. The operation must succeed to confirm that the nodes can fail over.
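The same add-and-failover check can also be scripted. The following PowerShell sketch is illustrative only; the cluster, node, and role names are hypothetical placeholders.

    # Add the new node to the cluster (run from an existing cluster node)
    Add-ClusterNode -Cluster <cluster name> -Name <new node name>
    # List the clustered roles, then move one to the new node to confirm failover
    Get-ClusterGroup
    Move-ClusterGroup -Name "<file server role name>" -Node <new node name>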
NOTE: Transitioning to Server Core mode disables the OEM-Appliance-OOBE feature. After transitioning back to Server with a GUI mode, you must manually enable this feature by executing the following command: PS C:\Users\Administrator>dism /online /enable-feature /featurename:OEM-Appliance-OOBE Then, install HP ICT from C:\hpnas\Components\ManagementTools.
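If you need to switch between the two installation modes, the commands below are a commonly used PowerShell sketch for Windows Server 2012/2012 R2 rather than an HP-specific procedure; each direction requires a restart, and the OEM-Appliance-OOBE feature must be re-enabled afterward as described in the note above.

    # Switch from Server with a GUI to Server Core
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart
    # Switch from Server Core back to Server with a GUI
    Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart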
3 Administration tools HP StoreEasy 3000 Storage systems include several administration tools to simplify storage system management tasks. Microsoft Windows Storage Server 2012 and 2012 R2 administration tools Microsoft Windows Storage Server 2012 or 2012 R2 operating systems provide a user interface for initial server configuration, unified storage system management, simplified setup and management of storage and shared folders, and iSCSI targets.
Administrators can use the File and Storage Services role to set up and manage multiple file servers and their storage by using Server Manager or Windows PowerShell. Some of the specific applications include the following:
• Use Data Deduplication to reduce the disk space requirements of your files, saving money on storage. A minimal PowerShell sketch follows this list.
• Use iSCSI Target Server to create centralized, software-based, and hardware-independent iSCSI disk subsystems in storage area networks (SANs).
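As an illustration of the Data Deduplication feature mentioned in the list above, here is a minimal PowerShell sketch; D: is a hypothetical data volume, and deduplication should not be enabled on the operating system volume.

    # Install the deduplication feature
    Add-WindowsFeature FS-Data-Deduplication
    # Enable deduplication on a data volume and start an optimization job
    Enable-DedupVolume D: -UsageType Default
    Start-DedupJob -Volume D: -Type Optimization
    # Review space savings once the job completes
    Get-DedupStatus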
Print Management Use Print Management to view and manage printers and print servers in your organization. You can use Print Management from any computer running Windows Storage Server 2012 or 2012 R2, and you can manage all network printers on print servers running Windows 2000 Server, Windows Server 2003, Windows Storage Server 2003, Windows Storage Server 2003 R2, Windows Storage Server 2008, Windows Storage Server 2008 R2, Windows Storage Server 2012, or Windows Storage Server 2012 R2.
4 Storage management overview This chapter provides an overview of some of the components that make up the storage structure of the storage system. Storage management elements Storage is divided into four major divisions: • Physical storage elements • Logical storage elements • File system elements • File sharing elements Each of these elements is composed of the previous level's elements.
Figure 8 Storage management process example Physical storage elements The lowest level of storage management occurs at the physical drive level. Minimally, choosing the best disk carving strategy includes the following policies: • Analyze current corporate and departmental structure. • Analyze the current file server structure and environment. • Plan properly to ensure the best configuration and use of storage.
Arrays See Figure 9 (page 29). With an array controller installed in the system, the capacity of several physical drives (P1–P3) can be logically combined into one or more logical units (L1) called arrays. When this is done, the read/write heads of all the constituent physical drives are active simultaneously, dramatically reducing the overall time required for data transfer. NOTE: Depending on the storage system model, array configuration may not be possible or necessary.
Table 2 Summary of RAID methods

RAID 0 (Striping, no fault tolerance)
    Maximum number of hard drives: N/A
    Tolerant of single hard drive failure? No
    Tolerant of multiple simultaneous hard drive failures? No
RAID 1+0 (Mirroring)
    Maximum number of hard drives: N/A
    Tolerant of single hard drive failure? Yes
    Tolerant of multiple simultaneous hard drive failures? Yes, if the failed drives are not mirrored to each other
RAID 5 (Distributed Data Guarding)
    Maximum number of hard drives: 14
    Tolerant of single hard drive failure? Yes
    Tolerant of multiple simultaneous hard drive failures? No
RAID 6 (ADG)
    Maximum number of hard drives: Storage system dependent
    Tolerant of single hard drive failure? Yes
    Tolerant of multiple simultaneous hard drive failures? Yes (two drives can fail)

Online spares
Further protection against data loss can be achieved by assigning an online spare (or hot spare)
span multiple LUNs. You can use the Windows Disk Management utility to convert disks to dynamic and back to basic and to manage the volumes residing on dynamic disks. Other options include the ability to delete, extend, mirror, and repair these elements. Partitions Partitions exist as either primary partitions or extended partitions.
space. Each of these folders can contain separate permissions and share names that can be used for network access. Folders can be created for individual users, groups, projects, and so on. File sharing elements The storage system supports several file sharing protocols, including Distributed File System (DFS), Network File System (NFS), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), and Microsoft Server Message Block (SMB).
Interconnect (PCI) adapters) into a virtual adapter. This virtual adapter is seen by the network and server-resident network-aware applications as a single network connection. Management tools HP Systems Insight Manager HP SIM is a web-based application that allows system administrators to accomplish normal administrative tasks from any remote location, using a web browser. HP SIM provides device management capabilities that consolidate and integrate management data from HP and third-party devices.
5 File server management This chapter describes the tasks and utilities that play a role in file server management. File services management Information about the storage system in a SAN environment is provided in the SAN Design Reference Guide, located on the HP web site at www.hp.com/go/SDGManuals. Storage management utilities The storage management utilities preinstalled on the storage system include the HP Array Configuration Utility (ACU).
Some ACU guidelines to consider: • Do not modify the single logical drive of the storage system; it is configured for the storage system operating system. • Spanning more than 14 disks with a RAID 5 volume is not recommended. • Designate spares for RAID sets to provide greater protection against failures. • RAID sets cannot span controllers. • A single array can contain multiple logical drives of varying RAID settings. • Extending and expanding arrays and logical drives is supported.
• Only basic disks can be formatted as FAT or FAT32.
• Read the online Disk Management help found in the utility.

Scheduling defragmentation

Defragmentation is the process of analyzing local volumes and consolidating fragmented files and folders so that each occupies a single, contiguous space on the volume. This improves file system performance. Because defragmentation consolidates files and folders, it also consolidates the free space on a volume.
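On Windows Storage Server 2012/2012 R2, the same analysis and defragmentation can be run from PowerShell (or placed in a scheduled task) with Optimize-Volume; the drive letter below is only an example.

    # Report fragmentation without changing anything
    Optimize-Volume -DriveLetter D -Analyze -Verbose
    # Defragment the volume
    Optimize-Volume -DriveLetter D -Defrag -Verbose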
For more information about disk quotas, read the online help.

Adding storage

Expansion is the process of adding physical disks to an array that has already been configured. Extension is the process of adding new storage space to an existing logical drive on the same array, usually after the array has been expanded. Storage growth may occur in three forms:
• Extend unallocated space from the original logical disks or LUNs (see the sketch after this list).
• Alter LUNs to contain additional storage.
• Add new LUNs to the system.
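For the first growth form, after the LUN has been grown on the array, the host-side extension can be performed with a PowerShell sketch such as the following; the drive letter is an example only, and the Windows Disk Management utility can be used instead.

    # Rescan storage so Windows sees the larger LUN
    Update-HostStorageCache
    # Grow the partition and volume to the maximum supported size
    $max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
    Resize-Partition -DriveLetter D -Size $max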
Expanding storage for EVA arrays using HP P6000 Command View Presenting a virtual disk offers its storage to a host. To make a virtual disk available to a host, you must present it. You can present a virtual disk to a host during or after virtual disk creation. The virtual disk must be completely created before the host presentation can occur. If you choose host presentation during virtual disk creation, the management agent cannot complete any other task until that virtual disk is created and presented.
Shadow copy planning

Before setup is initiated on the server and the client interface is made available to end users, consider the following:
• From what volume will shadow copies be taken?
• How much disk space should be allocated for shadow copies?
• Will separate disks be used to store shadow copies?
• How frequently will shadow copies be made?

Identifying the volume

Shadow copies are taken for a complete volume, but not for a specific directory.
volume instead of the source volume. Remember that when the storage limit is reached, older versions of the shadow copies are deleted and cannot be restored. CAUTION: To change the storage volume, shadow copies must be deleted. The existing file change history that is kept on the original storage volume is lost. To avoid this problem, verify that the storage volume that is initially selected is large enough.
and shadow copies are enabled on it, users cannot access the shadow copies if they traverse from the host volume (where the mount point is stored) to the mounted drive. For example, assume there is a folder F:\data\users, and the Users folder is a mount point for G:\. If shadow copies are enabled on both F:\ and G:\, F:\data is shared as \\server1\data, and G:\data\users is shared as \\server1\users.
Figure 13 Shadow copies stored on a source volume

The cache file location can be altered to reside on a dedicated volume separate from the volumes containing file shares. (See Figure 14 (page 42)).

Figure 14 Shadow copies stored on a separate volume

The main advantage to storing shadow copies on a separate volume is ease of management and performance. Shadow copies on a source volume must be continually monitored and can consume space designated for file sharing.
Enabling and creating shadow copies

Enabling shadow copies on a volume automatically results in several actions:
• Creates a shadow copy of the selected volume.
• Sets the maximum storage space for the shadow copies.
• Schedules shadow copies to be made at 7 a.m. and 12 noon on weekdays.
NOTE: Creating a shadow copy only makes one copy of the volume; it does not create a schedule.
NOTE: After the first shadow copy is created, it cannot be relocated.
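The same shadow copy settings can be inspected or driven from the command line with vssadmin, as in this sketch; D: and E: are example volumes.

    # View current shadow copy storage associations and existing copies
    vssadmin list shadowstorage
    vssadmin list shadows
    # Associate shadow copy storage for D: on a separate volume E: with a 10 GB limit
    vssadmin add shadowstorage /for=D: /on=E: /maxsize=10GB
    # Create a shadow copy of D: immediately
    vssadmin create shadow /for=D: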
1. Access Disk Management.
2. Select the volume or logical drive, then right-click on it.
3. Select Properties.
4. Select the Shadow Copies tab.
5. Select the volume that you want to redirect shadow copies from and ensure that shadow copies are disabled on that volume; if enabled, click Disable.
6. Click Settings.
7. In the Located on this volume field, select an available alternate volume from the list.
NOTE: To change the default shadow copy schedule settings, click Schedule.
8. Click OK.
9.
3. Click the Shadow Copies tab. See Figure 15 (page 45).

Figure 15 Accessing shadow copies from My Computer

Shadow Copies for Shared Folders

Shadow copies are accessed over the network by supported clients and protocols. There are two sets of supported protocols, SMB and NFS. Other protocols, including HTTP, FTP, AppleTalk, and NetWare shares, are not supported. For SMB support, a client-side application denoted as Shadow Copies for Shared Folders is required.
SMB shadow copies Windows users can independently access previous versions of files stored on SMB shares by using the Shadow Copies for Shared Folders client. After the Shadow Copies for Shared Folders client is installed on the user's computer, the user can access shadow copies for a share by right-clicking on the share to open its Properties window, clicking the Previous Versions tab, and then selecting the desired shadow copy. Users can view, copy, and restore all available shadow copies.
point-in-time copies of the file or folder contents that users can then open and explore like any other file or folder. Users can view files in the folder history, copy files from the folder history, and so on. NFS shadow copies UNIX users can independently access previous versions of files stored on NFS shares via the NFS client; no additional software is required. Server for NFS exposes each of a share's available shadow copies as a pseudo-subdirectory of the share.
Recovering an overwritten or corrupted file

Recovering an overwritten or corrupted file is easier than recovering a deleted file because the file itself can be right-clicked instead of the folder. To recover an overwritten or corrupted file:
1. Right-click the overwritten or corrupted file, and then click Properties.
2. Click Previous Versions.
3. To view the old version, click Open. To copy the old version to another location, click Copy. To replace the current version with the older version, click Restore.
1. Create a shadow copy of the source data on the source server (read-only).
2. Mask off (hide) the shadow copy from the source server.
3. Unmask the shadow copy to a target server.
4. Optionally, clear the read-only flags on the shadow copy.
The data is now ready to use.

Folder and share management

The storage system supports several file-sharing protocols, including DFS, NFS, FTP, HTTP, and Microsoft SMB.
Figure 17 Properties screen, Security tab

Several options are available on the Security tab:
• To add users and groups to the permissions list, click Add. Follow the dialog box instructions.
• To remove users and groups from the permissions list, highlight the desired user or group, and then click Remove.
• The center section of the Security tab lists permission levels.
3.
Figure 18 Advanced Security settings screen, Permissions tab

Other functionality available in the Advanced Security Settings screen is illustrated in Figure 18 (page 51) and includes:
• Add a new user or group—Click Add, and then follow the dialog box instructions.
• Remove a user or group—Click Remove.
• Replace permission entries on all child objects with entries shown here that apply to child objects—This allows all child folders and files to inherit the current folder permissions by default.
4.
Figure 19 User or group Permission Entry screen Another area of the Advanced Security Settings is the Auditing tab. Auditing allows you to set rules for the auditing of access, or attempted access, to files or folders. Users or groups can be added, deleted, viewed, or modified through the Advanced Security Settings Auditing tab.
Figure 20 Advanced Security Settings screen, Auditing tab 5. Click Add to display the Auditing Entry screen. Figure 21 Auditing Entry for New Volume screen 6. Click Select a principal to display the Select User or Group screen.
Figure 22 Select User or Group screen

NOTE: Click Advanced to search for users or groups.
7. Select the user or group.
8. Click OK.
9. Select the desired Successful and Failed audits for the user or group.
10. Click OK.
NOTE: Auditing must be enabled to configure this information. Use the local Computer Policy Editor to configure the audit policy on the storage system.
The Owner tab allows taking ownership of files.
2. If it is also necessary to take ownership of subfolders and files, enable the Replace owner on subcontainers and objects box.
3. Click OK.

Share management

There are several ways to set up and manage shares. Methods include using Windows Explorer, a command line interface, or Server Manager.
NOTE: Select servers can be deployed in a clustered as well as a non-clustered configuration. This chapter discusses share setup for a non-clustered deployment.
This method results in a hierarchical security model where the network protocol permissions and the file permissions work together to provide appropriate security for shares on the device. NOTE: Share permissions and file-level permissions are implemented separately. It is possible for files on a file system to have different permissions from those applied to a share. When this situation occurs, the file-level permissions override the share permissions.
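To make the interplay concrete, the following sketch creates a share with permissive share-level access and then restricts effective access with NTFS (file-level) permissions; the share name, path, and group are hypothetical examples.

    # Create the share with open share-level permissions
    New-SmbShare -Name "Engineering" -Path "D:\Shares\Engineering" -FullAccess "Everyone"
    # Restrict effective access with NTFS permissions on the folder
    icacls "D:\Shares\Engineering" /grant "EXAMPLEDOMAIN\Engineering:(OI)(CI)M"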
Quota management

On the Quota Management node of the File Server Resource Manager snap-in, you can perform the following tasks:
• Create quotas to limit the space allowed for a volume or folder and generate notifications when the quota limits are approached or exceeded.
• Generate auto quotas that apply to all existing folders in a volume or folder, as well as to any new subfolders created in the future.
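A brief PowerShell sketch of the quota tasks listed above, using the File Server Resource Manager cmdlets in Windows Server 2012 R2; the path, size, and template name are examples.

    # Create a 5 GB quota on a folder
    New-FsrmQuota -Path "D:\Shares\Users" -Size 5GB
    # Apply auto quotas to every existing and future subfolder, based on a quota template
    New-FsrmAutoQuota -Path "D:\Shares\Users" -Template "200 MB Limit Reports to User"
    # Review configured quotas and current usage
    Get-FsrmQuota -Path "D:\Shares\Users"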
6 Cluster administration One important feature of HP StoreEasy 3000 Storage systems is that they can operate as a single node or as a cluster. This chapter discusses cluster installation and cluster management issues. Cluster overview A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software.
Resources Hardware and software components that are managed by the cluster service are called cluster resources. Cluster resources have three defining characteristics: • They can be brought online and taken offline. • They can be managed in a cluster. • They can be owned by only one node at a time. Examples of cluster resources are IP addresses, network names, physical disk resources, and file shares. Resources represent individual system components.
service attempts to transfer the group to the next node on the preferred owner's list. If the transfer is successful, the resources are brought online in accordance with the resource dependency structure. The system failover policy defines how the cluster detects and responds to the failure of individual resources in the group. After a failover occurs and the cluster is brought back to its original state, failback can occur automatically based on the policy.
Figure 25 Cluster concepts diagram

Sequence of events for cluster resources

The sequence of events in the diagram includes:
1. Physical disks are combined into RAID arrays and LUNs.
2. LUNs are designated as basic disks, formatted, and assigned a drive letter via Disk Manager.
3. Physical Disk resources are created for each basic disk inside Failover Cluster Management.
4. Directories and folders are created on assigned drives.
5.
• An IP Address resource is formed in the group and relates to the IP address by which the group's virtual server is identified on the network. • A Network Name resource is formed in the group and relates to the name published on the network by which the group is identified. • The Group is owned by one of the nodes of the cluster, but may transition to the other nodes during failover conditions. The diagram illustrates a cluster containing two nodes. Each node has ownership of one group.
NOTE: The LUN underlying the basic disk should be presented to only one node of the cluster using selective storage presentation or SAN zoning, or having only one node online at all times until the physical resource for the basic disk is established. In preparing for the cluster installation: • All shared disks, including the Quorum disk, must be accessible from all nodes. When testing connectivity between the nodes and the LUN, only one node should be given access to the LUN at a time.
Table 3 Sharing protocol cluster support

Protocol  | Client Variant                  | Cluster Aware (supports failover) | Supported on cluster nodes
SMB       | Windows                         | Yes                               | Yes
NFS       | UNIX, Linux                     | Yes                               | Yes
HTTP      | Web                             | No                                | Yes
FTP       | Many                            | Yes                               | Yes
NCP       | Novell                          | No                                | Yes
AppleTalk | Apple                           | No                                | No
iSCSI     | Standards-based iSCSI initiator | Yes                               | Yes

NOTE: AppleTalk is not supported on clustered disk resources. AppleTalk requires local memory for volume indexing. On failover events, the memory map is lost and data corruption can occur.
• A domain user account for Cluster service (all nodes must be members of the same domain) • Each node should have at least two network adapters—one for connection to the public network and the other for the node-to-node private cluster network. If only one network adapter is used for both connections, the configuration is unsupported. A separate private network adapter is required for HCL certification.
Setting up networks Verify that all network connections are correct, with private network adapters connected to other private network adapters only, and public network adapters connected to the public network. Configuring the private network adapter The following procedures are best practices provided by Microsoft and should be configured on the private network adapter. • On the General tab of the private network adapter, ensure that only TCP/IP is selected.
Configuring shared disks

Use the Windows Disk Management utility to configure additional shared disk resources. Verify that all shared disks are formatted as NTFS and are designated as Basic. Additional shared disk resources are automatically added into the cluster as physical disk resources during the installation of cluster services.

Verifying disk access and functionality

Write a file to each shared disk resource to verify functionality.
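A quick scripted equivalent of this check is sketched below; the drive letter is an example of a shared disk resource.

    # Write a test file to the shared disk, read it back, then remove it
    Set-Content -Path E:\cluster-disk-test.txt -Value "shared disk write test"
    Get-Content -Path E:\cluster-disk-test.txt
    Remove-Item -Path E:\cluster-disk-test.txt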
The following rules must be followed with geographically dispersed clusters: • A network connection with latency of 500 milliseconds or less ensures that cluster consistency can be maintained. If the network latency is over 500 milliseconds, the cluster consistency cannot be easily maintained. • All nodes must be on the same subnet. Cluster groups and resources, including file shares The Failover Cluster Management tool provides complete online help for all cluster administration activities.
File share resource planning issues SMB and NFS are cluster-aware protocols that support the Active/Active cluster model, allowing resources to be distributed and processed on both nodes at the same time. For example, some NFS file share resources can be assigned to a group owned by a virtual server for Node A and additional NFS file share resources can be assigned to a group owned by a virtual server for Node B. Configuring the file shares as cluster resources provides for high availability of file shares.
• Map properly.
    ◦ Valid UNIX users should be mapped to valid Windows users.
    ◦ Valid UNIX groups should be mapped to valid Windows groups.
    ◦ The mapped Windows user must have the “Access this computer from the network” privilege, or the mapping will be squashed.
    ◦ The mapped Windows user must have an active password, or the mapping will be squashed.
• In a clustered deployment, create user name mappings using domain user accounts.
NOTE: • Physical disk resources usually do not have any dependencies set. • In multi-node clusters it is necessary to specify the node to move the group to. When a cluster group is moved to another node, all resources in that group are moved. • When a physical disk resource is owned by a node, the disk appears as an unknown, unreadable disk to all other cluster nodes. This is a normal condition. When the physical disk resource moves to another node, the disk resource then becomes readable.
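When building the physical disk and file share resources discussed above from PowerShell rather than Failover Cluster Management, a typical sketch looks like the following; the role, disk, address, share, and path names are hypothetical.

    # Create a clustered file server role backed by a clustered disk
    Add-ClusterFileServerRole -Name FS01 -Storage "Cluster Disk 2" -StaticAddress 192.168.1.50
    # Create an SMB share scoped to that clustered file server
    New-SmbShare -Name "Projects" -Path "E:\Shares\Projects" -ScopeName FS01 -FullAccess "EXAMPLEDOMAIN\ProjectAdmins"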
MSNFS administration on a server cluster

The Microsoft Services for Network File System (NFS) online help provides server cluster information for the following topics:
• Configuring shared folders on a server cluster
    ◦ Configuring an NFS share as a cluster resource
    ◦ Modifying an NFS shared cluster resource
    ◦ Deleting an NFS shared cluster resource
• Using Microsoft Services for NFS with server clusters
    ◦ Understanding how Server for NFS works with server clusters
    ◦ Using Server for NFS on a
1. Create a dedicated group (if desired).
2. Create a physical resource (disk) (if required, see note).
3. Create an IP address resource for the Virtual Server to be created (if required, see note).
4. Create a Virtual Server Resource (Network Name) (if required, see note).
NOTE: If the printer spool resource is added to an existing group with a physical resource, IP address, and virtual server resource, steps 1–4 are not required.
5. Create a Print Spool resource.
6.
The physical process of restarting one of the nodes of a cluster is the same as restarting a storage system in a single-node environment. However, additional caution is needed. Restarting a cluster node causes all cluster resources served by that node to fail over to the other nodes in the cluster based on the failover policy in place. Until the failover process completes, any currently executing read and write operations will fail.
7 Troubleshooting, servicing, and maintenance

The storage system provides several monitoring and troubleshooting options. You can access the following troubleshooting alerts and solutions to maintain the system health:
• Notification alerts
• System Management Homepage (SMH)
• Hardware component LEDs
• HP and Microsoft support websites
• HP Insight Remote Support software
• Microsoft Systems Center Operations Manager (SCOM) and Microsoft websites
• HP SIM 6.
go to http://www.hp.com. Search for your specific product or the underlying server platform (for example, ProLiant DL320 Gen8 server) to find specific updates.
• HP recommends updating the operating system, software, firmware, and NIC drivers simultaneously (in the same update window) to ensure proper operation of the storage system.

Determining the current storage system software version

You can find the current version using the registry. From the registry:
1. Log in to the server blade.
2.
4. Select Home/work (Private) and Public and click OK.
5. To access the SMH on another server, enter the following URL: https://<server name or IP address>:2381
NOTE: Port 2381 may need to be opened in the system’s firewall, if applicable; a PowerShell example follows the figure description below.

System Management Homepage main page

Figure 26 (page 77) shows the SMH main page.

Figure 26 System Management Homepage main page

The page provides system, subsystem, and status views of the server and displays groupings of systems and their status.
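If port 2381 must be opened as noted above, one way to do it from PowerShell is sketched below; the rule name is arbitrary.

    New-NetFirewallRule -DisplayName "HP System Management Homepage" -Direction Inbound -Protocol TCP -LocalPort 2381 -Action Allow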
Enclosure

This section provides information about the enclosure cooling, IDs, power, Unit Identification LED, PCIe devices, and I/O modules.
NOTE: A large number of disk errors may indicate that an I/O module has failed. Inspect the I/O module LEDs on the storage system and any disk enclosures, and replace any failed component.
• Because both a system and drive fan are required, the maximum and minimum number of fans required is two. If either fan becomes degraded, the system could shut down quickly.
• Logical Drives
  A list of logical drives associated with the controller appears in the left panel tree view. Select one of the logical volume entries to display the status of the volume, fault tolerance (RAID level), and capacity (volume size). A link to the logical volume is also displayed.
• Tape Drives
  This section provides information about tape drives, if they are included.
Table 5 Known issues (continued)

Issue: Mounted data volumes are not remounted after performing a system recovery. These data volumes are not damaged or destroyed, but they are not visible after a system recovery operation.
Resolution: In order to restore the mount points to their original locations, you must record them prior to running system recovery.
    1. Using Windows Disk Manager, record the mount points of the volumes within the root directory of each volume.
    2.
Table 5 Known issues (continued) Issue Resolution failed with the following the opportunity to assign network addresses to other network interfaces. After error message: The WinRM addresses are assigned the network interfaces can be reconnected or enabled. client cannot process the request.
Table 5 Known issues (continued)

Issue (continued): ...Please ensure that all GPT disks have a Microsoft Reserved Partition (MSR) present. Error: 0x80070022
Resolution (continued):
    a. Get-Partition -DiskNumber <disk number> | Remove-Partition -Confirm
    b. New-Partition -DiskNumber <disk number> -Size 128MB -GptType '{e3c9e316-0b5c-4db8-817d-f92df00215ae}'
    c. New-Partition -DiskNumber <disk number> -Size <size> -AssignDriveLetter | Format-Volume -Force
    3.
Table 5 Known issues (continued)

Resolution (continued):
    d. create partition msr size=128
    e. exit
    3. Enter the Get-Partition -DiskNumber <disk number> command to verify the new partitions.

Issue: The SMI-S provider registration with the HP StoreEasy 3000 Storage might fail due to the following reasons:
Resolution: To register the SMI-S provider with the HP StoreEasy 3000 Storage, ensure that the array is reachable from the node. Perform the following steps on all nodes:
    1. Open an elevated PowerShell command prompt.
Table 6 HP Insight Management CSP WBEM Providers for Windows errors (continued)

Error code: 0x916
    Description: Enclosure provider is unable to build internal lists. Blade classes may fail.
    Source: HP CSP WBEM Providers
    Event Log Entry Type: Error
    Resolution: Check the provider logs for details.

Error code: 0x917
    Description: Enclosure provider is unable to connect to health driver. Many or all classes may fail.
    Source: HP CSP WBEM Providers
    Event Log Entry Type: Error
    Resolution: Check the provider logs for details. Also report to the Support Team.
IMPORTANT: Some troubleshooting procedures found in ProLiant server guides may not apply to the storage system. If necessary, check with your HP Support representative for further assistance. For HP StoreEasy 3000 Storage guides, go to http://www.hp.com/support/StoreEasy3000Manuals. For specific ProLiant model documentation, go to: http://www.hp.com/go/proliantgen8/docs For software-related components and issues, online help or user guide documentation may offer troubleshooting assistance.
8 Storage system recovery This chapter describes how to perform a system recovery. To restore the HP StoreEasy 3000 Storage system to the factory defaults, see “Restoring the factory image with a DVD or USB flash device” (page 86). System Recovery DVD The System Recovery DVD enables you to install an image or recover from a catastrophic failure. At any time, you may boot from the DVD and restore the server to the factory condition.
2. Reboot the server blade to either the USB flash device or USB DVD drive. The system BIOS attempts to boot to the USB device first by default. Watch the monitor output during the boot as you may need to press a key to boot to the USB media. NOTE: If directly connected, you may have to change the BIOS settings to ensure proper boot sequence. If connected remotely, you may have to change some iLO settings to ensure proper boot sequence. 3. Click Restore Factory Image.
Recovering both servers

If both server blades are being recovered, the process is similar to configuring a new HP StoreEasy 3000 Storage system delivered from the factory.
NOTE: Although the recovery process restores the HP StoreEasy 3000 Storage system to the factory version, it does not restore the EMU and iLO address configuration to the factory defaults. The EMU and iLO address configuration will be the same as it was prior to system recovery.
6. Select the time and date shown in the lower right corner of the task bar. Click the Change date and time settings link.
7. Set the time zone of the server to be the same time zone as the other 3840 server and the domain controller. Adjust the time of day, if needed.
Windows Server Manager opens when the ICT window is closed. If it is not open, launch it from the shortcut on the task bar to the right of the Windows Start button.
2. Reboot the server to either the USB flash device or USB DVD drive. The system BIOS attempts to boot to the USB device by default. Watch the monitor output during the boot as you may need to press a key to boot to the USB media. NOTE: If directly connected, you may have to change the BIOS settings to ensure proper boot sequence. If connected remotely, you may have to change some iLO settings to ensure proper boot sequence. 3. In Windows Boot Manager, select Windows Recovery Environment.
17. Click Yes on the confirmation message to proceed with Windows recovery. IMPORTANT: Do not interrupt the recovery process. 18. Remove the directly connected DVD or flash device (or remotely connected iLO virtual DVD or flash device) from the server.
9 Support and other resources Contacting HP HP technical support For worldwide technical support information, see the HP support website: http://www.hp.
Rack stability Rack stability protects personnel and equipment. WARNING! To reduce the risk of personal injury or damage to equipment: • Extend leveling jacks to the floor. • Ensure that the full weight of the rack rests on the leveling jacks. • Install stabilizing feet on the rack. • In multiple-rack installations, fasten racks together securely. • Extend only one rack component at a time. Racks can become unstable if more than one component is extended.
10 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A Operating system logical drives The logical disks reside on physical drives as shown in Storage system RAID configurations (page 95). IMPORTANT: The first two logical drives are configured for the storage system operating system. The Operating System volume default factory settings can be customized after the operating system is up and running.
B Regulatory information

For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at http://www.hp.com/support/Safety-Compliance-EnterpriseProducts.

Belarus Kazakhstan Russia marking

Manufacturer and Local Representative Information
Manufacturer’s information:
• Hewlett-Packard Company, 3000 Hanover Street, Palo Alto, California 94304, U.S.
HP Enterprise Servers: http://www.hp.com/support/EnterpriseServers-Warranties
HP Storage Products: http://www.hp.com/support/Storage-Warranties
HP Networking Products: http://www.hp.
Glossary

The following glossary terms and definitions are provided as a reference for storage products.

Glossary terms
ACL          Access control list.
ADS          Active Directory Service.
array        A synonym of storage array, storage system, and virtual array. A group of disks in one or more disk enclosures combined with controller software that presents disk storage capacity as one or more virtual disks.
backups      A read-only copy of data copied to media, such as hard drives or magnetic tape, for data protection.
LUN          Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN.
mount point  A host's file system path or directory name where a host volume (device) is accessed.
NAS          Network attached storage.
NCT          Network Configuration Tool.
NFS          Network file system. The protocol used in most UNIX environments to share folders or mounts.
Index A D access rights, managing, 69 Accessing the storage system Remote Desktop method, 23 ACL, defining, 55 adding node to existing cluster, 21 Array Configuration Utility, 34 array controller, purpose, 29 arrays, defined, 29 backup, with shadow copies, 48 basic disks, 30, 31 Belarus Kazakhstan Russia EAC marking, 96 data blocks, 29 Data Deduplication, 25 data striping, 29 disk access, verifying, 67 Disk Management extending volumes, 37 documentation providing feedback on, 94 domain membership, verif
G O GPT partitions, 31 group, cluster, 62 groups, adding to permissions list, 50 online spares, 30 operating system logical drives, 95 OpsMgr see Microsoft Systems Center Operations Manager (SCOM) H hardware components HP StoreEasy 3840 Gateway Storage, 8 HP StoreEasy 3840 Gateway Storage Blade, 11 HP Array Configuration Utility, 34 Storage Manager, 34 HP Initial Configuration Tasks, 17 HP StoreEasy 3840 Gateway Storage hardware components, 8 HP StoreEasy 3840 Gateway Storage Blade hardware components,
factory image, 86 S SAN environment, 34 security auditing, 52 file level permissions, 49 ownership of files, 54 serial number, 15 server power on, 16 Server Core, using, 22 Services for UNIX, 31, 32 setting up overview, 15 setup completion, 20 shadow copies, 32 backups, 48 cache file, 41 defragmentation, 40 described, 38 disabling, 44 file or folder recovery, 47 in a cluster, 71 managing, 41 mounted drives, 41 on NFS shares, 47 on SMB shares, 46 planning, 39 redirecting, 43 scheduling, 43 uses, 38 viewing