HP 9000 Virtual Library System User Guide
For VLS Firmware 6.1.0

Abstract
This document describes the HP VLS9000-series systems to facilitate their installation, operation, and maintenance. This document is intended for system administrators who are experienced with setting up and managing large storage systems.
© Copyright 2007, 2012 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Acknowledgments
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
Contents
1 Introduction
2 Hardware Installation
3 Multi-node Setup
4 Storage Configuration
5 Automigration/Replication
6 Deduplication
7 Operation
8 User Interfaces
9 Configuration
10 Management
11 Monitoring
Later sections cover component LEDs and buttons, component removal and replacement, and hardware specifications for the disk array enclosures, Fibre Channel switches, and Ethernet switches.
1 Introduction

The HP Virtual Library System (VLS) family consists of RAID disk-based SAN backup devices that emulate physical tape libraries, allowing you to perform disk-to-virtual-tape (disk-to-disk) backups using your existing backup applications. The VLS family includes different series of models that vary in storage capacity and performance. Firmware version 6.0.0 marked the change to a 64-bit operating system on the nodes.
VLS9000 system scalability considerations (a worked example follows this list):
• Two Fibre Channel ports (one port on each Fibre Channel switch) are required for each VLS node.
• Two Fibre Channel ports (one port on each Fibre Channel switch) are required for each VLS base enclosure.
• Up to two VLS arrays may be installed for every VLS node.
• For maximum capacity, install two arrays for every VLS node installed.
• For maximum performance, install one VLS array for every VLS node installed.
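As an illustration of these rules (the configuration is hypothetical and assumes one base enclosure per array): a four-node system built out for maximum capacity has eight arrays, so each of the two Fibre Channel switches carries four node ports plus eight base-enclosure ports, or twelve ports in total.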
2 Hardware Installation

This section details the steps to install the VLS hardware, from installation preparation to final cabling.

Minimum Hardware Requirements
VLS9000 systems upgrading to firmware 6.
• Use conductive field service tools.
• Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller install the part. For more information on static electricity or assistance with product installation, contact your authorized reseller.

Unpacking
Place the shipping carton as close to the installation site as possible.
Rack Requirements
HP supports the HP System E racks and the HP 10000 Series racks for use with VLS systems. Other racks might also be suitable, but have not been tested with the VLS.

Rack Warnings
WARNING! To reduce the risk of personal injury or damage to the equipment, before installing equipment be sure that:
• The leveling jacks are extended to the floor.
• The full weight of the rack rests on the leveling jacks.
• The stabilizing feet are attached to the rack if it is a single-rack installation.
Item  Description
3     VLS badge (1)
4     8 Gb FC transceivers (2)
5     Power cords (2)
6     LTU (1 for 10 TB, 2 for 20 TB)
7     Printed VLS array installation poster (1)
Ethernet cables (2) and FC cables (2), not shown (shipped separately)

VLS9200 Capacity Enclosure Shipping Cartons
Item  Description
1     VLS9200 capacity enclosure, 10 TB or 20 TB (1)
2     1U rack mounting hardware kit (1)
3     VLS badge (1)
4     SAS cables (2)
5     Power cords (2)
6     LTU (1 for 10 TB, 2 for 20 TB)
7     Printed VLS array installation poster (1)
VLS9200 Node Shipping Carton
Item  Description
1     VLS9200 node (1)
2     1U rack mounting hardware kit (1) and documentation
3     Loopback plugs for FC ports (2)
4     8 Gb FC transceivers (2)
5     Power cords (2)
6     Quick Restore CD (1)
7     Printed VLS node installation poster (1)
Ethernet cables (2) and Fibre Channel cables (2), not shown (shipped separately)

VLS9200 High Performance Node Shipping Carton
Item  Description
1     VLS9200 high performance node (1)
2     2U rack mounting hardware kit (1) and documentation
Item  Description
7     Printed VLS node installation poster (1)
Ethernet cables (2) and Fibre Channel cables (2), not shown (shipped separately)

VLS9000 40-port Connectivity Kit Shipping Carton
Item  Description
1     Ethernet switches (2)
2     20-port FC switches (2)
3     1U rack mount kits (4) and documentation
4     Power cords (8)
5     Printed VLS connectivity kit installation poster (1)
Air plenums for the Ethernet switches (2), not shown
Ethernet cables (3), not shown (shipped separately)

VLS9000 Entry-level Connectivity Kit Shipping Carton
VLS Assembly Overview
HP recommends you install the VLS9000 and VLS9200 components in the following order:
1. Install base and capacity disk array enclosures using HP 9200 Virtual Library System 10 TB and 20 TB SAS Base Enclosure Installation Instructions and HP 9200 Virtual Library System 10 TB and 20 TB SAS Capacity Enclosure Installation Instructions.
1. Determine the number of PDUs to install.
• The number of PDUs you install is based on the number of arrays to install.
• Install up to four arrays in one rack.
• Install up to a maximum of four additional arrays in racks two through four.
Use the following table to determine how many PDUs to install:
Arrays   PDUs (North America)   PDUs (Europe)   PDMs (North America)   PDMs (Europe)
1        2                      2               6                      6
2        2                      2               6                      6
3        4                      4               10                     8
4        4                      4               10                     8
NOTE: PDUs are installed in pairs.
Figure 1 PDU and PDM locations

Installing the Disk Array Enclosures into a Rack
This section describes how to install the disk array enclosures into a rack.

Installing Cage Nuts
1. Locate the cage nuts from the rack mounting hardware kit contents.
2. Start at rack positions 3 and 4 when installing full arrays.
3. Leave rack space for future expansion for any partial array being installed. See "Mounting the Disk Array Enclosures into the Rack" (page 25) for the placement of the enclosures.
7. On the rear vertical posts, starting at the same rack positions as in the front, install a cage nut in the middle hole of each position for each 2U enclosure to be installed.

Attaching Side Brackets to Enclosures
NOTE: The right and left enclosure side brackets are identical. Install the brackets with the beveled slots facing away from the disk array enclosures.
To attach enclosure side brackets to each side of a disk array enclosure, use two #8-32 x 3/16-inch flathead screws on each side.
1. Locate the front and rear rail pieces and screws from the rack mounting hardware kit contents.
NOTE: The front rail piece has three long, beveled slots. The rear rail piece has holes.
2. Slide the rear rail piece behind the front rail piece so the brackets are at opposite ends and bend away from you.
3. Line up the center of the beveled slots on the front rail piece with the first, third, and fifth holes in the rear rail piece, counting from the unbent end.
Mounting the Enclosures into the Rack WARNING! The enclosure weighs 33.6 kg (74 lb) full. At least two people are required to lift, move, and install the enclosure. If only one person is to perform the installation, remove the power modules and hard drives from an enclosure before installing it, and if possible position it on top of another device or shelf in the rack to hold it as you attach all the brackets.
4. Tighten all four screws.
5. Repeat this procedure to install up to two more capacity enclosures above the previous one.
6. At the top of the capacity enclosures, install the base enclosure.
7. Install the remaining base and capacity enclosures:
• If you are installing four full arrays, continue installing three capacity enclosures beneath each base enclosure working up the rack.
• If you are installing fewer than four full arrays, begin at rack position 17.
Table 1 Cabling the Base Enclosure
Item 1, FC port 1: Array 1 base enclosure connects to port 9 of Fibre Channel switch #1 (FC SW1) via FC cable. Additional base enclosures connect to the next available port on FC SW1 via FC cable; cable additional base enclosures to the switch ports in this order: 19, 8, 18, 7, 17, 6, 16.
Item 2, FC port 1: Array 1 base enclosure connects to port 9 of Fibre Channel switch #2 (FC SW2) via FC cable.
Installing the VLS Node into a Rack
NOTE: If you are installing the node into a telco rack, order the appropriate option kit at the RackSolutions.com web site: http://www.racksolutions.com/hp. Follow the instructions on the web site to install the rack brackets.
1. Locate the rail kit, part number 360332-003.
2. Install the two outer slide rails to the rack. The outer rails are marked "FRONT" and "REAR." On both sides of the rack, align the rail holes with the holes in the rack and secure with thumbscrews.
Cabling the Node
Table 3 Cabling the Node
Item 1, FC port 4: Primary node connects to port 0 of Fibre Channel switch #2 in rack 1 (FC SW2) via FC cable. Secondary nodes connect to the next available port on FC SW2 via FC cable; cable secondary nodes to the switch ports in this order: 10, 1, 11, 2, 12, 3, 13.
Item 2, FC port 3: Primary node connects to port 0 of Fibre Channel switch #1 in rack 1 (FC SW1) via FC cable.
Installing the VLS High Performance Node into a Rack
NOTE: If you are installing the node into a telco rack, order the appropriate option kit at the RackSolutions.com web site: http://www.racksolutions.com/hp. Follow the instructions on the web site to install the rack brackets.
1. Locate the rail kit.
2. Install the two outer slide rails to the rack. If your rack contains single phase PDUs, you will install the node in rack positions 35 and 36.
3. Connect one end of an Ethernet cable to NIC 1 on the primary node. Connect the other end to the external network. If your configuration contains one connectivity kit, the VLS9200 hardware installation is complete. Continue installation by configuring the identities of each node and array. See the HP 9200 Virtual Library System User Guide.
Installing Cage Nuts and Rail Flanges 1. On the rack vertical posts, mark the holes (three on each front vertical post and two on each rear vertical post) that will be used by the rail flanges. Then, from the inside of each vertical post, insert a cage-nut into each marked hole. 2. From the front of the rack, secure the mounting flanges to the marked holes, using screws shipped with the rails. Attach a washer and nut to the posts at the end of each mounting flange.
Mounting Ethernet Switch 6600-24G into the Rack
1. At rack position 39, from the back of the rack align the grooved ends of the switch rails with the posts on the mounting flanges. Placing the grooved ends between the mounting flange and the loose washer and nut provides guidance.
2. Slide the switch fully into the rack.
3. Tighten the washer and nut on both sides of the rack to secure the switch rails to the mounting flanges.
Cabling Ethernet Switches
Table 4 Cabling Ethernet Switch #1 (SW1)
Port 1: NIC 3 of primary node via Ethernet cable
Ports 2–8: NIC 3 of secondary nodes (if present) via Ethernet cable
Ports 9–18, 20: Ethernet port of RAID controller 2 of additional base array enclosures (if present) via Ethernet cable
Port 19: Ethernet port of RAID controller 2 of the first base array enclosure via Ethernet cable
Port 21: Port 21 of Ethernet switch #2 (SW2) in a se
Table 5 Cabling Ethernet Switch #2 (SW2) (continued)
Port 23: Ethernet port of FC switch #2 (FC SW2) via Ethernet cable
Port 24: Port 24 of Ethernet switch #1 (SW1) via Ethernet cable

Table 6 Cabling Ethernet Switch #3, if present (SW1 of a second kit)
Ports 17–20: Ethernet port of RAID Controller 2 of additional base array enclosures via Ethernet cable

Table 7 Cabling Ethernet Switch #4, if present (SW2 of a second kit)
1. If the metal mounting brackets are not attached to the switch, attach them as follows:
a. Align the brackets so that the four screw holes are against the side of the switch. The side of the bracket with two screw holes extends from the switch and aligns with the front of the bezel.
b. Adjust alignment so that the holes in the side of the mounting bracket line up with the holes in the switch.
3. Secure Ethernet cables with a Velcro® tie to the right side of the rack.

Installing the Fibre Channel Switches into a Rack
Installing the switches into the rack involves attaching rails to the Fibre Channel switches and then mounting them into the rack. Install the switches immediately above the nodes previously installed.
1. Locate the following items and set them aside on a stable work surface:
5. From the front of the rack, secure the adjustable mounting flanges to the marked holes, using screws shipped with the rails.
6. From the rear of the rack, slide the racking shelf assembly with the Fibre Channel switch into the rack, sliding the rail ends onto the adjustable mounting flanges already installed in the front rack vertical posts.
7. When the rail flanges are flush with the rack vertical posts, secure them to the rack.
8. Attach two 1U cover plates to the front of the rack.
Cabling Fibre Channel Switches
Table 9 Cabling Fibre Channel Switch #1 (FC SW1)
Port 0: FC Port 3 of primary node via FC cable
Ports 1–3: FC Port 3 of secondary nodes (if present) via FC cable
Ports 4–8: Port 0 of RAID controller 1 of additional arrays (if present) via FC cable
Port 9: Port 0 of RAID controller 1 of first array via FC cable
Ports 10–13: FC Port 3 of secondary nodes (if present) via FC cable
Ports 14–19: Port 0 of RAID controller 1 of additional arrays (if present) via FC cable
NOTE: Fibre Channel switch #1 is on the bottom and switch #2 is on the top. If present, Fibre Channel switch #3 is on the bottom and switch #4 is on the top.
1. Connect the Fibre Channel switches to the nodes, base array enclosures, and Ethernet switches if not already connected using Table 9 (page 39) and Table 10 (page 39).
2. If you are installing more than one array:
a. Connect a Fibre Channel cable from Fibre Channel switch #1 to port 0 of each additional RAID controller 1 (array 1, array 2, etc.
15. A prompt asks if you want to log out. Enter y. The switch logs off, and the spanning tree is now reconfigured to include the new switch. 16. Repeat this procedure for the remaining Ethernet switches in racks 1 and 3. NOTE: After reconfiguring the Ethernet switches, power down the entire VLS system. See Powering Off the System for instructions. Installing XPAK Transponders The XPAK transponders (XPAKs) plug into the 10 Gb Fibre Channel ports in the front of the 20–port Fibre Channel switches.
Applying ISL Kit Labels
Locate the labels supplied in the interswitch link kit contents. As you install each cable in the following sections, apply the appropriate label to each cable end.
NOTE: The labels for interlinking the switches use "A" to indicate rack 1 and "B" to indicate rack 3. For example, an Ethernet cable label will read, "SW6600-24A port 22 TO SW6600-24B port 22."

Installing Interswitch Fibre Channel Cables
1. Locate the Ethernet cables included in the interswitch link kit contents.
2. Connect the Ethernet cables from the switches in rack 1 to the switches in rack 4 as shown in the figure and table below.
   From Rack 1          To Rack 4
   Switch 0 port 22     Switch 0 port 22
   Switch 1 port 22     Switch 1 port 22
   Switch 0 port 21     Switch 1 port 21
   Switch 1 port 21     Switch 0 port 21
3. Secure Ethernet cables to the right side of the rack using a Velcro® tie.
3 Multi-node Setup

This section explains how to configure the identities of each node after the nodes and other components of the system are installed and cabled.
NOTE: The Fibre Channel and Ethernet switches should be powered on before configuring the nodes.
NOTE: The VLS system can be configured remotely using iLO with virtual terminal or virtual media; see the iLO user guide for details.

Configuring the Primary Node 0
To configure the primary node:
1. Power on array 0.
4 Storage Configuration

This section describes how to configure the storage pool policy and add or remove storage as needed after the nodes have been configured.

Managing VLS Capacity
There are several ways to manage the capacity of your system:
• Increase the number of VLS nodes.
• Increase the number of VLS capacity enclosures. See Adding VLS Capacity.
• Reduce the number of VLS disk array enclosures. See Removing VLS Capacity.
• Create storage pools. See Configuring the Storage Pool Policy.
4. Power on the enclosure. See Powering on VLS Arrays (page 91).
5. Add the new disk array storage to the VLS using Command View VLS:
a. Select the System tab.
b. In the navigation tree, select Storage LUNs.
c. Select Discover Unconfigured Storage from the task bar. The VLS locates the new array or capacity enclosure and the screen displays the LUN capacity that will be added to the storage pools, based on the storage pool policy, as a result of the new enclosure.
7. Click Finish. Viewing the Storage Pool To view the storage pool information from Command View VLS: 1. Select the System tab. 2. Expand Storage Pools in the navigation tree. 3. Select the storage pool of interest in the navigation tree. The storage pool details window opens. Rebuilding all Storage Pools To delete all information on the VLS9200 arrays and reformat them, perform a Rebuild All Storage Pools from Command View VLS.
Adding New Arrays to the Storage Pool If you add a new array or disk array enclosure and run the Discover Unconfigured Storage task (see “Adding VLS Capacity” (page 45)) but cancel the process without adding the LUN capacity, you can resume the process later. 1. Select the System tab. 2. Select Storage Pools in the navigation tree. 3. On the Storage Pools screen, select Run Pool Policy from the task bar.
• Secure Erasure: When you delete a cartridge, this feature overwrites deleted cartridge data with a specific data pattern so the data cannot be recovered. This is comparable to tape shredding of physical tapes. This only applies to firmware version 6.0 and higher.
• iLO 2 Advanced: VLS nodes are shipped with the HP Integrated Lights-Out (iLO) Standard feature for remote management. However, you need a license to use the iLO 2 Advanced features, including Virtual Media and Remote Console.
5 Automigration/Replication

Instead of the preferred method of copying virtual media to physical media via the backup application, another option is to perform transparent tape migration via the VLS device using automigration. Automigration describes the feature in which the Virtual Library System acts as a tape copy engine that transfers data from virtual cartridges on disk to a physical tape library connected to the VLS device.
• The destination library can only be used for copy operations. • Echo copy is a full tape copy, rather than an incremental change copy, so it can be an inefficient use of media if you are using non-appending copy pools in your backup jobs. An echo copy pool is used to define which destination library slots are to be echoed into a specified virtual library.
Replication can be configured to operate in one of two modes: • Deduplication-enabled replication, known simply as replication — the virtual cartridge on the source VLS is deduplicated against the virtual cartridge on the target VLS. In this manner, only data that has changed is transmitted over the network to the target VLS. This mode requires that deduplication is licensed and enabled on both the source and the target VLS.
Using automigration, you can share a single destination library across multiple virtual libraries (maximum of 20 drives on the physical libraries), or configure multiple destination libraries to be used in a single virtual library. CAUTION: Automigration only supports destination libraries that have homogeneous drive types; for example, all drives are LTO-2. A mixture of drive types in the destination library, such as LTO-3 and LTO-2, is not supported.
1. Select the Automigration/Replication tab. The Summary for All Managed Destination Libraries screen displays.
2. Select Manage LAN/WAN Replication Library from the task bar.
3. Enter the name or IP address of the host containing the LAN/WAN replication target you just created.
4. Select Submit.
5. On the next screen, select the LAN/WAN replication target to manage.
6. Enter the password you created for that target.
7. Select Submit. The LAN/WAN replication target is now associated with the source.
2. From the task bar, select Unmanage Library. The Unmanage Destination Library screen displays, showing all managed libraries. If there are no managed libraries, the system will return the message: “There are no managed libraries”. 3. If the library you wish to unmanage is not already selected, select it now. NOTE: You can only select one library to unmanage at a time. To unmanage additional libraries, repeat the procedure for each library to unmanage. 4. Select Submit.
5. Check the status of the mirror by using a Command View VLS Console and viewing the destination tapes in Slots in the expanded list under Destination Library.
In order to restore from a destination cartridge, either load it into a physical drive that is visible to the backup application, or perform a Load for Restore. Load for Restore copies the destination tape back into the virtual cartridge so that the backup application can then restore from the virtual cartridge.
NOTE: The sizing factor is crucial to creating the right size virtual tapes. When determining the sizing factor of the virtual tapes, keep in mind the following:
• The sizing factor should be based on the size of the physical tape or the tape type if possible (an illustration follows this note). Common tape types and their sizes are: LTO-1: 100 GB, LTO-2: 200 GB, LTO-3: 400 GB, LTO-4: 800 GB, DLT-IV: 80 GB, DLT-VS1: 160 GB, SDLT-I: 320 GB, SDLT-II: 600 GB.
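As an illustration of this guidance (the drive type is hypothetical, not a requirement of this guide): if the destination library uses LTO-3 media, creating virtual cartridges of 400 GB or less lets each virtual cartridge fit onto a single physical tape during the copy.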
LAN/WAN libraries: • Priority — the priority this echo copy pool takes over other copy pools during the backup window. This can be High, Medium, or Low. • Deduplication Timeout (only if you selected deduplicated replication on the previous screen) — if the cartridge fails to deduplicate within the timeout limit you set, the entire cartridge is copied over in non-deduplicated mode. • Send notification if cartridge not replicated in — the copy pool threshold.
NOTE: The tape is only created if a header exists and is legible by the system. Restoring from a SAN Physical Cartridge If the destination tape is still loaded in the destination library, then its matching virtual cartridge will still be present in the virtual library. In this case, you can simply restore from the virtual cartridge using the backup application. If the destination tape has been ejected from the destination library, you must use one of the following options: 1.
Restoring from a LAN/WAN Virtual Cartridge From Command View VLS: 1. Click the Automigration/Replication tab. 2. Under Destination Libraries in the navigation tree, expand the library you want to restore. 3. From the navigation tree, select Slots. 4. On the task bar, select Restore Media. 5. Select the slot numbers you wish to restore. 6. Click Submit. The Restore Media screen refreshes with a message that indicates the restores were successful.
4. From the task bar, select Load Media for Overwrite to open the Load Media for Overwrite screen.
5. From the drop-down box, select the copy pool you want to load the media into.
6. For all mailslots, the destination slots are automatically populated with the first available slots. To keep the automatic assignment, skip to Step 10. To assign the destination slots manually, continue to the next step. If the Destination Slot Number for each mailslot displays "None," the copy pool you selected is full.
Viewing Automigration Cartridges in the Firesafe When a cartridge is ejected from the destination library, its matching virtual cartridge is automatically ejected out of the virtual library and moved into the device's firesafe. The firesafe acts as a virtual offline location for the automigration virtual cartridges.
2. In the Copy Pool column, select any instance of the appropriate pool. The ECHO COPY POOL DETAILS screen displays.
3. Select Delete in the taskbar.
4. Select OK from the dialog box. The copy pool details screen refreshes and the message, "The slot map was successfully deleted from [copy pool name]" displays.
NOTE: If a tape is in a newly unmapped section of a library, the tape will be moved to the firesafe.
4. On the Summary for Copy Pools screen, select the echo copy pool of interest to open the Echo Copy Pool Details screen for that copy pool.
5. From the task bar, select Edit Slot Maps.
6. On the Edit Slot Maps screen, select Delete corresponding to the slot map to remove.
7. Select OK in the dialog box. The copy pool details screen refreshes and the message, "The slot map was successfully deleted from [copy pool name]" displays.
To add slot mapping to any copy pool which does not have slots mapped:
Deleting a Copy Pool You should delete a copy pool when you no longer need it. To delete a copy pool: 1. Select Copy Pools under the appropriate library from the navigation tree. 2. Select the copy pool on the Summary for Copy Pools screen to open the Echo Copy Details screen. 3. From the task bar, select Delete. 4. Select OK from the dialog box. The Copy Pools screen is refreshed and the deleted pool is no longer listed. NOTE: Deleting a copy pool moves the associated virtual tapes into the firesafe.
the options are Connected, Configuration Out of Sync, and Unreachable. The screen also provides the name and model of the library, number of simultaneous transfers, number of slots, management URL, and availability. 3. Expand the destination library in the navigation tree to access more specific information. Cartridge Status and Slot Details To view the status of the destination library's slots, expand the library in the navigation tree and select Slots.
Status message (Pool type): Description
Export Preprocessing* (Echo Copy): Gathering deduplication instructions needed for replication using tape transport.
Exporting* (Echo Copy): Copying content from the source cartridge onto a physical tape.
Partially Exported* (Echo Copy): Copying content from the source cartridge will continue on another physical tape.
Export In Use* (Echo Copy): Waiting for the remaining cartridges in the pool to finish exporting.
Status message (Pool type): Description
Ready For Import (Tape Import): Tape in an Import pool slot that is in the catalog.
Importing (Tape Import): Actively copying data from the physical tape to the target virtual cartridge.
Import Complete (Tape Import): All data has been copied from the physical tape to the target virtual cartridges. Signal to tape operator to remove the tape from the physical library.
Unloaded Completed Tape (Tape Import): All data has been copied and the tape has been ejected.
Forcing a Replication Job The Copy Now task allows you to schedule a replication (or automigration) job that forces the cartridge to replicate immediately regardless of whether or not the cartridge is within the policy window. You can only perform this task when the cartridge is holding in the Out of Synch state. In Command View VLS: 1. On the Automigration/Replication tab, expand the destination library in the navigation tree and select Slots to open the Summary for Slots screen. 2.
5. Hover over the Select Slot link for the first slot you want to edit. The screen displays a list of the available destination slots. Select a slot number from the list. After you select a slot from the available destination slots, that slot no longer appears in the list.
6. Hover over each Select Slot link until you have selected a destination slot for each slot you want to edit.
7. Click Next Step. The screen displays a confirmation.
8. Click Move.
Restarting Automigration/Replication Services If you replace a tape drive on your physical tape library, you must restart automigration/replication services afterwards. This resets the services to acknowledge the new tape drive. 1. In Command View VLS, select the System tab. 2. In the navigation tree, select Chassis. 3. Under Maintenance Tasks, select System Maintenance. 4. In the task bar, select Restart Automigration/Replication Services. The screen displays a warning. 5. Select Restart.
4. Select Submit. The SUMMARY FOR ALL DESTINATION LIBRARIES screen refreshes, along with the message, “File [file name] successfully uploaded.” Deploying SAN Destination Library or Tape Drive Firmware After uploading the firmware for a physical library or disk drive on a destination library (see Uploading SAN Destination Library or Tape Drive Firmware (page 71)), install the firmware: 1. Place the appropriate library offline (see Placing a Library Offline or Online (page 69)). 2.
2. From the task bar (in the Destination Library Details window), select Library Assessment Test. A dialog box displays to confirm the selection.
3. From the dialog box, select OK. The Library Assessment Test Results window displays.
4. To view the results of the assessment, select Download Library Assessment Test Results.
5. Select Finish to return to the Destination Library Details window.
6. Place the library online (see Placing a Library Offline or Online (page 69)).
LAN/WAN Destination Library Operations The following sections describe the destination library operations for LAN/WAN libraries available to the user.
4. From the Echo Copy Pool Details screen, you can:
• Select Initiate Tape Transport in the task bar to restart the export.
• Select Cancel Tape Transport in the task bar to cancel the process and place the echo copy pool into the "Ready" state.

Importing Data from Physical Tapes for Tape Initialization
1. Complete the export process. See Exporting Data to Physical tapes for Tape Initialization (page 74).
1. On the Automigration/Replication tab, select "Not migrated in Deduplication timeout limit/Forced Copies" from the Summary of All Cartridges screen.
2. Select Forced Non Deduplicated Copy from the task bar.
3. On the Forced Non Deduplicated Copy screen, select the cartridges you want to replicate.
4. Select Submit. The system immediately registers the selected cartridges into the queue to replicate the whole cartridge when resources are available regardless of the policy windows.
NOTE: If you selected your library from the navigation tree, this pull-down field does not appear because you have already selected the appropriate library.
7. Enter a start slot and an end slot for the copy pool from within the available ranges.
8. Enter the number of maximum simultaneous transfers permitted. This allows you to limit the replication activity on that target. This field defaults to the maximum number of transfers allowed by the VLS.
Summary for Cartridges screens. The state can be "unknown" when only the header transferred to the tape, when nothing transferred to the tape, during the transfer of data, or when a data transfer has failed.

Setting the Global LAN/WAN Replication Target Configuration
1. Click the Automigration/Replication tab.
2. In the navigation tree, expand Configuration Summary.
3. Select Global LAN/WAN Target Configuration.
5. Select OK from the dialog box. The LAN/WAN Replication Targets screen displays with the deleted target removed from the list. Changing the LAN/WAN Replication Target Password You may need to change the LAN/WAN replication target's password in the event of a security breach. The following steps will stop all communication between the source and the target, and then re-establish secure communication. 1. Change the password on the LAN/WAN replication target.
Cartridges in this category can also be listed in the following categories: Mirror Failed, Pending, Mirror Active, In Use/Deduplicating, and Waiting for Policy Window. This category displays a green (no cartridges in this category) or red (one or more cartridges in this category) icon. • Mirror failed — Corrective action needed — contains cartridges for which the copy to the mirror has failed.
• Cancel Job — cancel one or more Mirror jobs. See (page 76). • Resume Job — resume one or more paused Mirror jobs. See (page 76). From the summary screen you can also click a specific barcode or an echo copy pool to see the details of that selection. Cartridge Details View the details of a particular cartridge by clicking the barcode from the cartridge summary screen. The details include the last time the cartridge was in the In Sync state, the current physical and logical size, and the job history.
• The node the job is running on • Status – active or pending • Expected completion time • The drive the job is running on Change the number of rows displaying on the screen using the Page Size menu. You can also use the Filter by View menu to display a specific, predefined set of information; see Configuring Automigration Job Reports (page 82) to create the views. After making a choice from one or both of these menus, click Refresh.
• Transfer rate • Source and target locations • The node the job is running on • Completion status • Compression (yes or no) The performance graph maps the MB per second against the time it took the jobs to complete. To change the jobs included in the job history report, select a View previously created on the Configuration screen. If you haven't set up any views: 1. Select a location category. The options are SAN, LAN/WAN, All locations, and predefined Views. 2.
2. In the navigation tree, expand Configuration Summary.
3. Select GUI Configuration from the expanded list.
4. In the Default Number of Rows in Slot/Cartridge Table box, enter the number of table rows you want to display on the slot and cartridge summary screens. Changing the number of rows to display from the actual display screens does not change the default value added here.
5. In the Default Number of Rows in Job box, enter the number of table rows you want to display on the job screens.
6 Deduplication

Deduplication is the functionality in which only a single copy of a data block is stored on a device. Duplicate information is removed, allowing you to store more data in a given amount of space and restore data using lower bandwidth links. The HP virtual library system uses Accelerated deduplication.
NOTE: The deduplication feature is only available on systems running VLS software version 3.0 or later.
entire backup job and to prevent too many backup jobs from piling up on the same cartridge, but small enough that you are not wasting overall cartridge space.
• Additional nodes — For systems with long backup windows, you may want to include additional nodes to speed up the post-processing deduplication. A VLS system using deduplication can support up to six nodes.
to result in the best deduplication ratio is used. Depending on your current setting, the options are: • Backup — Useful when file-level differencing is less space efficient (for example, if the file server is full of very small files). • File — Useful for file servers.
Viewing Deduplication Statistics and Reports In Command View VLS, you can view statistics on the deduplication process by summary, backup report, cartridge report, or system capacity. Deduplication Summary The Deduplication Summary displays a graph depicting the storage savings achieved with data that has been fully deduplicated. 1. Select the System tab. 2. Select Chassis on the navigation tree to expand it. 3. Select Deduplication. The deduplication summary displays.
Delta-diff in Process — the backup has identified another version of itself to difference against and is now running differencing to identify the duplicate data between the two versions. With multi-stream backups, this process may take multiple tries (going back to "Waiting for Next Backup" state each time) until the differencing locates the correct stream.
NOTE: If a cartridge is full and all jobs on it have been delta-differenced except for one job that is waiting for another backup, you can have the cartridge reclaimed by temporarily disabling the one remaining backup job. Disabling the backup type disables all instances of that backup type on all cartridges that have not yet been delta-differenced. When you re-enable the backup type, it allows deduplication for future instances of that backup type. Deduplication System Capacity (version 3.4.
7 Operation

This section describes how to power on and power off the VLS nodes and arrays.

Powering On VLS Arrays
The order in which you power up the disk array enclosures in an array is important. Power on the base enclosure last in order to ensure that the disks in the capacity enclosures have enough time to spin completely before being scanned by the RAID controllers in the base enclosure.
CAUTION: source.
Figure 2 Base and Capacity Enclosure Front Panel LED Status – Normal Operation
Item 1, Hard drives: Status (blue or yellow) LED is off or blue
Item 2, Hard drives: Power/Activity (green) LED is on or blinking
Item 3, Right ear: Fault/Service Required LED is off
Item 4, Right ear: Power On/OK LED is on
NOTE: The hard drive LEDs may not immediately illuminate when the enclosure is powered on. The LEDs illuminate after the hard drives are configured by the VLS firmware.
Figure 4 VLS9200 Base Enclosure Rear Panel LED Status – Normal Operation
Item 1, Power module: Voltage/Fan Fault/Service Required (amber) LED is off
Item 2, Power module: Input Source Power Good (green) LED is on
Item 3, RAID controller: Host 2/4 Gb FC Link Status/Link Activity (green) LED is on, if link speed is 2 or 4 Gbps
Item 4, RAID controller: Host 8 Gb FC Link Status/Link Activity (green) LED is on, if link speed is 8 Gbps
Item 5, RAID controller: Network Port Activity (green) LED is on
Item 6, RAID controller: Network Port Link Status (green) LED is on
Item 7, RAID controller: OK to Remove (blue) LED is off
Item 8, RAID controller: Unit Locator (white) LED is off
(continued from the preceding LED status figure)
Item 5: Power On/OK (green) LED is on
Item 6: SAS Out port status (green) LED is on

Figure 6 VLS9200 Capacity Enclosure Rear Panel LED Status – Normal Operation
Item 1, Power module: Voltage/Fan Fault/Service Required (amber) LED is off
Item 2, Power module: Input Source Power Good (green) LED is on
Item 3, Expansion controller: Unit Locator (white) LED is off
Item 4, Expansion controller: OK to Remove (blue) LED is off
Item 5, Expansion controller: FRU OK (green) LED is on
Item 6, Expansion controller: Fault/Service Required (amber) LED is off
Item 7, Expansion controller: SAS In Port Status (green) LED is on
Item 8, Expansion controller: SAS Out Port Status (green) LED is on
Figure 7 VLS Node LED Status During Normal Operation
Item 1 (Internal health LED): LED is green.
Item 2 (External health LED, power supply): LED is green.
Item 3 (NIC 1 link LED): LED is green if primary node; LED is off if secondary node.
Item 4 (NIC 2 link LED): LED is green.
Item 5 (Power supply LEDs): LED is green.
7. Rebooting the system is complete when you receive the "Initializing node#", then "Initializing for node# completed." messages.
Powering Off the System
WARNING! To reduce the risk of personal injury, electric shock, or damage to the equipment, remove the power cord to remove power from the node before removing the access panel. The front panel Power On/Standby button does not completely shut off system power. Portions of the power supply and some internal circuitry remain active until AC power is removed.
From the VLS CLI:
1. Verify that any backup or restore operation has completed and that the VLS is idle.
NOTE: It is not necessary to power off a disk array enclosure when replacing a power module, hard drive, RAID controller, or expansion controller. To power off a VLS array: 1. Power off the system. See “Powering Off the System” (page 96). 2. Turn both power switches on the rear of each disk array enclosure off. Some power supply models do not have a power switch; in this case, power down the enclosure by unplugging the power cord from the enclosure. Always power off the base enclosure first.
8 User Interfaces

This section describes the three user interfaces (UIs) that can be used to configure, manage, and monitor the VLS over the web, remotely over the LAN, or through a serial connection. It also provides instructions on how to open and close a connection to the VLS for each type of user interface.

User Interface Requirements
The VLS user interfaces table lists the VLS user interfaces and their requirements. Of the three user interfaces, Command View VLS should be used in most circumstances.
• Installing VLS firmware updates
• Saving and restoring VLS network settings and virtual library configurations
• Restarting VLS device emulations and Command View VLS
• Viewing and saving VLS trace log files
Command View VLS is installed on the VLS and communicates through the LAN. Users can open a Command View VLS session from a web browser on the LAN, or HP Systems Insight Manager.

Window Regions
Command View VLS windows consist of five regions. Not all regions are displayed on all windows.
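To open a session, enter the VLS address in a web browser on the LAN. As an illustration only (the host name reuses the sample name from the network configuration section and is not a required value), the address takes the form:
https://vlsexamp.xyz.com
or https://<VLS IP address>.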
NOTE: Entering “http://” instead of the above URL automatically redirects you to the secure “https://” connection. All communications are over a secure connection. 3. If a Security Alert window opens and prompts you to accept the Secure Sockets Layer (SSL) certificate, install the SSL certificate as described in Installing the SSL Certificate into your Web Browser (page 100).
3. Select View certificates. A Certificate window opens.
4. Select Install Certificate... to launch the Certificate Wizard.
5. Select Next.
6. Make sure that Automatically select the certificate store based on the type of certificate (the default) is chosen and select Next.
7. Select Finish. A Security Window opens.
8. Select Yes.
9. Select OK or Finish on each window that displays until the Command View VLS login window displays.

Restarting Command View VLS
To restart Command View VLS:
A secure shell or serial session provides the following (an example of opening a session follows this list):
• Setting the VLS network settings
• Configuration and management of VLS virtual devices (libraries and tape drives) and cartridges
• Changing of the default Fibre Channel host port settings
• Viewing and deleting VLS notification alerts
• Configuring VLS mail and SNMP notification alert settings
• Editing VLS account passwords
• Enabling and disabling storage capacity oversubscription
• Viewing VLS hardware status
• Saving and restoring VLS network settings and virtual library configurations
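For reference, a secure shell session is opened from any SSH client on the LAN; a minimal sketch (the account name and host name are illustrative, not values mandated by this guide) is:
ssh administrator@vlsexamp.xyz.com
Log in with the administrator password, then run CLI commands such as showConfig at the prompt.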
VLS Critical Diagnostics Services VLS Critical Diagnostics Services is a mini HTTP service built into VLS to provide the status and details of the hardware, console access, and a support ticket service so you can check the VLS vitals before the main GUI is running. You can also use it to examine the VLS if it hangs during a bootup and does not come all the way up.
After completion, the ticket is listed under Available Tickets; click Download to download the ticket or Delete to delete it.
• Start Web Console: Enter the service (administrator) password. You can use the web console just as you use a serial session when connected to the serial port of the VLS node. The web console may be slower than a serial session depending on the condition of the network.
9 Configuration

This section describes how to configure and manage the VLS network settings, user preferences, Fibre Channel host ports (optional), virtual libraries, tape drives, and cartridges.

Setting the Network Settings
Before you can open a Command View VLS or secure shell session, set the network settings.
Figure 8 VLS discovery utility — main window 3. To visually identify a device listed, select the device from the list and click Beacon. This will illuminate an LED on the device for the specified length of time. In the case of the VLS, the UID LED button on the VLS node illuminates. 4. Select the VLS from the list of devices and click Configure. The Device Configuration window opens. 5. Leave the default host name or enter a new host name in the Host Name box.
2. To see the current configuration settings, at the prompt enter:
showConfig
3. Set each desired configuration value by entering:
setConfigValue <-tag> [value]
where <-tag> can be any of the following:
-host      Host name (unqualified), such as vlsexamp
-domain    DNS domain name, such as xyz.com
-fullhost  Fully qualified name, such as vlsexamp.xyz.com
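For example, a minimal sequence using the sample names above (the values are illustrative only) would be:
setConfigValue -host vlsexamp
setConfigValue -domain xyz.com
showConfig
Running showConfig again afterward confirms that the new values were applied.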
4. Current network configuration, NTP settings, and time zone settings are displayed. Modify these as needed (Figure 9 (page 108)). Figure 9 Set Network Configuration Wizard window 5. Click Finish to apply the settings. NOTE: The system automatically reboots after any change. NOTE: If you need to clear the DNS completely, clear the Use DHCP checkbox and enter 0.0.0.0 for both the primary and secondary DNS server addresses.
10. Enter the warranty serial number in the Warranty Serial Number dialog box. This is displayed on the Identity tab and is saved and restored as part of the VLS device configuration. 11. Click Apply Settings. Editing the Default Fibre Channel Host Port Settings Only edit the Fibre Channel host port settings if you do not want to use the default settings, if some system problem is occurring, or if the “AUTO” setting is not working properly.
Enabling and Disabling Oversubscription
To enable oversubscription:
In Command View VLS:
1. Select the System tab.
2. Select Chassis in the navigation tree. The chassis details window opens.
3. In the Oversubscription section, select Enabled.
4. The Notify when storage capacity is [x] % Full box defaults to 90. You may change the value or leave it at 90. This percentage value is the threshold of storage space consumed that when reached triggers a storage capacity notification alert.
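As a worked example (the capacity figure is hypothetical): with the default 90% threshold on a system presenting 40 TB of configured storage, the storage capacity notification alert is triggered when 36 TB has been consumed.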
Reclaiming Storage Space The Reclaim Space task appears on the Chassis status screen when the storage capacity consumed reaches the user-defined threshold (or the default of 90%). This allows you to schedule reclamation of the additional storage you make available by erasing cartridges. First erase cartridges from your backup application, then follow the procedure below. From Command View VLS: 1. On the System tab, select Chassis from the navigation screen to open the Chassis status screen. 2.
Operating System LUN Requirements and Restrictions
Most operating systems require that each VLS Fibre Channel host port connected to the SAN has a virtual device with the LUN number LUN0 and no gaps in the LUN numbering (LUN0, LUN1, LUN2, and so on). If the operating system does not see a LUN0 on a VLS Fibre Channel host port when it is scanning for new hardware on the SAN, it will stop looking for LUNs on that port and erroneously report that there are no LUNs (virtual devices) on that port.
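For example (the device layout is illustrative): a host port presenting a virtual library robot at LUN0 and two virtual tape drives at LUN1 and LUN2 meets this requirement, whereas presenting the same devices starting at LUN1, or leaving a gap such as LUN0, LUN2, LUN3, can cause the operating system to stop scanning and miss devices on that port.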
Setting the Default LUN Mapping You can set a global default to disable or enable LUN mapping. The setting you choose will apply to every new host that you add to the VLS. • All Devices (LUN mapping disabled) – The default. The VLS allows all hosts connected to the VLS through the SAN to access all virtual devices configured on the VLS.
5. Select a library from the Choose a Library list to view its mapped devices. The window refreshes to show the appropriate list.
6. Use the View By list to narrow the list of devices based on the node.
7. Select the devices you want to map to a particular host.
8. Select the host in the Choose hosts list at the bottom of the window. These hosts currently do not have any of the devices shown mapped to them. You can select multiple hosts using Ctrl+click.
9. Select Map next to Choose hosts.
NOTE: After you map or unmap the virtual devices, the VLS automatically reassigns a logical unit number (LUN) to each virtual library and tape drive created on the VLS to ensure that the virtual device LUN numbering meets the operating system LUN requirements. Setting Up the Hosts You can configure the hosts in Command View VLS. You will make all of the changes to the hosts from the Host Setup window. To open the Host Setup window: 1. Select the System tab. 2. Expand Chassis in the navigation tree. 3.
Dual Port Virtual Devices When creating a library robot LUN or tape drive LUNs, you can present the virtual devices to a pair of host ports rather than just one port. Both ports must be on the same node. The Port Mapping list displays selections for each individual port plus possible port pairs (for example: 0, 1, 0&1). The benefit of dual port virtual devices is that they are still accessible when one path fails.
Figure 11 Create Virtual Library Wizard window (2 of 12)
7. Change the library name if you prefer. You can use letters, numbers, and underscores (no blank spaces).
8. Enter the maximum number of cartridge slots that may be added to the library in the Maximum Slots box. The default value in the Maximum Slots box is based on the physical tape library you selected.
Editing a Virtual Library To edit the slots and drives of a virtual library, from Command View VLS: 1. Click the System tab. 2. Expand Chassis in the navigation tree. 3. Expand Virtual Libraries in the navigation tree. 4. Select the virtual library you want to edit. 5. Select Edit Virtual Library in the task bar. 6. On the Library Parameters screen, change the values as appropriate. You can change the maximum number of slots, maximum number of ports, and maximum number of drives. 7. Select Next Step.
8. Choose one of the following options: • To perform LUN mapping for the virtual tape drive, click Map LUNs and proceed to “LUN Mapping” (page 112) for further instructions. • To create more tape drives, click Create More Tape Drives. • To add cartridges to the virtual library, click Create Cartridges and proceed to “Creating Cartridges” (page 119). • To exit the wizard, click Cancel. At this point the library and tape drives have been created, but the library does not contain any cartridges.
NOTE: NetBackup has a total barcode limit of eight characters. HP Data Protector has a total barcode limit of 16 characters. Check your user guide for other backup applications.
5. Click Next Step.
6. Select the type of physical cartridge to emulate.
7. Click Next Step.
8. Enter the number of cartridges and the cartridge size in the appropriate boxes (Figure 13 (page 120)). The default number of cartridges is based on the maximum number of slots configured for the virtual library.
NOTE: You cannot destroy a library that is currently being accessed by a backup application. When a virtual library is destroyed, all the tape drives associated with the library are also destroyed. The cartridges in the virtual library, however, are not destroyed. They are moved to the Firesafe where they are stored until you either destroy them or associate them with a virtual library. See “Managing Cartridges” (page 127). To destroy (delete) a virtual library, from Command View VLS: 1.
10 Management

This section details the VLS management procedures such as changing the account passwords, managing high availability, and saving configuration settings.

Changing the Account Passwords
To change the administrator and/or user account password, from Command View VLS:
1. Click the System tab.
2. Select Chassis from the navigation tree.
3. Click Edit Accounts under Maintenance Tasks. The Edit Accounts window opens.
4. Enter the current password in the Old Password box.
LUN Path Failover LUN path failover allows the VLS to automatically reroute data traffic usually assigned to one (preferred) path to another (secondary) path when the preferred path fails. Path status is shown in Command View VLS under Storage LUN Details. A failover is indicated in the Storage LUN Details screen by the yellow warning icon and the notification message: Fibre Channel Path Failed Over to {#:#:#:#}.
When a failure occurs, repair the failure. In most cases, the system will automatically recognize that the repair is complete and restore the path or paths without having to reboot the system; however, you may need to reboot the system if the repair includes installing a new USB LAN adapter. Managing Disk Arrays Some VLS firmware versions allow you to manage the disk arrays.
• Reconstructing — The virtual disk is being reconstructed. • Verifying — The virtual disk is being verified. • VDisk Scrubbing — The virtual disk is being scrubbed. Deleting Unused Virtual Disks On the Manage Virtual Disks screen, you can delete an unused virtual disk. By default, it lists the virtual disks in all of the disk arrays. 1. Select the virtual disks you want to delete. To narrow the list of disks displayed, use the Select Disk Array list, then select Update. 2. 3. 4.
1. Navigate to the Manage Virtual Disks screen (see "Managing Disk Arrays" (page 124)).
2. Select Update Firmware from the task bar.
3. Select the disks you want to update to the new firmware. To narrow the list of disks displayed by disk array, enclosure, or revision, use the Select Disks list, then select Update.
4. Select Submit. A warning message displays.
5. Review the warning and select Continue. The Update Firmware screen displays.
3. Select the RAID Mode.
• Default – No Hot-spare: the system uses a 10+2 configuration where ten disks are part of the available virtual disks and two are parity disks.
• Hot-spare: the system uses a 9+2+1 configuration where nine disks are part of the available virtual disks, two are parity disks, and one is a hot-spare disk.
NOTE: Using the hot-spare mode reduces the VLS capacity and performance by 10%.
4. Click Submit. The screen displays a warning.
5. Click Submit.
2. Select Cartridges in the navigation tree. The Cartridge Details window opens.
3. Select the number of cartridges to display from the Cartridges per Page list beside the group of cartridges you wish to edit. Options are 10, 50, 100 (default), 500, or 1024 cartridges.
4. Click the View button beside the group of cartridges you want to edit. If viewing by barcode, enter a cartridge range to view specific cartridges, or leave the default values to view all the cartridges with that barcode.
• Moving a source cartridge from its existing library to a different library or to the firesafe results in the target cartridge disappearing from the echo copy pool and moving to the firesafe. • Moving a target cartridge from its existing library to a different library or to the firesafe, or to a different slot that is not part of the echo copy pool, does not move the source cartridge.
1. On the Cartridge Details screen, select all the cartridges that you want to delete and erase.
2. If you want to use Secure Erasure, select the With Secure Erasure option. This option is only available if a Secure Erasure license is installed.
3. Click Go. The Destroy Cartridge wizard opens and requests confirmation.
4. Click Yes to continue. (You can click No or Cancel to return to the Cartridge Details screen without deleting any cartridges.)
To add or delete a barcode template, from Command View VLS: 1. Click the System tab. 2. Select Cartridges in the navigation tree. 3. Click Add/Remove Barcode Templates in the task bar. The Add/Remove Barcode Templates window opens. 4. To delete a barcode template, click the Remove button for the barcode template. The window refreshes when the deletion operation is finished. 5. To add a barcode template: a. Enter the barcode prefix (one to five alpha characters) in the Barcode Prefix box. b.
• After adding a virtual tape drive whose default LUN number is not consecutive with those of the other virtual tape drives in the same library
• After deleting external array LUNs.
CAUTION: Restarting VLS device emulations changes the default virtual device LUN numbers if there is a gap in the LUN numbering, or if there is a tape drive whose LUN number is not consecutive with the other tape drives in the same library.
Saving Configuration Settings NOTE: The VLS firmware ensures a persistent VLS serial number and Fibre Channel port WWPNs, so that in the event of any hardware failure and replacement (such as the system board or Fibre Channel host bus adapter card), the VLS still appears exactly the same to the external SAN. It does this by generating a VLS serial number and Fibre Channel port WWPNs at first boot, which are based on the system board's MAC address.
11 Monitoring This section describes the various tools you can use to monitor the status of the VLS hardware and virtual devices (libraries and tape drives) and how to use them. Status Information in the Status Pane Status information for the VLS hardware components and virtual devices is displayed in Command View VLS on the status pane when an individual hardware component or virtual device is selected in the navigation tree.
Figure 14 Device status icon in the status banner A device status icon can be one of four states: Unknown—A component's operating condition is unknown. Contact HP Technical Support. Normal—All components within the VLS are operating normally. Warning—A component's operating condition has degraded. Error—A component has failed. Navigation Tree Icon An icon appears just to the left of objects in the navigation tree when an unknown, warning, or error condition is present with a component.
A notification alert can be one of four states: Unknown—The operating condition of the component or component part is unknown. Contact HP Technical Support. Info—The component or component part's operating condition has improved to good (OK). Warning—The component or component part's operating condition has degraded. Error—The component or component part has failed. Command View VLS To view the current and historical notification alerts for all the VLS hardware components: 1. Click the Notifications tab.
Edit the Email Settings Email notification is sent to the persons you include on the email distribution list in the email settings. You specify the email notification alert severity and format settings for each person on the distribution list. To create an email distribution list for notification alerts, add an email address to the list, or remove an email address from the list: 1. Log in to Command View VLS as the administrator. See Opening a Command View VLS Session from a Web Browser (page 99). 2.
6. To test an email address entry, click Test Email. If the test message is not received at the email address, check the email server settings.
SNMP Notification
To receive VLS notification alerts on a management console, you must edit the SNMP settings to specify the management consoles that are to receive VLS SNMP traps.
6. Test the system using the new community strings to ensure your changes were applied.
SMI-S Support
SMI-S support allows applications attached to the VLS to detect the virtual library configuration and allows some users to change the state of the VLS. To protect access to the VLS via the SMI-S agent and to provide a higher level of security for the device, there are two access categories: • Read-only access allows you to view SMI-S objects but not change them.
The Capacity Manager screens are designed to provide quick information for monitoring and diagnostic purposes. The overall data reduction (compression plus optional deduplication) of your VLS is displayed by the ratio provided under the various views. Capacity Manager screens are accessible to both the administrator and guest users. The Capacity Manager provides storage statistics based on the existing backup data on your VLS.
Table 14 System Capacity Table Total Physical Capacity Total physical storage capacity purchased and installed on the system. This is the sum of all LUN capacity in the pool, minus the space reserved for formatting overhead. Reserved for System The space required for system overhead and metadata. Storage Pool 1 or the FireSafe can have more space reserved than other storage pools due to Deduplication metadata that can be up to 2 TB. Usable Capacity The physical storage capacity available for user data.
Table 15 Storage Pool Capacity Table (continued) Reserved for System The space required for system overhead and metadata. Storage Pool 1 or the FireSafe can have more space reserved than other storage pools due to Deduplication metadata that can be up to 2 TB. Usable Capacity The physical storage capacity available for user data. This is the total Physical Capacity less the space reserved for the system. Logical Data The size of all backup data currently retained and visible to the backup application.
Table 16 Storage Pool Capacity Table (continued) reserved than other storage pools due to Deduplication metadata that can be up to 2 TB. Usable Capacity The physical storage capacity available for user data. This is the total Physical Capacity less the space reserved for the system. Logical Data The size of all backup data currently retained and visible to the backup application. Used Capacity The physical storage used for data whether or not it is deduplicated.
Figure 21 Library Capacity Screen The Library Capacity table lists the following capacity values: Table 18 Library Capacity Table Allocated Capacity Total storage capacity allocated to the Library. This is the product of the number and size of the cartridges in the Library. This value might be oversubscribed. Logical Data The size of all backup data currently retained and visible to the backup application. Used Capacity The physical storage used for data whether or not it is deduplicated.
• On the information screen, click the Barcode name field link. The Cartridge Capacity screen displays capacity information for this cartridge and a graphical representation showing the Logical and Used storage capacity. Figure 22 Cartridge Capacity Screen The Cartridge Capacity table lists the following capacity values: Table 20 Cartridge Capacity Table Allocated Capacity Total storage capacity allocated to the Cartridge.
The Libraries screen displays the list of libraries and FireSafe capacity utilization in your VLS. Figure 23 Libraries Screen The Libraries Capacity table lists the following capacity values: Table 22 Libraries Capacity Table Library The name of the library or FireSafe. This is a link to display capacity information about the library or FireSafe. Allocated Capacity Total storage capacity allocated to the Library. This is the product of the number and size of the cartridges in the Library.
Table 23 Cartridges Capacity Table Barcode The barcode of the cartridges in the library. The barcode name is a link to display the capacity information about the cartridge. Allocated Capacity Total storage capacity allocated to the Cartridge. Logical Data The size of all backup data currently retained and visible to the backup application. Used Capacity The physical storage capacity consumed in the cartridge. Ratio The ratio of Logical Data to Used Capacity.
3. Using the >> button, move the devices of interest into the Selected Devices box. These are the devices that will display in the report. You can use the << button to remove devices from the Selected Devices box.
4. Select another device category and repeat steps 2 and 3.
5. Enter a name for this view in the Create a New View field.
6. Select Create View. This view is now available as a selection in the Pre-defined Views list on the Current Status and Performance History tabs.
Show the performance of: • All Nodes • Pre-defined Views Select one of the views from the list. You can create these views on the Configuration tab.
Item Data
8 Time stamp
9 Ignore this field
SAN Health
The SAN Health tab displays information on the number and types of errors encountered on the SAN. See page 151. To export the CSV data, in the Export Data section of the screen enter the number of days to include in the report and click Export. When you open the SAN Health tab, the graph at the bottom of the screen displays information for the top 16 locations from all location categories with the most errors.
Figure 26 SAN Health tab Logical Capacity This tab displays different views of the current logical capacity usage for an individual library or the entire VLS system. Logical capacity is the amount of data the backup application wrote, while the physical capacity is the amount of data actually stored on the disk. Select the Show Details link in the first section to display the breakdown of the logical and physical capacity and the deduplication ratio.
you show four days of data the graphs show one data point for every four-hour period. Use the Advanced Setting list to indicate which data point out of that four-hour period is used: 3. 4. • First data point — the first data point for each time period. • Maximum data point — the data point with the highest value for each time period. • Minimum data point — the data point with the lowest value for each time period.
Using the Workload Assessment Templates Deleting a workload assessment template: 1. Select the template from the template summary screen. 2. Select Delete Template. The template is removed from the template summary list. Adding a new workload assessment template: 1. Select Add New Template. 2. Enter the template name and all other values. 3. For each day of the week, select the backup type and the start time and duration in 24–hour time. 4. Select Create Template.
3. Select Update Graphs. The graphs update to reflect the data options you chose.
Deduplication Job History
This tab displays the count of both active and pending jobs over time to reveal trends in the deduplication jobs, such as when the job load is usually light. This is useful information for job scheduling. To export the replication traffic CSV data, enter the number of days to include in the report and select Export. (See Exporting CSV Data (page 147) for more information.)
ISV~~~ See the HP VLS Solutions Guide for import example scripts. 5. 6. 7. 8. 9. • Physical Capacity Usage — includes the total physical capacity and the physical capacity used by individual libraries and storage pools. • Logical Capacity Usage — includes the total logical capacity and the logical capacity used by individual libraries.
• Number of Concurrent Jobs — the number of read or write operations (called streams) running at the same time. The larger the number, the more the storage system is stressed; you can run up to six at once.
NOTE: A Background job can only involve one stream unless multiple storage pools are present.
• Notification Generation Options — the notifications displayed on the Notifications tab. Choose to generate notifications per time period (in hours and minutes) or per number of job iterations.
3. If you want to choose which nodes will be tested, follow the steps below. Otherwise, all available nodes are selected by default. a. Click the Select Nodes link. b. Select the nodes you want to test. c. Click Done.
4. If you want to choose by barcodes which cartridges to read, follow the steps below. Otherwise, all cartridges are read by default. a. Select Read by BarCode. b. Enter a search pattern in the empty field.
2. Select the Background Job tab. This tab displays information for all previous and current Background jobs. The Storage Pool, Number of Concurrent Jobs, and Compressibility Ratio fields contain the default information entered in the Configuration tab. 3. 4. 5. 6. If you want the job to stop after a particular time period, enter it in the Test Duration field. Otherwise, leave the Unlimited box checked to allow the test to run indefinitely.
The log monitor table displays:
• Time — the date and time the decompression error was logged in the system log.
• SDev Number — the Set Device number logged in the decompression error.
• LBA — Logical Block Address, representing the hex value of the logical location of the error in the RAID set.
• Offset — the distance in hex from the beginning of the LBA to the occurrence of the decompression error.
• Length — the length in hex of the decompression error.
• Offset
• Length
• UUID
• IP address
• Enclosure number
• Range of suggested disk numbers within the enclosure
• Part number of the faulty drive
Jobs are only logged in the event of a job failure.
4. On the task bar, select Clear Compression Faults. The screen refreshes and the correct status is displayed. (If the status does not change, it was already correct.) Any incorrect fault notifications are cleared from the Notifications tab. Trace Log Files You can view the current diagnostic VLS trace log files for troubleshooting purposes. You can also save one or more of the trace log files to external text files, or to a single zip file to create a support ticket. Viewing Trace Log Files You 1. 2. 3.
7. Right-click Download.
8. Select Save Target As. The name of a zip file is displayed in the File name box. Do not change the generated file name.
9. Click Save.
10. Click Close.
11. Click Finish.
NOTE: Some versions of Internet Explorer will not download support tickets with a file size greater than 2 GB. VLS systems that are large or have been running a long time may generate larger support tickets.
12 CLI Command Set This section describes the VLS command-line interface (CLI) command set. The CLI allows you to remotely configure, manage, and monitor the VLS over the LAN using a secure shell session. It also allows you to locally configure, manage, and monitor the VLS through the serial connection. Commands There are two types of CLI commands: • CLI-only commands Commands that are processed by the CLI and affect only the CLI.
Output Commands
Use the CLI output commands to control the CLI output and to display help information for the CLI commands.
Table 25 CLI Output Commands
Command Description
trace Displays the stack trace after an exception has occurred.
verbose Toggles verbose output on and off. When on, all messages are output to the screen.
version Indicates the current CLI version. If verbose is on, the module revisions are also displayed.
help Displays CLI command usage information.
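As an illustration, a remote session typically begins with a secure shell connection and can use the output commands in Table 25 to orient itself. The following is only a sketch: the account name, IP address, and prompt character are placeholders, and the exact responses may differ on your system.
ssh <administrator account>@<VLS node IP address>
> version
> verbose
> help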
Table 26 CLI Network Settings Configuration Commands (continued) Command Description getDateTime Displays the day, date, time, time zone, and year (such as Mon March 14 11:30:46 EST 2005). setDateTime Sets the date and time. Where the options are: -d <”s”> - Date and time in yyyy-mm-dd hh:mm format (hh is 24 hour from 0) (required). Example: setDateTime -d “2009-06-09 09:45:00” -h - Displays command usage information (optional) commitConfig NOTE: Saves the system values changed using setConfigValue.
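For example, a session that checks and then sets the system clock with the Table 26 commands might look like the following sketch (the prompt character is illustrative, and the date string shown is the sample format given in the table):
> getDateTime
Mon March 14 11:30:46 EST 2005
> setDateTime -d "2009-06-09 09:45:00"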
Table 27 CLI Configuration Commands (continued) Command Usage 1 getOverSubscription Returns whether the oversubscription feature is enabled or disabled and the capacity remaining percentage for notification alert. Oversubscription is enabled when enabled = 0. Oversubscription is disabled when enabled = 1. getLibTypes Returns a list of available library emulation types. Displays each library emulation's name, type, product, revision, and vendor information.
Table 27 CLI Configuration Commands (continued) Command Usage 1 -p - Product (DLT7000, SDLT320, ...) (required) -pm - FC port to which this tape drive is mapped. (required) -r - Revision (R138, ...) (required) -t - Tape drive type name (required) -v - Vendor (Quantum, HP, ...) (required) -y - Tape drive type (3, 4, ...) (required) -h - Displays command usage information (optional) getTapeDrives Returns a list of all tape drives defined in the VLS.
Table 27 CLI Configuration Commands (continued) Command Usage 1 getCartTypes Returns a list of available cartridge emulation types. Displays each cartridge emulation's name, type, and capacity information. Where the options are: -l - List only licensed types (optional) -h - Displays command usage information (optional) getCartTypesByTape Returns a list of available cartridge emulation types for the tape drive specified. Displays each cartridge emulation's name, type, and capacity information.
Table 27 CLI Configuration Commands (continued) Command Usage 1 removeCartridge Deletes the specified cartridge and its user data from the VLS. Where the options are: -a - VLS filename of cartridge to delete (required) -b - Barcode value of cartridge to delete (required) -c - Capacity of cartridge to delete in gigabytes (required) -f - Force.
Table 27 CLI Configuration Commands (continued) Command Usage 1 listAccessMode Lists the current host access mode for all enabled hosts in the system. setAccessMode Sets the host access mode for all enabled hosts in the system. setAlias Sets the alias for the hostname of the specified host. removeHost Deletes the specified host from the SAN list. addLunMap Adds the specified device to the host. listLunMap Lists the host LUN map for specified device.
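As a usage sketch drawing on the Table 27 commands (the prompt character is illustrative and the output wording may differ), you might review the available emulation types and the oversubscription setting before building a new virtual library:
> getLibTypes
> getCartTypes -l
> getOverSubscription
Recall from the table that enabled = 0 in the getOverSubscription output means oversubscription is enabled, and enabled = 1 means it is disabled.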
Table 28 CLI Management Commands (continued) Command Usage 1 -y - Cartridge emulation type (2, 3, ...) (required) -h - Displays command usage information (optional) restartEmulations Restarts the VLS device emulations. restartCommandViewVLS Restarts Command View VLS. restartSystem Shuts down and restarts the VLS node. shutdownSystem Shuts down the VLS node so it can be powered off. shutdownNode Shuts down the VLS node so it can be powered off.
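For instance, after changing LUN mappings you can restart the device emulations from the CLI instead of from Command View VLS. This is only a sketch (the prompt character is illustrative); note the earlier caution that restarting emulations can change default virtual device LUN numbers when there is a gap in the numbering:
> restartEmulations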
Table 29 CLI Monitoring Commands (continued) Command Usage 1 getNotificationsDate Returns all the notification alert messages that occurred starting with the specified date. Where the options are: -d - mm/dd/yy on or after this date (required) -h - Displays command usage information (optional) deleteNotifications Deletes the specified notification alerts from the VLS.
Table 29 CLI Monitoring Commands (continued) Command Usage 1 -h - Displays command usage information (optional) getSnmp Returns the SNMP management console configuration settings for notification alerts. deleteSnmpServer Deletes the specified SNMP management console from the SNMP notification alert settings. Where the options are: -a - SNMP server IP address (required) -c - VLS node IP address (required) -f - Force.
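For example, to review recent alerts and the configured SNMP consoles from the CLI (a sketch only; the date is arbitrary, the prompt character is illustrative, and output formatting may differ):
> getNotificationsDate -d 06/15/12
> getSnmp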
13 Component Identification This section provides illustrations and descriptions of the node, disk array enclosure, Fibre Channel (FC) switch, and Ethernet switch components, LEDs, and buttons. NOTE: For lights that blink or flash, the frequency in Hz is approximately the number of blinks or flashes per second. VLS9000 Node Components, LEDs, and Buttons This section identifies and describes the front and rear panel components, LEDs, and buttons of the VLS nodes.
Item Description Status Off = Power cord is not attached, power supply failure has occurred, no power supplies are installed, facility power is not available, or disconnected power button cable. 2 UID button/LED Blue = Identification is activated. Flashing blue = System is being remotely managed. Off = Identification is deactivated. 3 Internal health LED Green = System health is normal. Amber = System health is degraded.
Item Description 4 Quad port FC card, host port, port 1 5 Power supply 2 6 Power supply 1 7 NIC 2, on primary node connects to port 1 of switch 2810-24G 8 NIC 1, on primary node only, connects to the customer-provided external network (array) 9 Keyboard connector 10 Mouse connector 11 Video connector 12 Serial connector to access CLI 13 Rear USB connector 14 USB connector, on primary node connects to USB/Ethernet adapter, then to port 1 of switch 2510-24 15 iLO 2 NIC connector (serv
Item Description Status Off = No activity exists. 9 10/100/1000 NIC 2 link LED Green = Link exists. Off = No link exists. 10 UID button/LED Blue = Identification is activated. Flashing blue = System is being managed remotely. Off = Identification is deactivated.
Item Description 14 Power supply connector 2 15 Internal USB connector 16 System battery 17 PCI riser board connector 2 18 PCI riser board connector 1 Accessing the HP Systems Insight Display To eject the HP Systems Insight Display: 1. Press and release the display. 2. Extend the display from the chassis. The display can be rotated up to 90 degrees. HP Systems Insight Display and LEDs The display provides status for all internal LEDs and enables diagnosis with the access panel installed.
Item Description Status 1 Online spare memory LED Green = Protection enabled Flashing amber = Memory configuration error Amber = Memory failure occurred Off = No protection 2 Mirrored memory LED Green = Protection enabled Flashing amber = Memory configuration error Amber = Memory failure occurred Off = No protection All other LEDs Amber = Failure Off = Normal.
HP Systems Insight Display LED and color / Internal health LED color / Status
• PPM failure, slot X (amber): Red = One or more of the following conditions may exist: the PPM in slot X has failed, or a PPM is not installed in slot X but the corresponding processor is installed.
• FBDIMM failure, slot X (amber): Red = FBDIMM in slot X has failed. Amber = FBDIMM in slot X is in a pre-failure condition.
• FBDIMM failure, all slots in one bank (amber): Red = One or more FBDIMMs has failed.
Hard Drive LED Combinations
Online/activity LED (green) / Fault/UID LED (amber/blue) / Interpretation
• On, off, or flashing / Alternating amber and blue: The drive has failed, or a predictive failure alert has been received for this drive; it also has been selected by a management application.
• On, off, or flashing / Steadily blue: The drive is operating normally, and it has been selected by a management application.
VLS9200 Node Components, LEDs, and Buttons This section identifies and describes the front and rear panel components, LEDs, and buttons of the VLS nodes. Front Panel Components Item Description 1 Hard drive 1 2 Hard drive 2 3 DVD-ROM drive 4 Hard drive blank 5 Hard drive blank 6 Video connector 7 HP Systems Insight Display 8 Front USB connector Front Panel LEDs and Buttons Item Description Status 1 UID button/LED Blue = Identification is activated.
Item Description Status Off = System health is normal (when in standby mode). 3 Power On/Standby button and system power LED Green = System is on. Amber = System is in standby, but power is still applied. Off = Power cord is not attached, power supply failure has occurred, no power supplies are installed, facility power is not available, or the power button cable is disconnected.
1 (PCIe2 = Gen2 signaling rate, x8 = physical connector link width, (8, 4, 2, 1) = negotiable link widths) Rear Panel LEDs and Buttons Item Description Status 1 10/100/1000 NIC activity LED Green = Activity exists. Flashing green = Activity exists. Off = No activity exists. 2 10/100/1000 NIC link LED Green = Link exists. Off = No link exists. 3 iLO 3 NIC activity LED Green = Activity exists. Flashing green = Activity exists. Off = No activity exists. 4 iLO 3 NIC link LED Green = Link exists. Off = No link exists.
System Board Components Item Description 1 NMI jumper 2 System maintenance switch 3 10 Gb sideband connector 4 SATA DVD-ROM drive connector 5 SAS cache module connector 6 Power button connector 7 Hard drive data connector 1 (drives 1–4) 8 Hard drive data connector 2 (drives 5–8) 9 Processor 1 DIMM slots (9) 10 Fan module 4 connector 11 Processor socket 1 (populated) 12 Fan module 3 connector 13 Fan module 2 connector 14 Processor socket 2 15 Fan module 1 connector 16 Proces
Item Description 24 PCI power connector 25 TPM connector 26 PCIe riser board connectors (2) Accessing the HP Systems Insight Display You access the HP Systems Insight Display the same way for the VLS9000 and VLS9200 systems. See “Accessing the HP Systems Insight Display” (page 178). HP Systems Insight Display and LEDs The display provides status for all internal LEDs and enables diagnosis with the access panel installed. To view the LEDs, access the HP Systems Insight Display.
NOTE: The HP Systems Insight Display LEDs represent the system board layout. HP Systems Insight Display LEDs and Internal Health LED Combinations When the internal health LED on the front panel illuminates either amber or red, the server is experiencing a health event. Combinations of illuminated system LEDs and the internal health LED indicate system status.
IMPORTANT: If more than one DIMM slot LED is illuminated, further troubleshooting is required. Test each bank of DIMMs by removing all other DIMMs. Isolate the failed DIMM by replacing each DIMM in a bank with a known working DIMM. Hard Drive LEDs The hard drive LEDs are the same for the VLS9000 and VLS9200 systems. See “Hard Drive LEDs” (page 180). Hard Drive LED Combinations The hard drive LED combinations are the same for the VLS9000 and VLS9200 systems. See “Hard Drive LED Combinations” (page 181).
Front Panel LEDs and Buttons Item Description Status 1 UID button/LED Blue = Identification is activated. Flashing blue = System is being remotely managed. Off = Identification is deactivated. 2 System health LED Green = System health is normal. Amber = System health is degraded. To identify the component in a degraded state, see HP Systems Insight Display LEDs and Internal Health LED Combinations. Red = System health is critical.
Rear Panel Components Item Description 1 PCI slot 5 2 PCI slot 6 3 PCI slot 4 4 PCI slot 2 5 PCI slot 3 6 PCI slot 1 7 Power supply 2 8 Power supply 1 9 USB connectors (2) 10 Video connector 11 NIC 1 connector 12 NIC 2 connector 13 Mouse connector 14 Keyboard connector 15 Serial connector 16 iLO 3 connector 17 NIC 3 connector 18 NIC 4 connector 190 Component Identification
Rear Panel LEDs and Buttons Item Description Status 1 Power supply LED Green = Normal Off = System is off or power supply has failed 2 UID button/LED Blue = Identification is activated. Flashing blue = System is being managed remotely. Off = Identification is deactivated. 3 iLO NIC activity LED Green = Activity exists. Flashing green = Activity exists. Off = No activity exists. 4 iLO NIC link LED Green = Link exists. Off = No link exists.
Front Panel LEDs and Buttons Item Description Status 1 Maintenance button Dual-function momentary switch. Its purpose is to reset the switch or to place the switch in maintenance mode. To reset the switch, use a pointed tool to momentarily press and release (less than 2 seconds) the Maintenance button. The switch will respond as follows: 1. All the chassis LEDs will illuminate except the System Fault LED. 2.
Heartbeat LED Blink Patterns The Heartbeat LED indicates the operational status of the switch. When the POST completes with no errors, the Heartbeat LED blinks at a steady rate of once per second. When the switch is in maintenance mode, the Heartbeat LED illuminates continuously. All other blink patterns indicate critical errors. In addition to producing a Heartbeat error blink pattern, a critical error also illuminates the System Fault LED.
Front Panel LEDs and Buttons Item Description Status 1 Maintenance button Dual-function momentary switch. Its purpose is to reset the switch or to place the switch in maintenance mode. To reset the switch, use a pointed tool to momentarily press and release (less than 2 seconds) the Maintenance button. The switch will respond as follows: 1. All the chassis LEDs will illuminate except the System Fault LED. 2.
Rear Panel Components Item Description 1 Power supply 0 2 Power supply 1 Rear Panel LEDs and Buttons Item Description Status 1 Power supply status LED Green = Power supply is receiving AC voltage and producing the proper DC voltages. Off = Power supply is not receiving AC voltage. 2 Power supply fault LED Amber = Power supply fault exists and requires attention. Off = Power supply is operating normally.
Item Description 3 Fibre Channel ports 4 XPAK transponder ports (not in use) Front Panel LEDs and Buttons This section provides images and descriptions of the front panel LEDs and buttons of the Fibre Channel Switch 8/20q. Item Description Status 1 Input Power LED Green = The switch is receiving power. Off = One of these conditions exists: • The switch is NOT receiving power. • The switch is in maintenance mode. 2 Heartbeat LED Green = The switch is in maintenance mode.
Item Description Status Off = One of these conditions exists: • The port connection is broken. • An error occurred that disabled the port. Port Activity LED (on bottom for each port) Green = Data is passing through the port. Ethernet Switch 2510–24 Components, LEDs, and Buttons This section provides images and descriptions of the front and rear panels of the Ethernet Switch 2510–24.
Item Description Status 4 Fault LED Orange = On briefly after the switch is powered on or reset, at the beginning of switch self test. If this LED is on for a prolonged time, the switch has encountered a fatal hardware failure, or has failed its self test. Blinking orange1 = A fault has occurred on the switch, one of the switch ports, or the fan. The Status LED for the component with the fault will blink simultaneously. Off = The normal state; indicates that there are no fault conditions on the switch.
1 The blinking behavior is an on/off cycle once every 1.6 seconds, approximately. Ethernet Switch 2810–24G Components, LEDs, and Buttons This section provides images and descriptions of the front and rear panels of the Ethernet Switch 2810–24G.
Item Description Status 5 Power LED Green = The switch is receiving power. Off = The switch is not receiving power. 6 RPS status LED Green = An HP ProCurve EPS/RPS unit is connected and operating correctly. The EPS/RPS could be powering the unit. Blinking green1 = The EPS/RPS is connected but may be powering another switch or the EPS/RPS has experienced a fault. Off = The EPS/RPS is not connected or is not powered. 6 Fan status LED Green = The cooling fan is working properly.
Item Description Status • If the Full Duplex (FDx) indicator LED is lit, the port LEDs light for those ports that are operating in full duplex mode. • If the Speed (Spd) indicator LED is lit, the port LEDs behave as follows to indicate the connection speed for the port: 10 T/M LEDs ◦ OFF = 10 Mb/s ◦ Flashing = 100 Mb/s (the flashing behavior is a repeated on/off cycle once every 0.5 sec.
Item Description Status Off = The switch is not operating correctly or is not receiving power. 2 Fault LED Orange = The switch has encountered a fatal hardware failure or has failed its self-test. This LED comes on briefly after the switch is powered on or reset, at the beginning of switch self test. Blinking orange = A fault has occurred on the switch, one of the switch ports, or the fan. The status LED for the component with the fault will blink simultaneously.
Item Description Status Spd = Indicates that the port LEDs are displaying the connection speed at which each port is operating. If the port LED is off, the port is operating at 10 Mb/s. If the port LED is flashing, the port is operating at 100 Mb/s, and if the port LED is on continuously, the port is operating at 1000 Mb/s. Usr = Indicates the port is displaying customer-specified information. 5 Auxiliary LED Blinking green = Data transfer between the switch and a USB device is occurring.
Item Description Status 9 Reset button Used to reset the switch while it is turned on. This action clears any temporary error conditions that may have occurred and executes the switch self-test. When pressed with the Clear button in a specific pattern, any configuration changes you may have made through the switch console, the Web browser interface, and SNMP management are removed, and the factory default configuration is restored to the switch.
Item Description 3 Drives 6, 7, and 8 4 Drives 9, 10, and 11 Front Panel LEDs Item Description Status 1 Enclosure ID LED A hex LED shows the enclosure ID, which enables you to correlate an enclosure with logical views presented by Command View VLS. The enclosure ID for a base disk array enclosure is zero (0); the enclosure ID for an attached expansion disk array enclosure is nonzero. (“F” for 3–4 seconds at power up) Continuous “F” = The display has a problem.
Rear Panel Components Base Disk Array Enclosure Item Description 1 Power module 0 2 RAID controller 0 3 RAID controller 1 4 Fibre Channel port 0 5 Fibre Channel port 1 (not used) 6 Service port (for service only) 7 CLI port (not used) 8 Ethernet port 9 SAS output port 10 Power module 1 Expansion Disk Array Enclosure Item Description 1 Power module 0 2 Expansion controller 0 3 Expansion controller 1 4 SAS port 0, input port 5 Service port (for service only) 206 Component I
Item Description 6 SAS port 1, output port 7 Power module 1 Rear Panel LEDs and Buttons Base Disk Array Enclosure Item Description Status 1 Power switch1 Toggle, where O is Off. 2 AC Power Good LED Green = AC power is on and input voltage is normal. Off = AC power is off or input voltage is below the minimum threshold. 3 DC-Fan Fault/ Service Yellow = DC output voltage is out of range or a fan is operating below the minimum Required LED required RPM. Off = DC output voltage is normal.
Item Description Status 11 Fibre Channel port activity LED Blinking green = At least one FC port has I/O activity. 12 Off = The FC ports have no I/O activity. Ethernet link status LED Green = The Ethernet link is up. Off = The Ethernet port is not connected or the link is down. 13 Ethernet activity LED Blinking green = The Ethernet link has I/O activity. Off = The Ethernet link has no I/O activity. 14 SAS port status LED Green = The port link is connected.
1 Some power supply models do not have a power switch. In this case, power down the enclosure by unplugging the power cords from the enclosure. VLS9200 Disk Array Enclosure Components, LEDs, and Buttons This section provides images and descriptions of the front and rear panels of the VLS9200 disk array enclosures.
Item Description Status Fluttering green = There is hard drive activity, or the array is running a background parity check of the data in the RAID set. Off = The hard drive has no power, is offline, or not configured. 4 Unit locator LED (on for 3–4 seconds at power up, then off) Blinking white = Enclosure is selected (for identification purposes only). Off = Not active. 5 Fault/Service required Amber = An enclosure-level fault occurred. Service action is required.
Capacity Enclosure Item Description 1 Power module 0 2 Expansion controller 0 3 Expansion controller 1 4 SAS port 0, input port 5 Service port (for service only) 6 SAS port 1, output port 7 Power module 1 Rear Panel LEDs and Buttons Base Enclosure Item Description Status 1 Power switch1 Toggle, where O is Off. 2 AC Power Good LED Green = AC power is on and input voltage is normal. Off = AC power is off or input voltage is below the minimum threshold.
Item Description Status 5 FC link speed (S) LED Green = The data transfer rate is 4 Gbps. Off = The data transfer rate is 2 Gbps. 6 Unit locator LED Blinking white = RAID controller is selected (for identification purposes only). Off = Not active. 7 OK to remove LED Blue = The RAID controller can be removed. Off = The RAID controller is not prepared for removal. 8 Fault/Service required Yellow = A fault has been detected or a service action is required.
Item Description Status 3 DC-Fan Fault/Service Required LED Yellow = DC output voltage is out of range or a fan is operating below the minimum required RPM. Off = DC output voltage is normal. 4 Unit locator LED Blinking white = Expansion controller is selected (for identification purposes only). Off = Not active. 5 SAS port 0, input port, status LED Green = The port link is connected. Off = The port is empty or the link is down.
14 Component Replacement This section provides detailed instructions for replacing customer-replaceable VLS components. See Customer Self Repair for details. CAUTION: Always replace components with the same make, size, and type of component. Changing the hardware configuration voids the warranty. Safety Considerations Before performing component replacement procedures, review all the safety information in this guide.
Warnings and Cautions Before removing the node access panel, be sure that you understand the following warnings and cautions. WARNING! To reduce the risk of electric shock or damage to the equipment: • Do not disable the AC power cord grounding plug. The grounding plug is an important safety feature. • Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times. • Unplug the power cord from each power supply to disconnect power to the equipment.
Removing a VLS Node from the Rack To remove the node from a rack: 1. Power off the node. See Powering Off the System. 2. Disconnect the cabling. 3. Extend the node from the rack. See Extending a VLS Node from the Rack. 4. Remove the node from the rack. For more information, refer to the documentation that ships with the rack mounting option. 5. Place the node on a sturdy, level surface.
2. Pull the hard drive (3) out of the node by the latch handle (2). Figure 27 Removing a Node Hard Drive To replace the component, pull the latch handle (2) out as far as it can go and slide the drive into the bay until the latch mechanism engages the chassis. Then, firmly push in the latch handle to lock the drive in the drive bay.
DVD-CD Drive CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all bays are populated with either a component or a blank. 1. Power off the node. NOTE: The ejector button for the CD-ROM drive is recessed to prevent accidental ejection; it may be helpful to use a small, flat, blunt object, such as a key or pen, to push the ejector button. 2. Press the ejector button in firmly until the DVD-CD drive ejects (1). 3. Pull the DVD-CD drive out of the node (2).
2. Press the power supply release lever (1), and then pull the power supply from the node.
To replace the component:
WARNING! To reduce the risk of electric shock or damage to the equipment, do not connect the power cord to the power supply until the power supply is installed.
1. Remove the protective cover from the connector pins on the power supply.
2. Slide the power supply into the bay until it clicks.
3. Use the strain relief clip to secure the power cord.
4. 5. 6.
Fan Module
CAUTION: Do not operate the node for long periods without the access panel. Operating the node without the access panel results in improper airflow and improper cooling that can lead to thermal damage.
1. Power off the node.
2. Extend or remove the node from the rack. See Extending a VLS Node from the Rack or Removing a VLS Node from the Rack.
3. Remove the access panel.
4. To remove fan module 1: a. Remove the power supply air baffle. b. Remove fan module 1.
5. To remove fan module 2 or 3: a. Remove the power supply air baffle. b. Remove fan module 2 or 3.
To replace the component, reverse the removal procedure.
IMPORTANT: After installing the fan module, firmly press the top of the module connectors to ensure the connectors are seated properly.
FBDIMM
1. Power off the node.
2. Extend or remove the node from the rack. See Extending a VLS Node from the Rack or Removing a VLS Node from the Rack.
3. Remove the access panel.
4. Open the FBDIMM slot latches.
5.
NOTE: FBDIMMs do not seat fully if turned the wrong way. When replacing an FBDIMM, align the FBDIMM with the slot and insert the FBDIMM firmly (1), pressing down until the FBDIMM snaps into place. When fully seated, the FBDIMM slot latches (2) lock into place. Replacing a Primary Node CAUTION: Each VLS node weighs 17.9 kg (39.5 lb) fully loaded. At least two people are required to lift and move each node. To replace a primary node: 1. Remove the existing node from the rack: a. Power off the system.
a. On the primary node, connect to the serial port or use the keyboard and mouse ports to connect to a console.
b. Power on the primary node. After several minutes, a menu will appear on your monitor asking whether the node is a primary (master, m) or secondary (slave, s) node.
c. Enter m. The node will then run cable checks and configuration checks. After the checks are complete, the node will reboot automatically.
d. Wait for the primary node to fully boot.
4. Disconnect all power cords from the node.
5. From the back of the node, make a note of all cable connections, then disconnect the cables.
6. Remove the node from the front of the chassis. See “Replacing a Primary Node” (page 222) for details.
7. Install the replacement VLS9200 node into the rack.
8. Reconnect the cables to the new node. The VLS9200 master node does not need the USB dongle. The Ethernet switch cables should now connect to NIC ports 3 and 4.
9. Reconnect the power cords to the node.
10.
12. Power on the new switch. Fibre Channel Transceiver Replacement To replace a Fibre Channel transceiver 1. Power off the system. See Powering Off the System. 2. Disconnect the Fibre Channel cable by squeezing the end of the cable connector. If removing more than one cable, make sure that they are labeled before removing them. The cables are fragile; use care when handling them. CAUTION: Mishandling Fibre Channel cables can degrade performance. Do not twist, fold, pinch, or step on cables.
2. Pull the drive out of the disk array by its latch handle about 3 cm (1 inch) so that it is disconnected from the backplane connector. CAUTION: A drive with a rapidly spinning disk can be difficult to hold securely. To decrease the chance of dropping the drive, do not remove it completely from the disk array until the disk has stopped rotating. This usually takes a few seconds. 3. When the disk is no longer spinning, remove the drive from the disk array. To replace the component: 1.
4. Rotate the latch downward to about 45 degrees, supplying leverage to disconnect the power module from the internal connector.
5. Use the latch to pull the power module out of the chassis.
NOTE: Do not lift the power module by the latch. This could break the latch. Hold the power module by the metal casing.
6. Position the new power module so that the AC connector and power switch are on the right side, and slide the power module into the power module slot as far as it will go.
7.
IMPORTANT: RAID controllers should only be replaced while the array is powered up to ensure that the array will copy configuration data from the surviving controller into the newly added controller. CAUTION: When removing a controller, allow 60 seconds for the failover to complete before fully inserting a replacement. When you remove a controller with the disk array enclosure powered on, install a replacement controller or a blank within two minutes. Otherwise, the disk array enclosure might overheat.
7. Press the latches upward until they are flush with the top edge of the controller, then turn the thumbscrew on each latch clockwise until it is finger-tight. The controller begins initializing. The Power On/OK LED illuminates green when the controller completes initializing and is online.
8. Connect the disconnected cables to the new controller in reverse order of Step 1.
9. If you are replacing a RAID controller, restore the failed path: a. In Command View VLS, access the System tab. b.
15 Disaster Recovery This section details the VLS disaster recovery procedures. It includes recovering from operating system failures, disk array failures, and node failures. Recovering from Operating System Failure Re-install the operating system if it becomes corrupted or is lost as a result of node RAID volume failure. CAUTION: Only install the VLS operating system on the node hard drives. Installing any other operating system on the node hard drives voids the warranty.
Manually Restoring the System After re-installing the operating system, the warm failover feature restores the licenses and configuration settings. However, if the warm failover does not occur (for example, due to a corrupt or missing file), the VLS virtual library configuration and network settings can be quickly restored from the configuration file created by performing a Save Configuration. See Restoring the Configuration from a Configuration File.
8. Click Next Step. A message displays indicating that the file was uploaded successfully.
9. Click Next to start loading the configuration file. After the configuration file is loaded, the system automatically applies the configuration and reboots.
Manually Rebuilding the Virtual Library Configuration
If you are unable to manually restore the system from the configuration file, you must manually reconfigure the network settings and rebuild the virtual library configuration: 1.
3. Install the operating system on the new hard drives and restore the VLS. See Recovering from Operating System Failure. Recovering from a Primary Node Failure using a Cold Spare Primary Node On a multi-node VLS, the primary node maintains the configuration for the entire VLS library. In the unlikely event of a primary node failure, the VLS library would be unavailable until the node is replaced or repaired.
4. Record the backend Fibre Channel WWPNs from the console and configure them for the automigration tape libraries. The Fibre Channel host port WWPNs on the spare primary node will be set to the same values as on the original primary node when the VLS configuration is restored.
• Cartridges configured
• Automigration configuration
• Host LUN mapping configuration
7. Power up all secondary nodes. The boot up can take 10 to 20 minutes.
8. Verify all secondary nodes. At this point, your VLS system is up and in working order. Do not connect the old primary node to the VLS because its configuration will be out of sync with the system.
9. Repair the old primary node and then Quick Restore it; do not configure the node after the Quick Restore.
16 Support and Other Resources Related Information Documents HP provides the following documentation to support this product: • HP Virtual Library System release notes • HP VLS Solutions Guide • HP VLS9000 Virtual Library System User Guide • HP Virtual Library System installation posters See the media kit provided with the VLS and our website for related documentation. Websites • HP website: http://www.hp.com • HP VLS Support: http://hp.com/support/vls • HP VLS Manuals: http://www.hp.
Table 30 Document Conventions (continued)
Convention Element
Monospace text • File and directory names • System output • Code • Commands, their arguments, and argument values
Monospace, italic text • Code variables • Command variables
Monospace, bold text Emphasized monospace text
WARNING! Indicates that failure to follow directions could result in bodily harm or death.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
NOTE:
Rack Stability Rack stability protects personnel and equipment. WARNING! To reduce the risk of personal injury or damage to equipment: • Extend leveling jacks to the floor. • Ensure that the full weight of the rack rests on the leveling jacks. • Install stabilizing feet on the rack. • In multiple-rack installations, fasten racks together securely. • Extend only one rack component at a time. Racks can become unstable if more than one component is extended.
After subscribing, locate your products by selecting Business support and then Storage under Product Category. Customer Self Repair HP customer self repair (CSR) programs allow you to repair your Storage product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR.
17 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A Troubleshooting This appendix lists iLO troubleshooting features, and also describes some common issues you may encounter while configuring or using the VLS including automigration/replication and deduplication issues. Using iLO The VLS supports many of the features of iLO 2 Standard (non-licensed). If you are troubleshooting the VLS, especially if the system is down, you may find these features helpful: • Power on the VLS. • Power off the VLS.
Symptom Possible causes Solution virtual devices the host can see, such that the virtual device LUN numbers include a LUN0 and no gaps in the LUN numbering. See LUN Masking and LUN Mapping for instructions. There is a gap in the LUN numbering on the FC host port. Most operating systems will stop looking for virtual devices on an FC host port once a gap in the LUN numbering is detected. For example, if LUN0, LUN1, and LUN3 are mapped to an FC host port, the operating system will see LUN0 and LUN1.
Symptom Possible causes Solution
NetBackup does not display the cartridge barcodes for Autoloader library emulations on the VLS. Real autoloader libraries do not support barcodes. This is normal and will not cause problems.
HP Data Protector 5.1 does not display the VLS cartridge barcodes. By default, barcode reader support is turned off in Data Protector 5.1. To turn on barcode reader support in Data Protector: 1. Click Device & Media. 2. Right-click the VLS library name and select Properties.
Symptom Possible causes Solution 4. Check Increase performance by disabling support for Microsoft Backup Utility. 5. Repeat this procedure for each server visible to each SDLT tape drive. At reboot, there are spurious critical This is expected behavior and does FC port failures reported as not indicate a problem. notification alerts, usually on every port. Later, Info notification alerts for each FC host port are generated, indicating the FC ports are operating normally.
Deduplication Issues Symptom Possible causes Solution The VLS is not deduplicating the backup jobs. The VLS does not free up storage on a cartridge until: Consider using cartridges that are smaller than the sum of your daily backup jobs so the cartridges deduplicate sooner.
B Specifications This section provides the basic VLS node, Fibre Channel switch, Ethernet switch, and disk array enclosure specifications. For a complete list of specifications, see the HP QuickSpecs for each product. VLS9000 Node Item Specification Height 4.3 cm (1.70 in) Depth 69.2 cm (27.3 in) Width 42.6 cm (16.8 in) Weight (fully loaded) 17.9 kg (39.5 lb) Weight (no drives installed) 14.1 kg (31.
Item Specification Cache 12 GB L3 Memory type DDR3 RDIMM Standard memory DDR3 Maximum memory Up to 192 GB Memory slots 18 DIMM Storage Storage type Hot-plug SFF SATA Maximum internal storage 4 TB Maximum internal drive bays 8 Expansion slots 2 PCIe x8 Gen 2 mezzanine Storage controller Smart Array P410i Controller VLS9200 High Performance Node Specification Value Physical Dimensions (HxWxD) 8.59 x 44.55 x 69.22 cm (3.38 x 17.54 x 27.
VLS9000 Disk Array Enclosure Item Specification Dimensions 59.7 x 44.7 x 8.8 cm (23.5 x 17.6 x 3.5 in) Weight • Controller enclosure (with drives): 33.6 kg (74 lb) • Expansion enclosure (with drives): 31.3 kg (69 lb) Input frequency 50/60 Hz Input voltage 208 to 264 VAC Input current requirement • Controller enclosure: ◦ Spin up: 2.7 A at 220 V, 60 Hz ◦ Operating: 1.7 A at 220 V, 60 Hz • Expansion enclosure: Steady-state maximum input power ◦ Spin up: 2.
Fibre Channel Switch 4/10q Item Specification Fibre Channel ports 20 universal device ports, 4 stacking (ISL) ports (10 Gbps Fibre Channel, upgradeable to 20 Gbps) Performance • 8 Gbps line speed, full duplex • 10 Gbps and 20 Gbps stacking (ISL) port line speed, full duplex Switch core Non-blocking Fabric latency <0.2μ sec.
Item Specification Temperature • Operating: 41° to 104° F (5° to 40° C) • Non-operating: -4° to 158° F (-20° to 70° C) Humidity • Operating: 10% to 90% non-condensing • Non-operating: 10% to 95%, non-condensing Altitude • Operating: 0 to 10,000 ft. • Non-operating: 0 to 50,000 ft. Shock • Operating: 4 G, 11 ms, 20 repetitions • Non-operating: 30 G, 13 ms, trapezoidal Vibration • Operating: 5-500 Hz, random, 0.2 G • Non-operating: 2-200 Hz, random, 0.
Item Specification Media type • Hot-pluggable, industry-standard SFPs (Small Form Pluggable) for 4Gb ports • Hot-pluggable, industry-standard XPAK optics or copper stacking cables for 10Gb ports Supported SFP types • Shortwave (optical) • Longwave (optical) Media transmission ranges (@ 2 Gb speeds) Optical • Shortwave: 500 m (1,640 ft.) • Longwave: 10 km (6.2 mi.) Cable types 50/62.5 micron multimode fiber optic 9 micron single-mode fiber optic Fabric latency • Less than 0.
Item Specification Access methods • In-band • Ethernet 10/100 BaseT with RJ45 • RS-232 serial port with DB9 Diagnostics • Power-up self-test of all functionality except media modules • Field-selectable full self-test including media modules Fabric services • Simple name server • Fabric zoning ◦ Hardware-based - Access Control List (port) ◦ Name Server (WWN) ◦ Orphan Zoning ◦ All zoning assigned on per-node basis, even across Multi-stage fabrics • Registered State Change Notification (RSCN) •
Item Specification Maximum heat dissipation 68 BTU/hr Voltage 100-127 VAC/200-240 VAC Current 0.75 A /0.4 A Power 20 W Frequency 50/60 Hz Ethernet Switch 2810–24G Item Specification Dimensions 12.7 x 17.4 x 1.7 in. (32.26 x 44.2 x 4.32 cm) 1U height Weight 7.21 lb (3.27 kg) fully loaded Ports 20 auto-sensing 10/100/1000 ports (IEEE 802.3 Type 10Base-T, IEEE 802.3u Type 100Base-TX, IEEE 802.
Item Specification 1000 Mb Latency < 3.4 μs (FIFO 64-byte packets) 10 Gbps Latency < 2.4 μs (FIFO 64-byte packets) Throughput up to 75.7 million pps (64-byte packets) Routing/Switching capacity 101.8 Gbps Switch fabric speed 105.6 Gbps Routing table size 10000 entries MAC address table size 64000 entries Maximum heat dissipation 697 BTU/hr (735.33 kJ/hr) Voltage 100-120/200-240 VAC Idle power 167.6 W Maximum power rating 204.
C Regulatory Information For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number. The regulatory model number is not the marketing name or model number of the product.
Glossary

This glossary defines terms used in this guide or related to this product and is not a comprehensive glossary of computer terms.

A

Accelerated deduplication: A method of deduplication that uses object-level differencing technology. See also deduplication.

appliance: An intelligent device programmed to perform a single well-defined function.
D

deduplication: The process of eliminating duplicate data from the backups on a virtual cartridge to reduce the amount of disk space required.

disk array: Two or more hard drives combined as a single logical unit for increased capacity, speed, and fault-tolerant operation. Disk arrays are logically grouped into a storage pool.

disk mirroring: Also known as data mirroring.
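The deduplication entry above can be made concrete with a small, purely illustrative sketch. It uses the common content-hash approach (each unique block is stored once and referenced thereafter); this is not the VLS accelerated deduplication algorithm, which the guide describes only as object-level differencing, and all names in the sketch are hypothetical.

    import hashlib

    # Illustrative only: a minimal content-hash deduplication store. This is NOT
    # the VLS object-level differencing implementation; it merely shows how
    # keeping each unique block once reduces the disk space a backup consumes.
    class DedupStore:
        def __init__(self):
            self.blocks = {}   # digest -> block data, each unique block kept once
            self.refs = []     # ordered digests that reconstruct the backup stream

        def write(self, block):
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # duplicate blocks are skipped
            self.refs.append(digest)

        def stored_bytes(self):
            return sum(len(b) for b in self.blocks.values())

    store = DedupStore()
    for chunk in (b"A" * 4096, b"B" * 4096, b"A" * 4096):   # third chunk repeats the first
        store.write(chunk)
    print(store.stored_bytes())   # 8192 bytes kept for 12288 bytes written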
I

inputs/outputs per second: A performance measurement for a host-attached storage device or RAID controller.

L

library: A storage device that handles multiple units of media and provides one or more drives for reading and writing them, such as a physical tape library and virtual tape library. Software emulation of a physical tape library is called a virtual tape library. See also virtual tape library.

logical unit number (LUN): An address used in the SCSI protocol to access a device within a target.
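To illustrate the inputs/outputs per second entry above: IOPS, transfer size, and throughput are tied together by simple arithmetic (throughput is approximately IOPS multiplied by the I/O size). The figures below are invented for illustration and are not VLS performance specifications.

    # Invented numbers for illustration only; not VLS performance data.
    iops = 1600                    # I/O operations per second
    io_size_bytes = 64 * 1024      # 64 KiB per operation
    throughput_mib_s = iops * io_size_bytes / (1024 * 1024)
    print(throughput_mib_s)        # 100.0 MiB/s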
R

RAID1-level data storage: A RAID that consists of at least two drives that use mirroring (100 percent duplication of the stored data). There is no striping. Read performance is improved since either disk can be read at the same time. Write performance is the same as for single disk storage.

RAID5-level data storage: A RAID that provides data striping at the byte level and also stripes error correction information. RAID5 configurations can tolerate one drive failure.
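The capacity trade-off behind the RAID1 and RAID5 entries above can be shown with a short sketch. Drive counts and sizes here are hypothetical examples, not VLS array configurations: mirroring halves the usable space, while RAID5 gives up one drive's worth of space to parity and survives a single drive failure.

    # Hypothetical drive counts and sizes; not a VLS array configuration.
    def raid1_usable_tb(drives, drive_tb):
        # Mirroring duplicates all data, so usable space is half the raw space.
        return drives * drive_tb / 2

    def raid5_usable_tb(drives, drive_tb):
        # Striping with distributed parity: one drive's worth of space holds parity.
        return (drives - 1) * drive_tb

    print(raid1_usable_tb(2, 1.0))   # 1.0 TB usable from 2 x 1 TB mirrored drives
    print(raid5_usable_tb(6, 1.0))   # 5.0 TB usable from 6 x 1 TB drives; tolerates one failure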
V

virtual tape: A disk drive buffer that emulates one physical tape to the host system and appears to the host backup application as a physical tape. The same application used to back up to tape is used, but the data is stored on disk. Also known as a piece of virtual media or a VLS cartridge. Data can be written to and read from the virtual tape, and the virtual tape can be migrated to physical tape.
Index

Symbols
20–port Fibre Channel Switch
  specifications, 249
40–port connectivity kit
  shipping carton contents, 19

A
accelerated deduplication
  see deduplication
adding workload assessment template, 153
adding slot mapping
  LAN/WAN, 64
  SAN, 63
additional information, 236
Advanced Search (for slots), 68
array, 122
  see also disk array
  adding, 45
  dual pathing, 122
  load balancing, 122
  powering off, 96
  powering on, 91
assembly, overview, 20
At End of the Policy Window
  LAN/WAN, 58
  SAN, 57
authorized reseller, 23
  systems without deduplication, 69
  summary, 80
  unloading, 130
  viewing details, 127, 168
  viewing in automigration source libraries, 61
  viewing the slot details, 66
  viewing the status, 66
certificate error, in browser, 100
changing cartridge access, 128
changing cartridge capacity, 128
changing slot mapping
  LAN/WAN, 64
  SAN, 63
Clear All Faults, 160
Clear Compression Faults, 160
clearing leftover disks, 125
CLI
  command set, 163
  configuration commands, 165
  connection commands, 163
  conventions, 163
  help, 164
  mana
  connecting, 53
  import/export details, 68
  managing, 53
  status, 65
  status of all, 65
  unmanaging, 53
DHCP
  deselecting, 165
  selecting, 107, 164
diagnostics
  see VLS Critical Diagnostics Services
disaster recovery
  disk array enclosure RAID volume failure, 232
  node RAID volume failure, 232
  operating system failure, 230
Discover Unconfigured Storage, 45
disk array enclosure
  adding, 45
  adding hot-spare, 126
  front panel components, 209
  front panel LEDs, 209
  powering off, 96
  powering on, 91
  rack mounting, 25
  rack moun
  rear panel LEDs, 195
Fibre Channel Switch 4/16q
  front panel components, 193
  front panel LEDs, 194
  specifications, 250
Fibre Channel Switch 8/24q
  rack mounting, 37
Fibre Channel transceiver
  replacing, 225
firesafe, 121
  automigration, 62
firmware, updating, 132
Forced Non Deduplicated Copy, 75
fully qualified name, setting, 107, 164

G
gateway to network, setting, 107, 164
Global LAN/WAN Replication Target Settings, 78
glossary, 256
grounding methods, 214

H
hard drive
  replacing, 225
help, obtaining, 238
host
Library Assessment Test, 72
library policy
  editing, 64
licenses
  capacity, 48
  deduplication, 48
  iLO 2 Advanced, 48
  installing, 48
  re-installing, 231
  replication, 48
  Secure Erasure, 48
load balancing, 122
Load Blank Media
  echo copy pool, 60
Load Media for Overwrite
  echo copy pool, 60
Load Media for Restore, 59
logical capacity report, 151
LUN management, 111
  default LUN numbering, 111
  LUN mapping, 112
  LUN masking, 112
  operating system LUN requirements and restrictions, 112
LUNs
  dual pathing on a private LAN,
P
passwords
  changing, 122, 170
  default, 100
  forgot administrator password, 102
Paused (cartridge status), 80
PDUs
  installing, 20
Pending, 80
performance history report, 149
performance reports, 147
physical capacity report, 152, 153
policy
  see library policy
  echo copy pool, 56
polling frequency, setting, 108
power module
  replacing, 226
powering off
  arrays and enclosures, 96
  VLS system, 96
powering on
  arrays, 91
  VLS system, 94
Priority, 58

Q
quarantined virtual disks, defined, 124
quick restore using DVD,
  installing license, 48
secure erasure, 129
secure shell session
  closing, 102
  opening, 102
Send notification if cartridge not migrated in, 57
Send notification if cartridge not replicated in, 58
serial number
  VLS, 133
  warranty, 109
serial user interface
  closing a session, 102
  emergency login, 102
  opening a session, 102
Set RAID Mode, 126
shipping carton contents
  40-port connectivity kit, 19
  base disk array enclosure, 16, 17
  VLS9000 node, 18
  VLS9200 node, 18
Sizing factor, 56
slot mapping
  adding
    LAN/WAN, 64
    S
  Initiate Tape Transport, 74
  Load Blank Media, 60
  Load Media for Overwrite, 60
  Load Media for Restore, 59
  Move Media, 69
  Non Deduplicated Copy, 75
  Rebuild All Storage Pools, 47
  Rebuild Storage Pool, 47
  Reclaim Space, 111
  Restart Emulations in Maintenance Mode, 47
  Run Pool Policy, 48
  Set RAID Mode, 126
  Stop Tape Export, 74
  View Log, 81
technical support, 238
telco racks, 215
text symbols, 237
thresholds for notifications, 152
tools, installation, 14
trace log files
  creating a support ticket, 161
  saving to ind