HP MSA 1040 User Guide

Abstract

This document describes initial hardware setup for HP MSA 1040 controller enclosures. It is intended for storage system administrators familiar with servers and computer networks, network administration, storage system installation and configuration, storage area network management, and relevant protocols.
© Copyright 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Overview  11
    MSA 1040 Storage models  11
    Features and benefits  11
2 Components
    Single-controller configuration
        One server/single network/two switches
    Dual-controller configuration
        Multiple servers/single network
    Can you successfully use the Remote Snap feature?
    Can you view information about remote links?
    Can you create a replication set?
    Can you replicate a volume?
    Can you view a replication image?
    Can you view remote systems?
Figures

1 MSA 1040 Array SFF enclosure: front panel
2 MSA 1040 Array LFF or supported 12-drive enclosure: front panel
3 MSA 1040 Array: rear panel
Tables

1 Installation checklist  19
2 Terminal emulator display settings  39
3 Terminal emulator connection settings
1 Overview

HP MSA Storage models are high-performance storage solutions combining outstanding performance with high reliability, availability, flexibility, and manageability. MSA 1040 enclosure models are designed to meet NEBS Level 3, MIL-STD-810G (storage requirements), and European Telco specifications.

MSA 1040 Storage models

The MSA 1040 controller enclosures support either large form factor (LFF 12-disk) or small form factor (SFF 24-disk) 2U chassis, using either AC or DC power supplies.
2 Components

Front panel components

HP MSA 1040 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF chassis, configured with 24 2.5" SFF disks, is used as a controller enclosure. The LFF chassis, configured with 12 3.5" LFF disks, is used as either a controller enclosure or a drive enclosure. Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2040 6 Gb 3.5" 12-drive enclosure is a supported LFF drive enclosure (see "Drive enclosures").
Disk drives used in MSA 1040 enclosures

MSA 1040 enclosures support LFF/SFF Midline SAS, LFF/SFF Enterprise SAS, and SFF SSD disks. For information about creating vdisks and adding spares using these different disk drive types, see the HP MSA 1040 SMU Reference Guide and the HP SED Drives Read This First document.

Controller enclosure—rear panel layout

The diagram and table below display and identify important component items comprising the rear panel layout of the MSA 1040 controller enclosure.
MSA 1040 controller module—rear panel components

Figure 4 shows host ports configured with either 8 Gb FC or 10GbE iSCSI SFPs. The SFPs look identical. Refer to the LEDs that apply to the specific configuration of your host ports.
IMPORTANT: See Connecting to the controller CLI port for information about enabling the controller enclosure USB Type B CLI port for accessing the command-line interface via a telnet client.

Drive enclosures

Drive enclosure expansion modules attach to MSA 1040 controller modules via the mini-SAS expansion port, allowing addition of disk drives to the system. MSA 1040 controller enclosures support adding the 6 Gb drive enclosures described below.
The CompactFlash card is located at the midplane-facing end of the controller module as shown below.
3 Installing the enclosures

Installation checklist

The following table outlines the steps required to install the enclosures and initially configure the system. To ensure a successful installation, perform the tasks in the order presented.

Table 1 Installation checklist

Step 1. Install the controller enclosure and optional drive enclosures in the rack, and attach ear caps. (Where to find procedure: see the racking instructions poster.)
Connecting the MSA 1040 controller to the SFF drive enclosure

The SFF D2700 25-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be attached to an MSA 1040 controller enclosure using supported mini-SAS to mini-SAS cables of 0.5 m (1.64') to 2 m (6.56') length [see Figure 9 (page 21)].

Connecting the MSA 1040 controller to the LFF drive enclosure

The LFF MSA 2040 6 Gb 3.5" 12-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can likewise be attached to an MSA 1040 controller enclosure using supported mini-SAS to mini-SAS cables.
NOTE: For clarity, the schematic illustrations of controller and expansion modules shown in this section provide only relevant details such as expansion ports within the module face plate outline. For detailed illustrations showing all components, see "Controller enclosure—rear panel layout" (page 14).
Figure 10 Cabling connections between MSA 1040 controllers and LFF drive enclosures

The diagram at left (above) shows fault-tolerant cabling of a dual-controller enclosure cabled to MSA 2040 6 Gb 3.5" 12-drive enclosures; the diagram at right shows straight-through cabling.
Figure 11 Cabling connections between MSA 1040 controllers and SFF drive enclosures

The figure above provides sample diagrams reflecting cabling of MSA 1040 controller enclosures and D2700 6 Gb drive enclosures (fault-tolerant cabling at left, straight-through cabling at right).
Figure 12 Cabling connections between MSA 1040 controllers and drive enclosures of mixed model type

Drive enclosure IOM face plate key: 1 = LFF 12-drive enclosure; 2 = SFF 25-drive enclosure.

The figure above provides sample diagrams (fault-tolerant cabling at left, straight-through cabling at right).
Testing enclosure connections

NOTE: Once the power-on sequence for enclosures succeeds, the storage system is ready to be connected to hosts, as described in "Connecting the enclosure to data hosts" (page 29).

Powering on/powering off

Before powering on the enclosure for the first time:
• Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
• Connect the cables and power cords to the enclosures as explained in the quick start instructions.
Figure 13 AC power supply (power cord connector)

To power on the system:
1. Obtain a suitable AC power cord for each AC power supply that will connect to a power source.
2. Plug the power cord into the power cord connector on the back of the drive enclosure (see Figure 13). Plug the other end of the power cord into the rack power source. Wait several seconds to allow the disks to spin up. Repeat this sequence for each power supply within each drive enclosure.
Connect power cable to DC power supply

Locate two DC power cables that are compatible with your controller enclosure.

Figure 15 DC power cable featuring sectioned D-shell and lug connectors (connector pins +L/GND/−L, typical 2 places; ring/lug connectors, typical 3 places)

See Figure 15 and the illustration at left (in Figure 14) when performing the following steps:
1. Verify that the enclosure power switches are in the Off position.
• Use the SMU to shut down both controllers, as described in the online help and HP MSA 1040 SMU Reference Guide. Proceed to step 3. • Use the command-line interface to shut down both controllers, as described in the HP MSA 1040 CLI Reference Guide. 3. Press the power switches at the back of the controller enclosure to the Off position. 4. Press the power switches at the back of each drive enclosure to the Off position.
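The CLI shutdown alternative described above can be sketched as a short console session. This transcript is illustrative only (the output text is paraphrased, not verbatim); confirm the exact syntax in the HP MSA 1040 CLI Reference Guide:

```
# shutdown both
Info: Shutting down Storage Controller A...
Info: Shutting down Storage Controller B...
Success: Command completed successfully.
```

After the command completes, proceed to steps 3 and 4 and press the power switches to the Off position.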
4 Connecting hosts

Host system requirements

Data hosts connected to HP MSA 1040 arrays must meet the requirements described herein. Depending on your system configuration, the data host operating system may require multipath support. If fault tolerance is required, multipath software may be required: host-based multipath software should be used in any configuration where two logical paths between the host and any storage volume may exist at the same time.
Fibre Channel ports are used in either of two capacities: • To connect two storage systems through a Fibre Channel switch for use of Remote Snap replication. • For attachment to FC hosts directly, or through a switch used for the FC traffic. The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option requires that the host computer supports FC and optionally, multipath I/O. TIP: Use the SMU Configuration Wizard to set FC port speed.
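Besides the Configuration Wizard mentioned in the TIP, FC port speed can typically also be set from the CLI. The command below is an assumption based on comparable HP MSA CLI releases (the command name, the speed values, and their applicability to this model should all be verified in the HP MSA 1040 CLI Reference Guide before use):

```
# set host-parameters speed 8g
Success: Command completed successfully.
```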
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.

Connecting direct attach configurations

The MSA 1040 controller enclosures support up to four direct-connect server connections, two per controller module. Connect appropriate cables from the server HBAs to the controller host ports as described below, and shown in the following illustrations.
Two servers/one HBA per server/dual path

Figure 18 Connecting hosts: direct attach—two servers/one HBA per server/dual path

Connecting switch attach configurations

Dual controller configuration

Two servers/two switches

Figure 19 Connecting hosts: switch attach—two servers/two switches

Connecting remote management hosts

The management host directly manages systems out-of-band over an Ethernet network.
replication set need only be connected to the primary system. If the primary system goes offline, a connected server can access the replicated data from the secondary system. Replication configuration possibilities are many, and can be cabled—in switch attach fashion—to support MSA 1040 systems on the same network, or on different networks.
Once the MSA 1040 systems are physically cabled, see the SMU Reference Guide or online help for information about configuring, provisioning, and using the optional Remote Snap feature.

NOTE: See the HP MSA 1040 SMU Reference Guide for more information about using Remote Snap to perform replication tasks. The SMU Replication Setup Wizard guides you through replication setup.

Host ports and replication

MSA 1040 controller modules use qualified SFP options of the same type (FC or iSCSI).
The diagram below shows host interface connections and replication, with I/O and replication occurring on different networks. For optimal protection, use two switches. Connect one port from each controller module to the first switch to facilitate I/O traffic, and connect one port from each controller module to the second switch to facilitate replication.
IMPORTANT: See the “About firmware update” and “Updating firmware” topics within the MSA 1040 SMU Reference Guide before performing a firmware update.

NOTE: To locate and download the latest software and firmware updates for your product, go to http://www.hp.com/support.
5 Connecting to the controller CLI port

Device description

The MSA 1040 controllers feature a command-line interface port used to cable directly to the controller and initially set IP addresses, or perform other configuration tasks. This port employs a mini-USB Type B form factor and requires a cable, supplied with the controller, plus additional driver support, so that a server or other computer running a Linux or Windows operating system can recognize the controller enclosure as a connected device.
NOTE: For more information, see Using the Configuration Wizard > Configuring network ports within the HP MSA 1040 SMU Reference Guide.

Setting network port IP addresses using the CLI port and cable

You can set network port IP addresses manually using the command-line interface port and cable. If you have not done so already, you need to enable your system for using the command-line interface port [also see "Using the CLI port and cable—known issues on Windows" (page 40)].
• Linux customers should enter the command syntax provided in "Preparing a Linux computer before cabling to the CLI port" (page 37). • Windows customers should locate the downloaded device driver described in "Downloading a device driver for Windows computers" (page 37), and follow the instructions provided for proper installation. 4.
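For reference, the Linux preparation mentioned above usually amounts to loading the USB serial driver and opening a terminal session at the documented emulator settings (typically 115200 baud, 8 data bits, no parity, 1 stop bit — confirm against the terminal emulator connection settings table). The vendor/product IDs and device node below are illustrative assumptions; use the exact values given in "Preparing a Linux computer before cabling to the CLI port":

```
# modprobe usbserial vendor=0x210c product=0xa4a2 use_acm=1
# screen /dev/ttyUSB0 115200
```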
Network parameters, including the IP address, subnet mask, and gateway address, are displayed for each controller.

9. Use the ping command to verify network connectivity. For example:

   # ping 192.168.0.1 (gateway)
   Info: Pinging 192.168.0.1 with 4 packets.
   Success: Command completed successfully. - The remote computer responded with 4 packets.

10.
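The address-setting steps preceding the ping check might look like the following session. The IP values are examples only and the output is paraphrased; see the HP MSA 1040 CLI Reference Guide for the authoritative syntax of set network-parameters:

```
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
# show network-parameters
Network Parameters Controller A
   IP Address:  192.168.0.10
   Subnet Mask: 255.255.255.0
   Gateway:     192.168.0.1
```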
2. Connect using the USB COM port and Detect Carrier Loss option.
   a. Select Connect To > Connect using: > pick a COM port from the list.
   b. Select the Detect Carrier Loss check box.
   The Device Manager page should show “Ports (COM & LPT)” with an entry entitled “Disk Array USB Port (COMn)”—where n is your system’s COM port number.
3. Set network port IP addresses using the CLI (see procedure on page 38).

To restore a hung connection when the MC is restarted (any supported terminal emulator):
1.
6 Basic operation

Verify that you have completed the sequential “Installation Checklist” instructions in Table 1 (page 19). Once you have successfully completed steps 1 through 8 therein, you can access the management interface using your web browser to complete the system setup.

Accessing the SMU

Upon completing the hardware installation, you can access the web-based management interface—SMU (Storage Management Utility)—from the controller module to monitor and manage the storage system.
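Reaching the SMU is then a matter of pointing a browser at a controller network port address. The address below is a placeholder for the IP you configured earlier, and the sign-in step uses whatever management-level user account exists on your system:

```
https://192.168.0.10
(sign in with a management-level user account)
```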
7 Troubleshooting

USB CLI port connection

MSA 1040 controllers feature a CLI port employing a mini-USB Type B form factor. If you encounter problems communicating with the port after cabling your computer to the USB device, you may need to either download a device driver (Windows), or set appropriate parameters via an operating system command (Linux). See "Connecting to the controller CLI port" (page 37) for more information.
Use the CLI

As an alternative to using the SMU, you can run the show system command in the CLI to view the health of the system and its components. If any component has a problem, the system health will be Degraded, Fault, or Unknown, and those components will be listed as Unhealthy Components. Follow the recommended actions in the component Health Recommendations field to resolve the problem.
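A healthy system might report output along these lines. This is an abridged, illustrative transcript (the system name and field layout will differ on your system), not verbatim output:

```
# show system
System Information
------------------
System Name: Storage-1
Health: OK
Health Reason:
Success: Command completed successfully.
```

If Health is Degraded, Fault, or Unknown, the Unhealthy Components list in the same output identifies where to start.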
• Informational. A configuration or state change occurred, or a problem occurred that the system corrected. No immediate action is required.

See the HP MSA Event Descriptions Reference Guide for information about specific events, located at your HP MSA 1040 manuals page: http://www.hp.com/support/msa1040/manuals.

The event logs record all system events. It is very important to review the logs, not only to identify the fault, but also to search for events that might have caused the fault to occur.
This provides you a specific window of time (the interval between requesting the statistics) to determine if data is being written to or read from the disk. 3. If any reads or writes occur during this interval, a host is still reading from or writing to this vdisk. Continue to stop IOPS from hosts, and repeat step 1 until the Number of Reads and Number of Writes statistics are zero. See the HP MSA 1040 CLI Reference Guide for additional information, at your HP MSA 1040 manuals page: http://www.hp.
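The interval check described above can be performed by running the statistics command twice and comparing the read/write counters. The vdisk name and counter values in this sketch are illustrative; see the HP MSA 1040 CLI Reference Guide for the authoritative show vdisk-statistics syntax:

```
# show vdisk-statistics vd01
Name   Number of Reads   Number of Writes
vd01   12345             6789

(wait a short interval, then repeat)

# show vdisk-statistics vd01
Name   Number of Reads   Number of Writes
vd01   12345             6789
```

Unchanged counters across the interval indicate that no host is reading from or writing to the vdisk.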
Table 6 Is the enclosure rear panel Fault/Service Required LED amber?

Answer: No.
Possible reason: System functioning properly.
Action: No action required.

Answer: Yes (blinking).
Possible reasons: One of the following errors occurred:
• Hardware-controlled power-up error
• Cache flush error
• Cache self-refresh error
Actions:
• Restart this controller from the other controller using the SMU or the CLI.
• If the above action does not resolve the fault, remove the controller and reinsert it.
Is a connected host port Host Link Status LED off?

Answer: No.
Possible reason: System functioning properly. (See Link LED note: page 72.)
Action: No action required.

Answer: Yes.
Possible reason: The link is down.
Actions:
• Check cable connections and reseat if necessary.
• Inspect cables for damage. Replace cable if necessary.
• Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
• Verify that the switch, if any, is operating properly. If possible, test with another port.

Table 9
Is the power supply Input Power Source LED off?

Answer: No.
Possible reason: System functioning properly.
Action: No action required.

Answer: Yes.
Possible reason: The power supply is not receiving adequate power.
Actions:
• Verify that the power cord is properly connected and check the power source to which it connects.
• Check that the power supply FRU is firmly locked into position.
• In the SMU, check the event log for specific information regarding the fault; follow any Recommended Actions.
If the controller has failed or does not start, is the Cache Status LED on/blinking?

Answer: No, the Cache LED status is off, and the controller does not boot.
Action: If valid data is thought to be in Flash, see Transporting cache; otherwise, replace the controller module.

Answer: No, the Cache Status LED is off, and the controller boots.
Action: The system has flushed data to disks. If the problem persists, replace the controller module.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller does not boot.
1. Halt all I/O to the storage system as described in "Stopping I/O" (page 47).
2. Check the host link status/link activity LED. If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
   • Solid – Cache contains data yet to be written to the disk.
   • Blinking – Cache data is being written to CompactFlash.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the troubleshooting process.

1. Halt all I/O to the storage system as described in "Stopping I/O" (page 47).
2. Check the host activity LED. If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
   • Solid – Cache contains data yet to be written to the disk.
   • Blinking – Cache data is being written to CompactFlash.
Replication setup and verification

After storage systems and hosts are cabled for replication, you can use the Replication Setup Wizard in the SMU to prepare to use the Remote Snap feature. Optionally, you can use telnet to access the IP address of the controller module and access the Remote Snap feature using the CLI.
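If you use the CLI route mentioned above, the checks in this section map onto CLI commands along the following lines. The command names and parameters shown here are assumptions drawn from comparable HP MSA CLI releases — verify each one in the HP MSA 1040 CLI Reference Guide before relying on it:

```
$ telnet 192.168.0.10
(log in with a management-level account)

# show remote-systems
# verify remote-link remote-system System2
# show replication-sets
```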
Table 15

Answer: No.
Possible reason: Compatible firmware revision supporting Remote Snap is not running on each system used for replication.
Action: Perform the following on each system used for replication:
• In the Configuration View panel in the SMU, right-click the system, and select Tools > Update Firmware.

Answer: No.
Possible reason: Invalid cabling connection. (Check cabling for each system.)
Action: Verify controller enclosure cabling.
Answer: No.
Possible reason: Unable to select the replication mode (Local or Remote). (Local Replication mode replicates to a secondary volume residing in the local storage system.)
Actions:
• In the SMU, review event logs for indicators of a specific fault in a host or replication data path component. Follow any Recommended Actions.
• Verify valid links. On dual-controller systems, verify that A ports can access B ports on the partner controller, and vice versa.
Answer: No.
Possible reason: Network error occurred during in-progress replication.
Actions:
• In the SMU, review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
• In the Configuration View panel in the SMU, right-click the secondary volume, and select View > Overview to display the Replication Volume Overview table; check for replication interruption (suspended) status.

Answer: No.
Possible reason: Communication link is down.
4. Try replacing each power supply module one at a time.
5. Replace the controller modules one at a time.
6. Replace SFPs one at a time.

Sensor locations

The storage system monitors conditions at different points within each enclosure to alert you to problems. Power, cooling fan, temperature, and voltage sensors are located at key points in the enclosure.
Table 23 Controller module temperature sensor descriptions

Description                                   | Normal range | Warning range        | Critical range | Shutdown values
CPU temperature                               | 3°C–88°C     | 0°C–3°C, 88°C–90°C   | > 90°C         | 0°C, 100°C
FPGA temperature                              | 3°C–97°C     | 0°C–3°C, 97°C–100°C  | None           | 0°C, 105°C
Onboard temperature 1                         | 0°C–70°C     | None                 | None           | None
Onboard temperature 2                         | 0°C–70°C     | None                 | None           | None
Onboard temperature 3 (Capacitor temperature) | 0°C–70°C     | None                 | None           | None
CM temperature                                | 5°C–50°C     | ≤ 5°C, ≥ 50°C
8 Support and other resources

Contacting HP

For worldwide technical support information, see the HP support website: http://www.hp.
• Servers and computer networks
• Network administration
• Storage system installation and configuration
• Storage area network management
• Relevant protocols:
  • Fibre Channel (FC)
  • Internet SCSI (iSCSI)
  • Ethernet

Troubleshooting resources

See Chapter 7 for simple troubleshooting procedures pertaining to initial setup of the controller enclosure hardware.
NOTE: Provides additional information.

TIP: Provides helpful hints and shortcuts.

Rack stability

Rack stability protects personnel and equipment.

WARNING! To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time.
9 Documentation feedback

HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docs.feedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A LED descriptions

Front panel LEDs

HP MSA 1040 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF chassis, configured with 24 2.5" SFF disks, is used as a controller enclosure. The LFF chassis, configured with 12 3.5" LFF disks, is used as either a controller enclosure or drive enclosure. Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2040 6 Gb 3.5" 12-drive enclosure is a supported LFF drive enclosure.
MSA 1040 Array LFF or supported 12-drive expansion enclosure (front panel: left ear and right ear)

Note: Integers on disks indicate drive slot numbering sequence.

LED 1 — Enclosure ID: Green — On. Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 1.
Disk drive LEDs

Disk drive LED key (3.5" LFF and 2.5" SFF disk drives; SFF sled grate not shown): 1 = Fault/UID (amber/blue); 2 = Online/Activity (green)

Online/Activity (green) | Fault/UID (amber/blue)            | Description
On                      | Off                               | Normal operation. The disk drive is online, but it is not currently active.
Blinking irregularly    | Off                               | The disk drive is active and operating normally.
Off                     | Amber; blinking regularly (1 Hz)  | Offline; the disk is not being accessed.
Rear panel LEDs

Controller enclosure—rear panel layout

The diagram and table below display and identify important component items comprising the rear panel layout of the MSA 1040 controller enclosure. Diagrams and tables on the following pages further describe rear panel LED behavior for component field-replaceable units.
MSA 1040 controller module—rear panel LEDs

(Module face plate markings: CACHE, PORT 1, PORT 2, CLI, ACT, LINK, SERVICE−1, SERVICE−2; separate LED keys apply for FC and iSCSI SFPs.)

LED 1 — Host 4/8 Gb FC Link Status/Link Activity:
  Off — No link detected.
  Green — The port is connected and the link is up.
  Blinking green — The link has I/O or replication activity.
LED 2 — Host 10GbE iSCSI Link Status/Link Activity:
  Off — No link detected.
  Green — The port is connected and the link is up.
LED 1 — Not used in this example: the FC SFP is not shown in this example [see Figure 29 (page 71)].
LED 2 — Host 1 Gb iSCSI Link Status/Link Activity:
  Off — No link detected.
  Green — The port is connected and the link is up; or the link has I/O or replication activity.
When a controller is shut down or otherwise rendered inactive—its Link Status LED remains illuminated—falsely indicating that the controller can communicate with the host. Though a link exists between the host and the chip on the controller, the controller is not communicating with the chip. To reset the LED, the controller must be properly power-cycled [see "Powering on/powering off" (page 25)].

Cache Status LED details

If the LED is blinking evenly, a cache flush is in progress.
NOTE: See "Powering on/powering off" (page 25) for information on power-cycling enclosures.

MSA 2040 6 Gb 3.5" 12-drive enclosure—rear panel layout

MSA 1040 controllers support the MSA 2040 6 Gb 3.5" 12-drive enclosure. The front panel of the drive enclosure looks identical to that of an MSA 1040 Array LFF. The rear panel of the drive enclosure is shown below.
B Environmental requirements and specifications

Safety requirements

Install the system in accordance with the local safety codes and regulations at the facility site. Follow all cautions and instructions marked on the equipment. Also, refer to the documentation included with your product ship kit.

Site requirements and guidelines

The following sections provide requirements and guidelines that you must address when preparing your site for the installation.
• Site wiring must include an earth ground connection to the DC power source. Grounding must comply with local, national, or other applicable government codes and regulations.
• Power circuits and associated circuit breakers must provide sufficient power and overload protection.

Weight and placement guidelines

Refer to "Physical requirements" (page 77) for detailed size and weight specifications.
• The weight of an enclosure depends on the number and type of modules installed.
modules with an Internet Protocol (IP) address, you then use a remote management host on an Ethernet network to configure, manage, and monitor.

NOTE: Connections to this device must be made with shielded cables–grounded at both ends–with metallic RFI/EMI connector hoods, in order to maintain compliance with NEBS and FCC Rules and Regulations.
Environmental requirements

NOTE: For operating and non-operating environmental technical specifications, see QuickSpecs: http://www.hp.com/support/msa1040/QuickSpecs.

Electrical requirements

Site wiring and power requirements

Each enclosure has two power supply modules for redundancy. If full redundancy is required, use a separate power source for each module.
C Electrostatic discharge

Preventing electrostatic discharge

To prevent damaging the system, be aware of the precautions you need to follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor may damage system boards or other static-sensitive devices. This type of damage may reduce the life expectancy of the device.

To prevent electrostatic damage:
• Avoid hand contact by transporting and storing products in static-safe containers.