HPE ProLiant DL380 Gen10 Server User Guide

Abstract

This document is for the person who installs, administers, and troubleshoots servers and storage systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.
© Copyright 2017, Hewlett Packard Enterprise Development LP

Notices

The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Component identification

Front panel components

SFF front panel components

Item  Description
1     Box 1 (optional drives or universal media bay)
2     Box 2 (optional drives)
3     Box 3 (drives 1-8)
4     Serial label pull tab or optional Systems Insight Display
5     iLO service port
6     USB 3.0 port

Universal media bay components

Item  Description
1     USB 2.0 port
12-drive LFF front panel components

Item  Description
1     Drive bays

8-drive LFF model front panel components

Item  Description
1     Drives (optional)
2     LFF power switch module
3     Drive bays

LFF power switch module components
Item  Description
1     Optical disk drive
2     Serial label pull tab
3     USB 3.0 port
Item 3: NIC status LED*
    Solid green = Link to network
    Flashing green (1 Hz/cycle per sec) = Network active
    Off = No network activity

Item 4: UID button/LED*
    Solid blue = Activated
    Flashing blue:
        1 Hz/cycle per sec = Remote management or firmware upgrade in progress
        4 Hz/cycle per sec = iLO manual reboot sequence initiated
        8 Hz/cycle per sec = iLO manual reboot sequence in progress
    Off = Deactivated

*When all four LEDs described in this table flash simultaneously, a power fault has occurred.
Item 1: Health LED*
    Solid green = Normal
    Flashing green (1 Hz/cycle per sec) = iLO is rebooting
    Flashing amber = System degraded
    Flashing red (1 Hz/cycle per sec) = System critical**

Item 2: Power On/Standby button and system power LED*
    Solid green = System on
    Flashing green (1 Hz/cycle per sec) = Performing power on sequence
    Solid amber = System in standby
    Off = No power present†

Item 3: NIC status LED*
    Solid green = Link to network
    Flashing green (1 Hz/cycle per sec) = Network active
    Off = No network activity
LFF power switch module LEDs and button

Item 1: UID button/LED*
    Solid blue = Activated
    Flashing blue:
        1 Hz/cycle per sec = Remote management or firmware upgrade in progress
        4 Hz/cycle per sec = iLO manual reboot sequence initiated
        8 Hz/cycle per sec = iLO manual reboot sequence in progress
    Off = Deactivated

Item 2: Health LED*
    Solid green = Normal
    Flashing green (1 Hz/cycle per sec) = iLO is rebooting
    Flashing amber = System degraded
    Flashing red (1 Hz/cycle per sec) = System critical**
**If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health status.

†Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has occurred, or the power button cable is disconnected.

UID button functionality

The UID button can be used to display the HPE ProLiant Pre-boot Health Summary when the server will not power on.
Processor LEDs
    Off = Normal
    Amber = Failed processor

DIMM LEDs
    Off = Normal
    Amber = Failed DIMM or configuration issue

Fan LEDs
    Off = Normal
    Amber = Failed fan or missing fan

NIC LEDs
    Off = No link to network
    Solid green = Network link
    Flashing green = Network link with activity

If power is off, the front panel LED is not active. For status, see Rear panel LEDs on page 18.
Processor (amber); Health LED: red; System power LED: amber
    One or more of the following conditions may exist:
    • Processor in socket X has failed.
    • Processor X is not installed in the socket.
    • Processor X is unsupported.
    • ROM detects a failed processor during POST.

Processor (amber); Health LED: amber; System power LED: green
    Processor in socket X is in a prefailure condition.

DIMM (amber); Health LED: red; System power LED: green
    One or more DIMMs have failed.
Power cap (off); Health LED: —; System power LED: amber
    Standby

Power cap (green); Health LED: —; System power LED: flashing green
    Waiting for power

Power cap (green); Health LED: —; System power LED: green
    Power is available.

Power cap (flashing amber); Health LED: —; System power LED: amber
    Power is not available.

IMPORTANT: If more than one DIMM slot LED is illuminated, further troubleshooting is required. Test each bank of DIMMs by removing all other DIMMs.
Rear panel LEDs

Item 1: UID LED
    Off = Deactivated
    Solid blue = Activated
    Flashing blue = System being managed remotely

Item 2: Link LED
    Off = No network link
    Green = Network link

Item 3: Activity LED
    Off = No network activity
    Solid green = Link to network
    Flashing green = Network activity

Item 4: Power supply LEDs
    Off = System is off or power supply has failed.
System board components

Item  Description
1     FlexibleLOM connector
2     System maintenance switch
3     Primary PCIe riser connector
4     Front display port/USB 2.0 connector
Item  Description
5     x4 SATA port 1
6     x4 SATA port 2
7     x2 SATA port 3
8     x1 SATA port 4
9     Optical disk drive/SATA port 5
10    Front power/USB 3.0 connector
11    Drive backplane power connectors
12    Smart Storage Battery connector
13    Chassis intrusion detection connector
14    Drive backplane power connector
15    Micro SD card slot
16    Dual internal USB 3.0 connector
Position  Default  Function
S7        —        Reserved
S8        —        Reserved
S9        —        Reserved
S10       —        Reserved
S11       —        Reserved
S12       —        Reserved

1 You can access the redundant ROM by setting S1, S5, and S6 to On.
2 When the system maintenance switch position 6 is set to the On position, the system is prepared to restore all configuration settings to their manufacturing defaults. When the system maintenance switch position 6 is set to the On position and Secure Boot is enabled, some configurations cannot be restored.
SAS/SATA drive components and LEDs

Item 1: Locate
    Solid blue = The drive is being identified by a host application.

Item 2: Activity ring LED
    Rotating green = Drive activity.
    Off = No drive activity.

Item 3: Do not remove LED
    Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
    Off = Removing the drive does not cause a logical drive to fail.

Item 4: Drive status LED
NVMe drive components and LEDs Item Description 1 Release lever 2 Activity ring 3 Do Not Remove LED1 4 Request to Remove NVMe Drive button 1Do not remove an NVMe SSD from the drive bay while the Do Not Remove button LED is flashing. The Do Not Remove button LED flashes to indicate the device is still in use. Removal of the NVMe SSD before the device has completed and ceased signal/traffic flow can cause loss of data.
Item 3: Do not remove LED
    Off = OK to remove the drive. Removing the drive does not cause a logical drive to fail.
    Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.

Item 4: Drive status LED

Item 5: Adapter ejection release latch and handle

Fan bay numbering
Drive box identification

Front boxes

Item  Description
1     Box 1
2     Box 2
3     Box 3

Item  Description
1     Box 1
2     Box 2
3     Box 3

Rear boxes

Item  Description
1     Box 4
2     Box 5
3     Box 6
Item  Description
1     Box 4
2     Box 6

Midplane box (LFF only)

Item  Description
1     Box 7

Drive bay numbering

Drive bay numbering depends on how the drive backplanes are connected:
• To a controller
    ◦ Embedded controllers use the onboard SATA ports.
    ◦ Type-a controllers install to the type-a smart array connector.
    ◦ Type-p controllers install to a PCIe riser.
Drive bay numbering: SAS expander

Drive numbering through a SAS Expander is continuous:
• SAS expander port 1 always connects to port 1 of the controller.
• SAS expander port 2 always connects to port 2 of the controller.
• SAS expander port 3 = drive numbers 1-4.
• SAS expander port 4 = drive numbers 5-8.
• SAS expander port 5 = drive numbers 9-12.
• SAS expander port 6 = drive numbers 13-16.
• SAS expander port 7 = drive numbers 17-20.
• SAS expander port 8 = drive numbers 21-24.
When any stacked 2SFF drive cage is connected to the SAS expander, the drive numbering skips the second number to allow uFF drive bay numbering on page 31. For example, when a rear 2SFF drive cage is connected to SAS expander port 9, then the drive numbers are 25 and 27. When the front 24SFF bays are populated, any installed rear 2SFF drives are always 25 and 27. If a 2SFF drive cage is connected to SAS expander port 3, then the drive numbers are 1 and 3.
Front 12LFF + Midplane 4LFF + All rear 2SFF:

Drive bay numbering: NVMe drives

If the server is populated with NVMe drives and NVMe risers:
uFF drive bay numbering

There are two uFF drives in each drive carrier.

If the drives are connected to a controller:
• The left bay = the default bay number of the server
• The right bay = the default bay number of the server + 100

If the drives are connected to a SAS expander, the numbering follows the SAS expander port mapping. For example:
• If the drives are connected to port 3 of the SAS expander, then the uFF drives are 1-4.
• If the drives are connected to port 9 of the SAS expander, then the uFF drives are 25-28.
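The numbering rules above reduce to simple arithmetic. The following Python sketch only restates them for illustration; the function names are hypothetical and not part of any HPE tool.

    # Restates the uFF drive bay numbering rules described above (illustrative only).

    def uff_numbers_via_controller(default_bay):
        """Carrier connected to a controller: the left bay keeps the default bay
        number of the server, the right bay is that number + 100."""
        return default_bay, default_bay + 100

    def uff_numbers_via_sas_expander(port):
        """Carrier connected to SAS expander port 3-9: each port maps to a block
        of four consecutive drive numbers (port 3 -> 1-4, port 9 -> 25-28)."""
        start = (port - 3) * 4 + 1
        return list(range(start, start + 4))

    print(uff_numbers_via_controller(1))      # (1, 101)
    print(uff_numbers_via_sas_expander(3))    # [1, 2, 3, 4]
    print(uff_numbers_via_sas_expander(9))    # [25, 26, 27, 28]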
Operations

Powering up the server

To power up the server, press the Power On/Standby button.

Power down the server

Before powering down the server for any upgrade or maintenance procedures, perform a backup of critical server data and programs.

IMPORTANT: When the server is in standby mode, auxiliary power is still being provided to the system.

To power down the server, use one of the following methods:
• Press and release the Power On/Standby button.
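When the server is managed remotely, the power-down can also be requested through iLO instead of the physical button. The following Python sketch is illustrative only: the iLO address, the credentials, and the use of the standard Redfish path /redfish/v1/Systems/1 are assumptions to adapt to your environment, and a graceful shutdown still depends on the operating system responding to the request.

    # Illustrative sketch: request a power-down through the iLO RESTful (Redfish) API.
    # ILO_HOST and AUTH are placeholders; verify=False skips TLS verification and is
    # only acceptable in a lab.
    import requests

    ILO_HOST = "https://ilo.example.net"   # hypothetical iLO address
    AUTH = ("admin", "password")           # replace with a real iLO account

    def set_power(reset_type="GracefulShutdown"):
        """POST a ComputerSystem.Reset action, for example "GracefulShutdown" or "ForceOff"."""
        url = f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/"
        resp = requests.post(url, json={"ResetType": reset_type},
                             auth=AUTH, verify=False, timeout=30)
        resp.raise_for_status()

    set_power("GracefulShutdown")   # roughly equivalent to a momentary button press

As with the physical methods, verify that the system power LED reports standby before disconnecting power cords or removing the server from the rack.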
3. After performing the installation or maintenance procedure, slide the server back into the rack, and then press the server firmly into the rack to secure it in place. WARNING: To reduce the risk of personal injury, be careful when pressing the server rail-release latches and sliding the server into the rack. The sliding rails could pinch your fingers.
Removing the server from the rack To remove the server from a Hewlett Packard Enterprise, Compaq-branded, Telco, or third-party rack: Procedure 1. Power down the server. 2. Extend the server from the rack. 3. Disconnect the cabling and remove the server from the rack. For more information, see the documentation that ships with the rack mounting option. 4. Place the server on a sturdy, level surface. Installing the server into the rack Procedure 1.
6. Secure the cables to the cable management arm. IMPORTANT: When using cable management arm components, be sure to leave enough slack in each of the cables to prevent damage to the cables when the server is extended from the rack. 7. Connect the power cord to the AC power source. WARNING: To reduce the risk of electric shock or damage to the equipment: • • • • Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
Remove the access panel WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them. CAUTION: Do not operate the server for long periods with the access panel open or removed. Operating the server in this manner results in improper airflow and improper cooling that can lead to thermal damage. To remove the component: Procedure 1. Power down the server. 2. Extend the server from the rack. 3.
Installing the fan cage CAUTION: Do not operate the server for long periods with the access panel open or removed. Operating the server in this manner results in improper airflow and improper cooling that can lead to thermal damage. IMPORTANT: For optimum cooling, install fans in all primary fan locations. Removing the air baffle or midplane drive cage CAUTION: Do not detach the cable that connects the battery pack to the cache module.
CAUTION: For proper cooling, do not operate the server without the access panel, baffles, expansion slot covers, or blanks installed. If the server supports hot-plug components, minimize the amount of time the access panel is open. Procedure 1. Power down the server. 2. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. 3. Do one of the following: • Extend the server from the rack. • Remove the server from the rack. 4.
Installing the air baffle Procedure 1. Observe the following alerts. CAUTION: For proper cooling, do not operate the server without the access panel, baffles, expansion slot covers, or blanks installed. If the server supports hot-plug components, minimize the amount of time the access panel is open. CAUTION: Do not detach the cable that connects the battery pack to the cache module. Detaching the cable causes any unsaved data in the cache module to be lost. 2. Install the air baffle.
Removing a riser cage CAUTION: To prevent damage to the server or expansion boards, power down the server and remove all AC power cords before removing or installing the PCI riser cage. Procedure 1. Power down the server. 2. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. 3. Do one of the following: • Extend the server from the rack. • Remove the server from the rack. 4. Remove the access panel. 5. Remove the riser cage.
Removing a riser slot blank CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all PCI slots have either an expansion slot cover or an expansion board installed. Procedure 1. Power down the server. 2. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. 3. Do one of the following: • Extend the server from the rack. • Remove the server from the rack . 4. Remove the access panel. 5. Remove the riser cage.
Releasing the cable management arm Release the cable management arm and then swing the arm away from the rack. Accessing the Systems Insight Display The Systems Insight Display is only supported on SFF models. To access a Systems Insight Display, use the following procedure. Procedure 1. Press and release the panel. 2. After the display fully ejects, rotate the display to view the LEDs.
Setup HPE support services Delivered by experienced, certified engineers, HPE support services help you keep your servers up and running with support packages tailored specifically for HPE ProLiant systems. HPE support services let you integrate both hardware and software support into a single package. A number of service level options are available to meet your business and IT needs.
• Leave a minimum clearance of 63.5 cm (25 in) in front of the rack.
• Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
• Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack or row of racks.

Hewlett Packard Enterprise servers draw in cool air through the front door and expel warm air through the rear door.
CAUTION: To reduce the risk of damage to the equipment when installing third-party options: • • Do not permit optional equipment to impede airflow around the server or to increase the internal rack temperature beyond the maximum allowable limits. Do not exceed the manufacturer’s TMRA. Power requirements Installation of this equipment must comply with local and regional electrical regulations governing the installation of information technology equipment by licensed electricians.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them. WARNING: To reduce the risk of personal injury, electric shock, or damage to the equipment, remove the power cord to remove power from the server. The front panel Power On/Standby button does not completely shut off system power. Portions of the power supply and some internal circuitry remain active until AC/DC power is removed.
• • Avoid touching pins, leads, or circuitry. Always be properly grounded when touching a static-sensitive component or assembly. Use one or more of the following methods when handling or installing electrostatic-sensitive parts: ◦ ◦ ◦ ◦ Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
• • For the latest information on supported operating systems, see the Hewlett Packard Enterprise website. The server does not ship with OS media. All system software and firmware is preloaded on the server. Registering the server To experience quicker service and more efficient support, register the product at the Hewlett Packard Enterprise Product Registration website.
Hardware options installation Product QuickSpecs For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs). Introduction If more than one option is being installed, read the installation instructions for all the hardware options and identify similar steps to streamline the installation process.
Power supply options Hot-plug power supply calculations For hot-plug power supply specifications and calculators to determine electrical and heat loading for the server, see the Hewlett Packard Enterprise Power Advisor website. Installing a redundant hot-plug power supply CAUTION: All power supplies installed in the server must have the same output power capacity. Verify that all power supplies have the same part number and label color.
3. Insert the power supply into the power supply bay until it clicks into place. 4. Connect the power cord to the power supply. 5. Route the power cord. Use the cable management arm and best practices when routing cords and cables. 6. Connect the power cord to the power source. 7. Observe the power supply LED. Drive options Drive guidelines Depending on the configuration, the server supports SAS, SATA, and NVMe drives.
• Do not remove an NVMe SSD from the drive bay while the Do Not Remove button LED is flashing. The Do Not Remove button LED flashes to indicate that the device is still in use. Removal of the NVMe SSD before the device has completed and ceased signal/traffic flow can cause loss of data. Drives with the same capacity provide the greatest storage space efficiency when grouped into the same drive array.
4. Observe the LED status of the drive. Installing an NVMe drive CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all drive and device bays are populated with either a component or a blank. Procedure 1. Remove the drive blank. 2. Prepare the drive. 3. Install the drive.
4. Observe the LED status of the drive. Installing a uFF drive and SCM drive carrier IMPORTANT: Not all drive bays support the drive carrier. To find supported bays, see the server QuickSpecs. Procedure 1. If needed, install the uFF drive into the drive carrier. 2. Remove the drive blank. 3. Install the drives. Push firmly near the ejection handle until the latching spring engages with the drive bay.
4. Power on the server. To configure the drive, use HPE Smart Storage Administrator. Installing an M.2 drive This procedure is for replacing M.2 drives located on an expansion card, riser, or the system board only. Do not use this procedure to replace uFF drives. Prerequisites Before installing this option, be sure that you have the following: • • The components included with the hardware option kit T-10 Torx screwdriver Procedure 1. Power down the server . 2. Remove all power: a.
The installation is complete.

Fan options

CAUTION: To avoid damage to server components, fan blanks must be installed in fan bays 1 and 2 in a single-processor configuration.

CAUTION: To avoid damage to the equipment, do not operate the server for extended periods of time if the server does not have the optimal number of fans installed. Although the server might boot, Hewlett Packard Enterprise does not recommend operating the server without the required fans installed and operating.
Installing high-performance fans

CAUTION: To prevent damage to the server, ensure that all DIMM latches are closed and locked before installing the fans.

CAUTION: Do not operate the server for long periods with the access panel open or removed. Operating the server in this manner results in improper airflow and improper cooling that can lead to thermal damage.

Procedure
1. Extend the server from the rack.
2. Remove the access panel.
3. If installed, remove all fan blanks.
4. Remove the air baffle.
5.
6. Install high-performance fans in all fan bays. 7. Install the air baffle. 8. Install the access panel. 9. Install the server into the rack. Memory options IMPORTANT: This server does not support mixing LRDIMMs and RDIMMs. Attempting to mix any combination of these DIMMs can cause the server to halt during BIOS initialization. All memory installed in the server must be of the same type.
HPE Smart Memory speed information For more information about memory speed information, see the Hewlett Packard Enterprise website (https:// www.hpe.com/docs/memory-speed-table). DIMM label identification To determine DIMM characteristics, see the label attached to the DIMM. The information in this section helps you to use the label to locate specific information about the DIMM.
Item Description Definition 6 CAS latency P = CAS 15-15-15 T = CAS 17-17-17 U = CAS 20-18-18 V = CAS 19-19-19 (for RDIMM, LRDIMM) V = CAS 22-19-19 (for 3DS TSVLRDIMM) 7 DIMM type R = RDIMM (registered) L = LRDIMM (load reduced) E = Unbuffered ECC (UDIMM) For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
7. 8. 9. 10. 11. Install the access panel. Install the server in the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. Use the BIOS/Platform Configuration (RBSU) in the UEFI System Utilities to configure the memory mode. For more information about LEDs and troubleshooting failed DIMMs, see "Systems Insight Display combined LED descriptions.
3. Do one of the following: • Extend the server from the rack. • Remove the server from the rack. 4. Remove the access panel. 5. Do one of the following: • Remove the air baffle. • If installed, remove the 4LFF midplane drive cage. 6. Do one of the following: • For Type-a Smart Array controllers, install the controller into the Smart Array connector. • For Type-p Smart Array controllers, install the controller into an expansion slot. 7. Cable the controller. The installation is complete.
7. Remove the bay blank. 8. Route the USB and video cables through the opening. 9. If installing a two-bay SFF front drive cage, install the drive cage. 10. Install the universal media bay. 11. (Optional) Install the optical disk drive.
12. 13. 14. 15. 16. 17. 18. 19. Connect the cables. Install the fan cage. Install the air baffle. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete. Drive cage options Installing a front 8NVMe SSD Express Bay drive cage Observe the following: • • • • The drive cage can be installed in any box. This procedure covers installing the drive cage in box 1.
CAUTION: To prevent damage to electrical components, properly ground the server before beginning any installation procedure. Improper grounding can cause ESD. 2. 3. Power down the server . Do one of the following: 4. 5. 6. 7. • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Remove the air baffle. Remove the fan cage. Remove the blank. 8. Install the drive cage: a. Remove all drives and drive blanks. b. Install the drive cage. 9.
11. Connect the data cables from the drive backplane to the NVMe riser. 12. Install drives or drive blanks. The installation is complete. Installing a front 6SFF SAS/SATA + 2NVMe Premium drive cage The drive cage can be installed in any box. This procedure covers installing the drive cage in box 1. Prerequisites A storage controller and high-performance fans are required when installing this drive cage. Procedure 1. Observe the following alerts.
9. 10. 11. 12. Connect the power cable. Install a storage controller. Connect the data cables from the drive backplane to the controller. Install drives or drive blanks. The installation is complete.
◦ If the 2SFF drive cage is not installed, then install airflow labels as shown. ◦ If a 2SFF drive cage is installed, then install the airflow labels as shown. Installing a front 8SFF SAS/SATA drive cage in box 1 Prerequisites Before installing this option, be sure that you have the following: • • T-10 Torx screwdriver The components included with the hardware option kit Procedure 1. 2. 3. Power down the server. Remove all power: a. Disconnect each power cord from the power source. b.
4. 5. 6. 7. • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Remove the air baffle. Remove the fan cage. Remove the bay blank. 8. Install the 8SFF front drive cage option. 9. 10. 11. 12. 13. 14. 15. 16. Connect the power and data cables. Install the fan cage. Install the air baffle. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server.
Installing a front 8SFF SAS/SATA drive cage in box 2 Procedure 1. 2. 3. Power down the server. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. Do one of the following: 4. 5. 6. 7. • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Remove the air baffle. Remove the fan cage. Remove the bay blank. 8. Install the 8SFF front drive cage option. 9. Connect the power and data cables.
10. 11. 12. 13. 14. 15. Install the fan cage. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete. Installing a front 2SFF NVMe/SAS/SATA Premium drive cage Prerequisites Before installing this option, be sure that you have the following: • • • T-10 Torx screwdriver The components included with the hardware option kit This installation requires a universal media bay.
7. Remove the SFF drive blank from the universal media bay. 8. Install the drive cage into the universal media bay.
9. Install the optical disk drive tray. 10. Install the universal media bay.
11. 12. 13. 14. 15. 16. Connect the power and data cables. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete. Installing a midplane 4LFF SAS/SATA drive cage Observe the following: • • • A 1U heatsink is required for each processor when installing this option. If you have a TPM, install it prior to this option.
6. Remove all riser cages. 7. 8. 9. 10. Connect the power cable to the drive backplane power connector on the system board. If connecting the data cable to the system board or a controller, connect the data cable. Prepare the drive cage for installation by lifting the latches on the drive cage. Install the drive cage: CAUTION: Do not drop the drive cage on the system board. Dropping the drive cage on the system board might damage the system or components.
11. Install drives or drive blanks. 12. Push down on the latches to lower the drive cage into place. 13. Connect the power and data cables to the drive backplane. The installation is complete. Installing a rear 2SFF SAS/SATA drive cage in the primary or secondary riser Prerequisites Before installing this option, be sure that you have the following: • • • • T-10 Torx screwdriver The components included with the hardware option kit The front drive bays are fully populated with 12 LFF or 24 SFF drives.
Procedure 1. 2. 3. 4. 5. Power down the server. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. Do one of the following: • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Do one of the following: For primary bays, remove the riser cage. For secondary bays, remove the rear wall blank. 6. 7. 78 Install a SAS expander or other expansion card, if needed. Install the drive cage.
8. 9. 10. 11. 12. 13. 14. Cable the drive backplane. Install drives or drive blanks. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete.
Remove the secondary wall blank. 6. 80 Remove the tertiary wall blank.
7. Install the drive cage compatible rear wall. 8. Install the drive cage.
9. 10. 11. 12. 13. 14. 15. 16. Install drives or drive blanks. Install the secondary rear wall or a riser cage. Cable the drive backplane. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete. Installing a rear 3LFF SAS/SATA drive cage Before installing this option, the front bays must be fully populated with 12 LFF drives.
6. Remove the rear wall blank. 7. Install the three-bay LFF rear drive cage option.
8. 9. 10. 11. 12. 13. 14. Install drives or drive blanks. Connect the power and data cables. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete.
7. Install the riser.
8. Install any expansion boards, if needed.
9. Connect data cables to the riser or expansion board, if needed.
10. Install the riser cage.
11. If needed, connect data cables to drive backplanes.

The installation is complete.

Installing tertiary risers

Prerequisites

Before installing this option, be sure that you have the following:
• The components included with the hardware option kit
• T-10 Torx screwdriver
• A tertiary riser cage is required to install this option.

Procedure
1.
3. • Disconnect each power cord from the power source. • Disconnect each power cord from the server. Do one of the following: 4. 5. 6. • Extend the server from the rack. • Remove the server from the rack. Remove the access panel. Remove the riser cage. Install the riser. 7. 8. 9. 10. Install any expansion cards, if needed. Connect any data cables to riser or expansion boards. Install the tertiary riser cage. Connect cables to drive backplanes, if needed. The installation is complete.
• Extend the server from the rack. • Remove the server from the rack. 5. Remove the access panel. 6. Remove the rear wall blank. 7. Install any expansion boards, if needed. 8. Install the riser cage: The installation is complete.
Procedure 1. Observe the following alert. CAUTION: To prevent damage to the server or expansion boards, power down the server and remove all AC power cords before removing or installing the PCI riser cage. 2. Power down the server. 3. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. 4. Do one of the following: • Extend the server from the rack. • Remove the server from the rack. 5. Remove the access panel. 6. Remove the rear wall blanks.
8. Install any expansion boards, if needed 9. Install the tertiary riser cage: The installation is complete. Installing the 2NVMe slimSAS riser option Prerequisites Before installing this option, be sure that you have the following: • • The components included with the hardware option kit T-10 Torx screwdriver Procedure 1. Power down the server . 2.
• Disconnect each power cord from the power source. • Disconnect each power cord from the server. 3. Do one of the following: 4. 5. 6. 7. • Extend the server from the rack. • Remove the server from the rack. Remove the access panel. Using the labels on the cable, connect the cables to the riser. Install the tertiary riser cage. Connect the cable to the drive backplane. The installation is complete.
To install the riser in the secondary position, install the secondary riser cage. 7. Connect data cables to the drive backplane. Expansion slots Supported PCIe form factors All slots support full-height expansion cards. Use the following information to find supported lengths for each slot.
Primary riser connector

PCIe slot and card length                   3-slot riser*                2-slot riser (optional)      2-slot riser (optional)
Slot 1 - Full-length/Full-height (FL/FH)    PCIe3 x8 (8, 4, 2, 1)        —                            PCIe3 x16 (16, 8, 4, 2, 1)
Slot 2 - Half-length/Full-height (HL/FH)    PCIe3 x16 (16, 8, 4, 2, 1)   PCIe3 x16 (16, 8, 4, 2, 1)   PCIe3 x16 (16, 8, 4, 2, 1)
Slot 3 - Half-length/Full-height (HL/FH)    PCIe3 x8 (8, 4, 2, 1)        PCIe3 x16 (16, 8, 4, 2, 1)   —

*The server ships with one PCIe3 riser cage installed in the primary riser cage connector.
Procedure 1. 2. 3. Power down the server. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. Do one of the following: 4. 5. • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Remove the riser cage. 6. Identify and then remove the PCIe blank from the riser cage. 7. Install the expansion board.
8. 9. If internal cables are required for the expansion board, connect the cables. Install the riser cage. 10. 11. 12. 13. 14. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete. Installing a 12G SAS Expander Card • • • • • For 24SFF configurations, install 8SFF front drive cages in boxes 1 and 2.
• • • The components included with the hardware option kit Storage cables for each drive box A storage controller Procedure 1. 2. 3. Power down the server. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. Do one of the following: 4. 5. 6. 7. • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Remove the air baffle. Remove the fan cage. Remove the riser cage. 8. 9.
13. Connect cables from the 12G SAS expander to the drive backplanes. A standard configuration is shown. For additional cabling diagrams, see "Cabling diagrams on page 118". 14. 15. 16. 17. 18. 19. 20. Install the fan cage. Install the air baffle. Install the access panel. Install the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server . The installation is complete.
Prerequisites Before installing this option, be sure that you have the following: • • • • The components included with the hardware option kit T-30 Torx screwdriver T-10 Torx screwdriver High-performance heatsinks must be installed with this option. Procedure 1. Observe the following alert. CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all PCIe slots have either an expansion slot cover or an expansion board installed. 2. 3. 4. Power down the server.
9. Install high-performance heatsinks.
10. Install the air baffle.
11. Remove the rear wall blank. To install a tertiary GPU, remove the tertiary wall blank.
12. Remove the PCIe blank.
13. Install the GPU into the riser.
14. Connect the power cable from the GPU to the riser. 15. Slide the retention clips to the unlocked position. 16. Install the riser cage.
17. Slide the retention clips to the locked position. The installation is complete. Installing an intrusion detection switch Prerequisites Before installing this option, be sure that you have the following: • The components included with the hardware option kit Procedure 1. Power down the server. 2. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. 3.
• Extend the server from the rack. • Remove the server from the rack . 4. Remove the access panel. 5. Remove the air baffle. 6. Install the intrusion detection switch. Installing a Smart Storage Battery Prerequisites Before installing this option, be sure that you have the following: The components included with the hardware option kit Procedure 1. 2. Power down the server . Do one of the following: 3. • Disconnect each power cord from the power source. • Disconnect each power cord from the server.
7. Install the cable.
8. Install the fan cage.
9. Install the air baffle.
10. Install the access panel.
11. Slide the server into the rack.
12. Connect each power cord to the server.
13. Connect each power cord to the power source.
14. Power up the server.

The installation is complete.

Installing a rear serial port interface

If a tertiary riser cage is installed, you can install the serial port into slot 6.
Procedure 1. 2. Power down the server . Do one of the following: 3. • Disconnect each power cord from the power source. • Disconnect each power cord from the server. Do one of the following: 4. 5. • Extend the server from the rack. • Remove the server from the rack. Remove the access panel. Do one of the following: • If a tertiary riser cage is not installed, install the serial port. • Be sure to remove the backing from the double-sided tape.
Install the serial port. 6. 7. 8. 9. 10. Install the access panel. Install the server in the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server . The installation is complete.
Procedure 1. 2. 3. Power down the server. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. Do one of the following: 4. 5. • Extend the server from the rack. • Remove the server from the rack . Remove the access panel. Do one of the following: 6. 7. 8. • Remove the air baffle. • If installed on LFF models, remove the midplane drive cage. Remove the fan cage. Disconnect the cable from the front power/USB 3.0 connector.
10. 11. 12. 13. 14. 15. 16. 17. Connect the SID module cable to the front power/USB 3.0 connector. Install the fan cage. Install the air baffle. Install the access panel. Slide the server into the rack. Connect each power cord to the server. Connect each power cord to the power source. Power up the server. The installation is complete.
7. Install the FlexibleLOM adapter: 8. 9. 10. 11. 12. 13. 14. Install the riser cage. Install the access panel. Slide the server into the rack. Connect the LAN segment cables. Connect each power cord to the server. Connect each power cord to the power source. Power up the server . The installation is complete.
Installing a 1U or high-performance heatsink This procedure shows a standard heatsink as an example. The installation process is the same for all heatsinks. HPE recommends identifying the processor, heatsink, and socket components before performing this procedure. Prerequisites Before installing this option, be sure that you have the following: • • The components included with the hardware option kit T-30 Torx screwdriver Procedure 1. Observe the following alerts.
c. Lift the processor heatsink assembly and move it away from the system board.
d. Turn the assembly over and place it on a work surface with the processor facing up.
e. Install the dust cover.
8. Separate the processor from the heatsink:
a. Locate the release slot between the frame and heatsink. The release slot is across from the Pin 1 indicator and is labeled with a screwdriver.
b. Insert a 1/4" flathead screwdriver into the release slot.
11. Install the processor heatsink assembly. The installation is complete. Installing a processor Before performing this procedure, HPE recommends identifying the processor-heatsink module components. Prerequisites Before installing this option, be sure that you have the following: • • The components included with the hardware option kit T-30 Torx screwdriver Procedure 1. Observe the following alerts. CAUTION: When handling the heatsink, always hold it along the top and bottom of the fins.
CAUTION: THE CONTACTS ARE VERY FRAGILE AND EASILY DAMAGED. To avoid damage to the socket or processor, do not touch the contacts. 2. Power down the server. 3. Remove all power: a. Disconnect each power cord from the power source. b. Disconnect each power cord from the server. 4. Do one of the following: • Extend the server from the rack. • Remove the server from the rack. 5. Remove the access panel. 6. Do one of the following: • Remove the air baffle. • If installed, remove the 4LFF midplane drive cage. 7.
The installation is complete. HPE Trusted Platform Module 2.0 Gen10 Option Overview Use these instructions to install and enable an HPE TPM 2.0 Gen10 Kit in a supported server. This option is not supported on Gen9 and earlier servers. This procedure includes three sections: 1. Installing the Trusted Platform Module board. 2. Enabling the Trusted Platform Module. 3. Retaining the recovery key/password. HPE TPM 2.
IMPORTANT: In UEFI Boot Mode, the HPE TPM 2.0 Gen10 Kit can be configured to operate as TPM 2.0 (default) or TPM 1.2 on a supported server. In Legacy Boot Mode, the configuration can be changed between TPM 1.2 and TPM 2.0, but only TPM 1.2 operation is supported. HPE Trusted Platform Module 2.0 Guidelines CAUTION: Always observe the guidelines in this document. Failure to follow these guidelines can cause hardware damage or halt data access.
WARNING: To reduce the risk of personal injury, electric shock, or damage to the equipment, remove power from the server by removing the power cord. The front panel Power On/Standby button does not shut off system power. Portions of the power supply and some internal circuitry remain active until AC power is removed. WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them. 2. Update the system ROM. 5.
3. Install the TPM cover: a. Line up the tabs on the cover with the openings on either side of the TPM connector. b. To snap the cover into place, firmly press straight down on the middle of the cover. 4. Proceed to Preparing the server for operation on page 115. Preparing the server for operation Procedure 1. Install any options or cables previously removed to access the TPM connector. 2. Do one of the following: 3. 4. 5. 6. • Install the air baffle. • Install the 4LFF midplane drive cage.
Enabling the Trusted Platform Module When enabling the Trusted Platform module, observe the following guidelines: • • • By default, the Trusted Platform Module is enabled as TPM 2.0 when the server is powered on after installing it. In UEFI Boot Mode, the Trusted Platform Module can be configured to operate as TPM 2.0 or TPM 1.2. In Legacy Boot Mode, the Trusted Platform Module configuration can be changed between TPM 1.2 and TPM 2.0, but only TPM 1.2 operation is supported.
The server reboots a second time without user input. During this reboot, the TPM setting becomes effective. 8. Enable TPM functionality in the OS, such as Microsoft Windows BitLocker or measured boot. For more information, see the Microsoft website. Retaining the recovery key/password The recovery key/password is generated during BitLocker setup, and can be saved and printed after BitLocker is enabled. When using BitLocker, always retain the recovery key/password.
Cabling

HPE ProLiant Gen10 DL Servers Storage Cabling Guidelines

When installing cables, observe the following:
• All ports are labeled:
    ◦ System board ports
    ◦ Controller ports
    ◦ 12G SAS Expander ports
• Most data cables have labels near each connector with destination port information.
• Some data cables are pre-bent. Do not unbend or manipulate the cables.
• Before connecting a cable to a port, lay the cable in place to verify the length of the cable.
Option kit Cable part number* From To Power cable part number Rear 2SFF SAS/SATA riser drive cage 869823-001 3 Drive backplane System board 869806-001 6 SAS Expander Controller Rear 3LFF SAS/SATA drive cage 869823-001 3 Drive backplane System board 869810-001 6 SAS Expander Controller 12G SAS Expander card 869802-001 5 SAS Expander Controller - Drive backplane System board - 869803-001 5 SAS/SATA 3-position cable 869830-001 3 869816-001 3 * To order spare cables, use the following
Option kit Cable part number From To Power cable part number HPE DL380 Gen10 2-Port Slim SAS Riser, Tertiary PCIe 869812-001 1 PCIe riser Backplane - PCIe riser Backplane - PCIe riser Backplane - 869812-001 1 HPE DL380 Gen10 4-Port Slim SAS Riser 869811-001 HPE DL380 Gen10 1-Port Slim SAS Riser 869812-001 1 776402-001 1 NVMe Direct Attach Kit (875092-001) 2 NVMe SFF Riser Kit (875091-001) 3 Power cables kit (875096-001) Table 3: GPU power Option kit Cable part number From To
Option 2: SAS Expander

Option 3 (not shown): A controller

Cable routing: Front 2SFF drive option for LFF

Option 1: System board
Option 2: Controller

Option 3 (not shown): SAS Expander

Cable routing: Front 2SFF drive options (3 position cable)

SFF models
LFF models

Cable routing: Front 8SFF drive options

Box 1 to SAS Expander
Box 2 to SAS Expander

All boxes
Cable routing: Front 8SFF NVMe/SAS premium drive option The backplane shown is in box 3.
Box 2

Box 3
Cable routing: Front 2SFF NVMe drive option for SFF
Cable routing: Front 2SFF NVMe drive option for LFF

Cable routing: Midplane 4LFF drive option
Cable routing: Rear 3LFF drive option

Cable routing: Rear 2SFF drive options

Rear 2SFF drive option to a SAS expander, both in the primary slot

Rear 2SFF drive option in the secondary slot to a SAS Expander in primary slot

Rear 2SFF drive option above the power supplies to a controller
Cable routing: HPE 12G SAS Expander to a controller

Observe the following:
• Port 1 always connects to port 1 of the controller.
• Port 2 always connects to port 2 of the controller.
Cable routing: Systems Insight Display An SFF model is shown. The routing is the same for LFF.
Software and configuration utilities Server mode The software and configuration utilities presented in this section operate in online mode, offline mode, or in both modes.
• • Consolidated health and service alerts with precise time stamps Agentless monitoring that does not affect application performance For more information about the Active Health System, see the iLO user guide on the Hewlett Packard Enterprise website. Active Health System data collection The Active Health System does not collect information about your operations, finances, customers, employees, or partners.
iLO 5 supports the following features:
• Group health status—View server health and model information.
• Group Virtual Media—Connect scripted media for access by the servers in an iLO Federation group.
• Group power control—Manage the power status of the servers in an iLO Federation group.
• Group power capping—Set dynamic power caps for the servers in an iLO Federation group.
• Group firmware update—Update the firmware of the servers in an iLO Federation group.
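These group views ultimately read standard Redfish resources, so the same health and model information can be queried for a single server through the iLO RESTful API referenced below. A minimal Python sketch, assuming a hypothetical iLO address, local credentials, and the common /redfish/v1/Systems/1 resource path; property availability can vary with iLO firmware:

    # Illustrative only: read model, serial number, power state, and rolled-up health
    # from the iLO RESTful (Redfish) API. Host and credentials are placeholders.
    import requests

    ILO_HOST = "https://ilo.example.net"
    AUTH = ("admin", "password")

    def system_summary():
        url = f"{ILO_HOST}/redfish/v1/Systems/1/"
        data = requests.get(url, auth=AUTH, verify=False, timeout=30).json()
        return {
            "model": data.get("Model"),
            "serial": data.get("SerialNumber"),
            "power": data.get("PowerState"),
            "health": data.get("Status", {}).get("Health"),
        }

    print(system_summary())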
For more information, see the following website: http://www.hpe.com/info/resttool. iLO Amplifier Pack The iLO Amplifier Pack is an advanced server inventory and firmware and driver update solution that enables rapid discovery, detailed inventory reporting, and firmware and driver updates by leveraging iLO advanced functionality. The iLO Amplifier Pack performs rapid server discovery and inventory for thousands of supported servers for the purpose of updating firmware and drivers at scale.
• • SUSE Linux Enterprise Server VMware ESXi/vSphere Custom Image Not all versions of an OS are supported. For information about specific versions of a supported operating system, see the OS Support Matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/ ossupport). Management Security HPE ProLiant Gen10 servers are built with some of the industry's most advanced security capabilities, out of the box, with a foundation of secure embedded management applications and firmware.
Selecting the boot mode This server provides two Boot Mode configurations: UEFI Mode and Legacy BIOS Mode. Certain boot options require that you select a specific boot mode. By default, the boot mode is set to UEFI Mode. The system must boot in UEFI Mode to use certain options, including: • • Secure Boot, UEFI Optimized Boot, Generic USB Boot, IPv6 PXE Boot, iSCSI Boot, and Boot from URL Fibre Channel/FCoE Scan Policy NOTE: The boot mode you use must match the operating system installation.
• • • Using the System Utilities options described in the following sections. Using the RESTful API to clear and restore certificates. For more information, see the Hewlett Packard Enterprise website (www.hpe.com/support/restfulinterface/docs). Using the secboot command in the Embedded UEFI Shell to display Secure Boot databases, keys, and security reports. Launching the Embedded UEFI Shell Use the Embedded UEFI Shell option to launch the Embedded UEFI Shell.
For more information, see HPE Smart Array SR Gen10 Configuration Guide at the Hewlett Packard Enterprise website. USB support Hewlett Packard Enterprise Gen10 servers support all USB operating speeds depending on the device that is connected to the server. External USB functionality Hewlett Packard Enterprise provides external USB support to enable local connection of USB devices for server administration, configuration, and diagnostic procedures.
SUM overview SUM is a tool for firmware and driver maintenance on ProLiant servers, BladeSystem enclosures, Moonshot systems, and other nodes. It provides a browser-based GUI or a command-line scripting interface for flexibility and adaptability. SUM identifies associated nodes you can update at the same time to avoid interdependency issues. Key features of SUM include: • • • • • • • • • • • Discovery engine that finds installed versions of hardware, firmware, and software on nodes.
NOTE: Do not manage one node with iLO Amplifier Pack and HPE OneView. Updating firmware from the System Utilities Use the Firmware Updates option to update firmware components in the system, including the system BIOS, NICs, and storage cards. Procedure 1. Access the System ROM Flash Binary component for your server from the Hewlett Packard Enterprise Support Center. 2. Copy the binary file to a USB media or iLO virtual media. 3. Attach the media to the server. 4.
Drivers IMPORTANT: Always perform a backup before installing or updating device drivers. The server includes new hardware that may not have driver support on all OS installation media. If you are installing an Intelligent Provisioning-supported OS, use Intelligent Provisioning on page 135 and its Configure and Install feature to install the OS and latest supported drivers. If you do not use Intelligent Provisioning to install an OS, drivers for some of the new hardware are required.
Change control and proactive notification Hewlett Packard Enterprise offers Change Control and Proactive Notification to notify customers 30 to 60 days in advance of the following: • • • Upcoming hardware and software changes Bulletins Patches Let us know what Hewlett Packard Enterprise commercial products you own and we will send you the latest updates to keep your business running smoothly. For more information, see the Hewlett Packard Enterprise website.
Troubleshooting

NMI functionality

An NMI crash dump enables administrators to create crash dump files when a system is hung and not responding to traditional debugging methods. An analysis of the crash dump log is an essential part of diagnosing reliability problems, such as hanging operating systems, device drivers, and applications. Many crashes freeze a system, and the only available action for administrators is to cycle the system power.
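When the operating system is configured to take a crash dump on NMI, the NMI can usually be delivered remotely through iLO rather than by cycling power. A minimal sketch in the same style as the earlier Redfish examples; whether the "Nmi" reset type is accepted depends on the iLO firmware, so treat it as an assumption:

    # Illustrative only: ask iLO to deliver an NMI so a hung OS can produce a crash
    # dump instead of being power-cycled. Host and credentials are placeholders.
    import requests

    ILO_HOST = "https://ilo.example.net"
    AUTH = ("admin", "password")

    resp = requests.post(
        f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
        json={"ResetType": "Nmi"},
        auth=AUTH, verify=False, timeout=30,
    )
    resp.raise_for_status()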
Safety, warranty, and regulatory information Safety and regulatory compliance For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise website.
• Belarus: • Kazakhstan: Manufacturing date: The manufacturing date is defined by the serial number. CCSYWWZZZZ (serial number format for this product) Valid date formats include: • • YWW, where Y indicates the year counting from within each new decade, with 2000 as the starting point; for example, 238: 2 for 2002 and 38 for the week of September 9. In addition, 2010 is indicated by 0, 2011 by 1, 2012 by 2, 2013 by 3, and so forth.
Specifications Environmental specifications Specification Value Temperature range 1 — Operating 10°C to 35°C (50°F to 95°F) Nonoperating -30°C to 60°C (-22°F to 140°F) Relative humidity (noncondensing) — Operating Minimum to be the higher (more moisture) of -12°C (10.4°F) dew point or 8% relative humidity Maximum to be 24°C (75.2°F) dew point or 90% relative humidity Nonoperating 5% to 95% 38.7°C (101.7°F), maximum wet bulb temperature 1 All temperature ratings shown are for sea level.
• • • • • • • • • • • • • SFF drive (1) Drive blanks (7) Drive bay blanks for bays 1 and 2 (2) Fan assemblies (4) Fan blanks (2) Standard heatsink (1) 1P air baffle (1) X8 HPE Flexible Smart Array Controller (1) Primary riser cage (1) Secondary riser cage blank (1) Power supply (1) Power supply blank (1) Cables for the above components The LFF configuration includes the following components: • • • • • • • • • LFF drives (12) Fan assemblies (6) SE heatsinks (2) 2P air baffle (1) X8 HPE Flexible Smart Arra
Rated input frequency 50 Hz to 60 Hz Not applicable to 240 VDC Rated input current 5.8 A at 100 VAC 2.8 A at 200 VAC 2.
Rated input current 9.4 A at 100 VAC 4.5 A at 200 VAC 3.
Maximum rated input power 870 W at 200 VAC 870 W at 240 VAC 870 W at 240 VDC for China only BTUs per hour 2969 at 200 VAC 2969 at 240 VAC 2969 at 240 VDC for China only Power supply output Rated steady-state power 800 W at 200 VAC to 240 VAC input 800 W at 240 VDC input for China only Maximum peak power 800 W at 200 VAC to 240 VAC input 800 W at 240 VDC input for China only HPE 800W Flex Slot Universal Hot-plug Power Supply Specification Value Input requirements Rated input voltage 200 V to 277 V
Rated steady-state power 800 W at 200 VAC to 277 VAC input 800 W at 380 VDC input Maximum peak power 800 W at 200 VAC to 277 VAC input 800 W at 380 VDC input HPE 800W Flex Slot -48VDC Hot-plug Power Supply Specification Value Input requirements Rated input voltage -40 VDC to -72 VDC -48 VDC nominal input Rated input current 26 A at -40 VDC input 19 A at -48 VDC input, nominal input 12.
CAUTION: This equipment is designed to permit the connection of the earthed conductor of the DC supply circuit to the earthing conductor at the equipment. If this connection is made, all of the following must be met: • • • • This equipment must be connected directly to the DC supply system earthing electrode conductor or to a bonding jumper from an earthing terminal bar or bus to which the DC supply system earthing electrode conductor is connected.
Support and other resources Accessing Hewlett Packard Enterprise Support • For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website: • http://www.hpe.com/assistance To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website: http://www.hpe.
Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider or go to the CSR website: http://www.hpe.com/support/selfrepair Remote support Remote support is available with supported devices as part of your warranty or contractual support agreement.
Regulatory information To view the regulatory information for your product, view the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center: www.hpe.
Documentation feedback Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback. When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page.