HP XC System Software Hardware Preparation Guide Version 3.
© Copyright 2003, 2004, 2005, 2006, 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
About This Document

This document describes how to prepare the nodes in your HP cluster platform before installing HP XC System Software. An HP XC system is integrated with several open source software components. Some open source components are used as underlying technology, and their deployment is transparent. Other open source components require user-level documentation specific to HP XC systems, and that kind of information is included in this document when required.
• Chapter 2 (page 27) discusses the required cabling for server blades for the following:
— Administration Network
— Console Network
— Interconnect Network (for the Gigabit Ethernet Interconnect and for the InfiniBand Interconnect, as well as cabling the Interconnect Network over the Administration Network)
— External Network
• Appendix B (page 111) provides server blade configuration examples.

The setup procedure for the HP Integrity Model rx1620 has changed.
HP XC and Related HP Products Information The HP XC System Software Documentation Set, the Master Firmware List, and HP XC HowTo documents are available at this HP Technical Documentation Web site: http://www.docs.hp.com/en/linuxhpc.html The HP XC System Software Documentation Set includes the following core documents: HP XC System Software Release Notes Describes important, last-minute information about firmware, software, or hardware that might affect the system.
HP Scalable Visualization Array The HP Scalable Visualization Array (SVA) is a scalable visualization solution that is integrated with the HP XC System Software. The SVA documentation is available at the following Web site: http://www.docs.hp.com/en/linuxhpc.html HP Cluster Platform The cluster platform documentation describes site requirements, shows you how to set up the servers and additional devices, and provides procedures to operate and manage the hardware.
The Platform Computing Corporation LSF manpages are installed by default. lsf_diff(7) supplied by HP describes LSF command differences when using LSF-HPC with SLURM on an HP XC system. The following documents in the HP XC System Software Documentation Set provide information about administering and using LSF on an HP XC system: — HP XC System Software Administration Guide — HP XC System Software User's Guide • http://www.llnl.
Related Software Products and Additional Publications This section provides pointers to Web sites for related software products and provides references to useful third-party publications. The location of each Web site or link to a particular topic is subject to change without notice by the site provider. Linux Web Sites • http://www.redhat.com Home page for Red Hat®, distributors of Red Hat Enterprise Linux Advanced Server, a Linux distribution with which the HP XC operating environment is compatible.
Software RAID Web Sites • http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html and http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/pdf/Software-RAID-HOWTO.pdf A document (in two formats: HTML and PDF) that describes how to use software RAID under a Linux operating system. • http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html Provides information about how to use the mdadm RAID management utility.
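To make the mdadm reference concrete, the following is a minimal sketch (not taken from the HP XC documentation; the device names are examples only) of creating and inspecting a two-disk software RAID 1 mirror on a Linux system:

# Create a RAID 1 array from two partitions (example device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Watch the initial synchronization and confirm the array details
cat /proc/mdstat
mdadm --detail /dev/md0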
1 Hardware and Network Overview

This chapter addresses the following topics:
• “Supported Cluster Platforms” (page 19)
• “Server Blade Enclosure Components” (page 21)
• “Server Blade Mezzanine Cards” (page 22)
• “Server Blade Interconnect Modules” (page 22)
• “Supported Console Management Devices” (page 23)
• “Administration Network Overview” (page 25)
• “Administration Network: Console Branch” (page 25)
• “Interconnect Network” (page 25)
• “Large-Scale Systems” (page 26)

1.1 Supported Cluster Platforms
Table 1-1 Supported Processor Architectures and Hardware Models

CP3000 (Intel® Xeon™ with EM64T):
• HP ProLiant DL140 G2
• HP ProLiant DL140 G3
• HP ProLiant DL360 G4 and G4p
• HP ProLiant DL360 G5
• HP ProLiant DL380 G4
• HP ProLiant DL380 G5
• HP ProLiant DL580 G4
• HP xw8200 and xw8400 Workstation

CP3000BL (Intel Xeon with EM64T):
• HP ProLiant BL460c Server Blade (Half-height)
• HP ProLiant BL480c Server Blade (Full-height)

CP4000 (AMD Opteron®):
Table 1-2 Supported HP ProLiant Server Blade Models

• BL460c: half height; Intel Xeon, up to two quad-core or up to two dual-core processors; 2 built-in NICs; 2 hot-plug drives; 2 mezzanine slots
• BL465c: half height; AMD Opteron, up to two single-core or up to two dual-core processors; 2 built-in NICs; 2 hot-plug drives; 2 mezzanine slots
• BL480c: full height; Intel Xeon, up to two quad-core or up to two dual-core processors; 4 built-in NICs; 4 hot-plug drives; 3 mezzanine slots
• BL685c: full height; AMD Opteron, up to four dual-core processors; 4 built-in NICs; 2 hot-plug drives; 3 mezzanine slots
• BL860c: full height; Intel Itanium®, up to two quad-core or dual-core processors; 4 built-in NICs; 2 hot-plug drives; 3 mezzanine slots
The following enclosure setup guidelines are specific to HP XC:
• On every enclosure, an Ethernet interconnect module (either a switch or pass-thru module) is installed in bay 1 for the administration network.
• Hardware configurations that use Gigabit Ethernet as the interconnect require an additional Ethernet interconnect module (either a switch or pass-thru module) to be installed in bay 2 for the interconnect network.
Interconnect Bay Port Mapping

Connections between the server blades and the interconnect bays are hard wired. Each of the 8 interconnect bays in the back of the enclosure has a connection to each of the 16 server bays in the front of the enclosure. Which built-in NIC or mezzanine card an interconnect module connects to depends on the interconnect bay in which the module is installed. Because full-height blades consume two server bays, they have twice as many connections to each of the interconnect bays.
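As an illustrative summary only (derived from the enclosure guidelines and the cabling examples elsewhere in this guide; verify the mapping against the enclosure documentation for your hardware):

# Typical HP XC mapping between built-in NICs and interconnect bays
# Half-height blade:  NIC1 -> bay 1 (administration network)
#                     NIC2 -> bay 2 (Gigabit Ethernet interconnect, if used)
# Full-height blade:  NIC1 and NIC3 -> bay 1
#                     NIC2 and NIC4 -> bay 2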
Table 1-3 Supported Console Management Devices (continued)

CP3000BL
• HP ProLiant BL460c G1 Server Blade (Half-height): iLO2
• HP ProLiant BL480c G1 Server Blade (Full-height): iLO2

CP4000
• HP ProLiant DL145 G1: Lights-Out 100i (LO-100i)
• HP ProLiant DL145 G2: Lights-Out 100i (LO-100i)
• HP ProLiant DL145 G3: Lights-Out 100i (LO-100i)
• HP ProLiant DL385 G1: iLO
• HP ProLiant DL385 G2: iLO
• HP ProLiant DL585 G1: iLO
• HP ProLiant DL585 G2: iLO

CP4000BL
• HP ProLiant BL465c
1.6 Administration Network Overview

The administration network is a private network within the HP XC system that is used primarily for administrative operations. This network is treated as a flat network during run time; that is, the communication time between any two points in the network is the same as the communication time between any other two points in the network.
Table 1-4 Supported Interconnects

The interconnect families are Gigabit Ethernet; InfiniBand PCI Express (Single Data Rate and Double Data Rate); InfiniBand PCI-X; Myrinet® (Rev. D, E, and F); and QsNet II.

• CP3000: Gigabit Ethernet, InfiniBand PCI Express, InfiniBand PCI-X, Myrinet, QsNet II
• CP3000BL: Gigabit Ethernet, InfiniBand PCI Express
• CP4000: Gigabit Ethernet, InfiniBand PCI Express, InfiniBand PCI-X(1), Myrinet, QsNet II
• CP4000BL: Gigabit Ethernet, InfiniBand PCI Express
• CP6000: Gigabit Ethernet, InfiniBand PCI Express, InfiniBand PCI-X, QsNet II

(1) This does not apply to the HP ProLiant DL385 G2 and DL145 G3.

Mixing Adapters

Within a given interconnect family, several different adapters can be supported.
2 Cabling Server Blades

The following topics are addressed in this chapter:
• “Network Overview” (page 27)
• “Cabling for the Administration Network” (page 27)
• “Cabling for the Console Network” (page 28)
• “Cabling for the Interconnect Network” (page 29)
• “Cabling for the External Network” (page 31)

2.1 Network Overview

An HP XC system consists of several networks: administration, console, interconnect, and external (public).
Figure 2-1 Administration Network Connections (non-blade servers, the c-Class blade enclosure and its interconnect bays, the Admin ProCurve 2800 Series Switch, the Console ProCurve 2600 Series Switch, and the Gigabit Ethernet interconnect switch)
Figure 2-2 Console Network Connections (non-blade servers, the c-Class blade enclosure, the Admin ProCurve 2800 Series Switch, and the Console ProCurve 2600 Series Switch)
Figure 2-3 Gigabit Ethernet Interconnect Connections (non-blade servers, the c-Class blade enclosure, and the Gigabit Ethernet interconnect switch)
2.4.3 Configuring the Interconnect Network Over the Administration Network

In cases where an additional Gigabit Ethernet port or switch is not available, the HP XC System Software enables you to configure the interconnect on the administration network. When the interconnect is configured on the administration network, only a single LAN is used.
server blades do not have three NICs; therefore, half-height server blades are not included in this example. Because NIC1 and NIC3 on a full-height server blade are connected to interconnect bay 1, you must use VLANs on the switch in that bay to separate the external network from the administration network. Also, in this example, PCI Ethernet cards are used in the non-blade server nodes.
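As a hedged sketch only (not taken from the HP XC documentation; the VLAN IDs and port ranges are hypothetical, and the exact syntax varies by switch model and firmware), separating the administration and external networks on a ProCurve-style switch amounts to placing the relevant ports in different VLANs:

ProCurve(config)# vlan 10
ProCurve(vlan-10)# name "admin-net"
ProCurve(vlan-10)# untagged 1-8
ProCurve(vlan-10)# exit
ProCurve(config)# vlan 20
ProCurve(vlan-20)# name "external-net"
ProCurve(vlan-20)# untagged 9-16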
Figure 2-6 External Network Connections: Half-Height Server Blades and NIC1 and NIC2 in Use (non-blade servers with Ethernet PCI cards, the c-Class blade enclosure with Ethernet mezzanine cards, and the administration and console ProCurve switches)
Figure 2-7 External Network Connections: Half and Full-Height Server Blades and NIC1 in Use (non-blade servers, the c-Class blade enclosure, and the administration and console ProCurve switches)
3 Making Node and Switch Connections

This chapter provides information about the connections between nodes and switches that are required for an HP XC system. The following topics are addressed:
• “Cabinets” (page 35)
• “Trunking and Switch Choices” (page 35)
• “Switches” (page 36)
• “Interconnect Connections” (page 44)

IMPORTANT: The specific node and switch port connections documented in this chapter do not apply to hardware configurations containing HP server blades and enclosures.
to create a higher-bandwidth connection between the Root Administration Switches and the Branch Administration Switches. For physically small hardware models (such as the 1U HP ProLiant DL145 G1 server), a large number of servers (more than 30) can be placed in a single cabinet and attached to a single branch switch. The branch switch is a ProCurve Switch 2848, and two-port trunking is used for the connection between the Branch Administration Switch and the Root Administration Switch.
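For illustration only (the port numbers are hypothetical and the command syntax can vary by ProCurve model and firmware release), a two-port trunk between a Branch Administration Switch and the Root Administration Switch is typically defined on each switch along these lines:

ProCurve(config)# trunk 47-48 trk1 trunk
ProCurve(config)# show trunks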
Root Administration Switch: This switch connects directly to Gigabit Ethernet ports of the head node, the Root Console Switch, Branch Administration Switches, and other nodes in the utility cabinet.

Root Console Switch: This switch connects to the Root Administration Switch and Branch Console Switches, and connects to the management console ports of nodes in the utility cabinet.

Branch Administration Switch

Branch Console Switch
Figure 3-2 Node and Switch Connections on a Typical System (the head node, specialized-role nodes, and compute nodes connected through the Root Administration, Root Console, and branch switches)

Figure 3-3 shows a graphical representation of the logical layout of the switches and nodes in a large-scale system with a Super Root Switch. The head node connects to Port 42 on the Root Administration Switch in Region 1.
3.3.3.1 Switch Connections and HP Workstations

HP xw model workstations do not have console ports. Only the Root Administration Switch supports mixing nodes that do not have console management ports with nodes that do (that is, all other supported server models). HP workstations connected to the Root Administration Switch must be connected to the next lower-numbered contiguous set of ports immediately below the nodes that have console management ports.
3.3.5 Root Administration Switch The Root Administration Switch for the administration network of an HP XC system can be either a ProCurve 2848 switch or a ProCurve 2824 switch for small configurations. If you are using a ProCurve 2848 switch as the switch at the center of the administration network, use Figure 3-5 to make the appropriate port connections.
Figure 3-6 ProCurve 2824 Root Administration Switch

The callouts in the figure enumerate the following:
1. Uplinks from branches start at port 1 (ascending).
2. 10/100/1000 Base-TX RJ-45 ports.
The ProCurve 2650 switch is shown in Figure 3-7. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Allocate the ports on this switch for consistency with the administration switches, as follows: • Ports 1–10, 13–21 — Starting with port 1, the ports are used for links from Branch Console Switches. Trunking is not used. — Starting with port 21 and in descending order, ports are assigned for use by individual nodes in the utility cabinet. Nodes in the utility cabinet are connected directly to the Root Administration Switch.
The callout in the figure enumerates the following: 1. Port 22 is used for the link to the Root Administration Switch. Allocate the ports on this switch for maximum performance, as follows: • Ports 1–10 and 13–21 are used for the administration ports for the individual nodes (up to 19 nodes). • Ports 11, 12, 23, and 24 are unused. 3.3.8 Branch Console Switches The Branch Console Switch of an HP XC system is a ProCurve 2650 switch.
NOTE: You can choose a number smaller than the absolute maximum number of interconnect ports for max-node, but you cannot expand the system to a size larger than this number in the future without completely rediscovering the system, thereby renumbering all nodes in the system. This restriction does not apply to hardware configurations that contain HP server blades and enclosures.
Because the first logical Gigabit Ethernet port on each node is always used for connectivity to the administration network, there must be a second Gigabit Ethernet port on each node if you are using Gigabit Ethernet as the interconnect. Depending upon the hardware model, the port can be built-in or can be an installed card. Any node with an external interface must also have a third Ethernet connection of any kind to communicate with external networks.

3.4 Interconnect Connections
4 Preparing Individual Nodes This chapter describes how to prepare individual nodes in the HP XC hardware configuration.
Table 4-1 Firmware Dependencies (continued)

• HP ProLiant DL385 G1: iLO, system BIOS
• HP ProLiant DL385 G2: iLO, system BIOS
• HP ProLiant DL585 G1: iLO, system BIOS
• HP ProLiant DL585 G2: iLO, system BIOS

CP4000BL
• HP ProLiant BL465c G1 Server Blade (Half-height): iLO2, system BIOS, OA
• HP ProLiant BL685c G1 Server Blade (Full-height): iLO2, system BIOS, OA

CP6000
• HP Integrity rx1620: Management Processor (MP), BMC, Extensible Firmware Interface (EFI)
• HP Integrity rx
IMPORTANT: The Ethernet port connections listed in Table 4-2 do not apply to hardware configurations with HP server c-Class blades and enclosures.

Table 4-2 Ethernet Ports on the Head Node

Gigabit Ethernet Interconnect:
• Physical onboard Port #1 is always the connection to the administration network.
• Physical onboard Port #2 is the connection to the interconnect.
• Add-on NIC card #1 is available as an external connection.

All Other Interconnect Types:
4.4 Setting the Onboard Administrator Password

If the hardware configuration contains server blades and enclosures, you must define and set the user name and password for the Onboard Administrator on every enclosure in the hardware configuration.

IMPORTANT: You cannot set the Onboard Administrator password until the head node is installed and the switches are discovered. For more information on installing the head node and discovering switches, see the HP XC System Software Installation Guide.
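As a sketch only (the user name is hypothetical; verify the exact commands against the Onboard Administrator CLI documentation for your firmware version), the same credentials can also be set from the OA command line rather than the web interface:

OA> ADD USER xcadmin
New Password: ********
Confirm : ********
OA> SET USER ACCESS xcadmin ADMINISTRATOR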
4.5 Preparing the Hardware for CP3000 (Intel Xeon with EM64T) Systems Follow the procedures in this section to prepare each node before installing and configuring the HP XC System Software.
BIOS ROM ID:
BIOS Version:
BIOS Build Date:

Record this information for future reference.

3. For each node, make the following BIOS settings from the Main window. The settings differ depending upon the generation of hardware model:
• BIOS settings for HP ProLiant DL140 G2 nodes are listed in Table 4-3.
• BIOS settings for HP ProLiant DL140 G3 nodes are listed in Table 4-4.
Table 4-3 BIOS Settings for HP ProLiant DL140 G2 Nodes (continued)

• Wake On LAN: Disabled

Boot: Set the following boot order on all nodes except the head node:
1. CD-ROM
2. Removable Devices
3. PXE MBA V7.7.2 Slot 0200
4. Hard Drive
5. ! PXE MBA V7.7.2 Slot 0300 (! means disabled)

Set the following boot order on the head node:
1. CD-ROM
2. Removable Devices
3. Hard Drive
4. PXE MBA v7.7.2 Slot 0200
5. PXE MBA v7.7.
Table 4-4 BIOS Settings for HP ProLiant DL140 G3 Nodes (continued)

Set the following boot order on all nodes except the head node:
1. CD-ROM
2. Removable Devices
3. Embedded NIC1
4. Hard Drive
5. Embedded NIC2

• Embedded NIC1 PXE: Enabled
• Embedded NIC2 PXE: Disabled

Power:
• Resume On Modem Ring: Off
• Wake On LAN: Disabled

4. From the Main window, select Exit→Save Changes and Exit to exit the utility.
5. Repeat this procedure for each HP ProLiant DL140 G2 and DL140 G3 node in the hardware configuration.
1. The iLO Ethernet port is used as the connection to the Console Switch.
2. NIC1 is used for the connection to the Administration Switch (branch or root).
3. NIC2 is used for the external connection.

Figure 4-3 shows a rear view of the HP ProLiant DL360 G5 server and the appropriate port assignments for an HP XC system.

Figure 4-3 HP ProLiant DL360 G5 Server Rear View

The callouts in the figure enumerate the following:
1. This port is used for the connection to the Console Switch.
1. Make the following settings from the Main menu. The BIOS settings differ depending on the hardware model generation:
• BIOS settings for HP ProLiant DL360 G4 nodes are listed in Table 4-6.
• BIOS settings for HP ProLiant DL360 G5 nodes are listed in Table 4-7.
Table 4-7 BIOS Settings for HP ProLiant DL360 G5 Nodes (continued)

• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command-Line

Press the Esc key to return to the main menu.

2. Press the Esc key to exit the RBSU.
3. Press the F10 key to confirm your choice and restart the boot sequence.

Repeat this procedure for each HP ProLiant DL360 G4 and G5 node in the HP XC system.
Figure 4-5 shows a rear view of the HP ProLiant DL380 G5 server and the appropriate port assignments for an HP XC system.

Figure 4-5 HP ProLiant DL380 G5 Server Rear View

The callouts in the figure enumerate the following:
1. This port is used for the connection to the external network.
2. This port is used for the connection to the Administration Switch (branch or root).
3. The iLO Ethernet port is used for the connection to the Console Switch.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based Setup Utility (RBSU).

Perform the following procedure from the RBSU for each HP ProLiant DL380 node in the HP XC system:
1. Make the following settings from the Main menu. The BIOS settings differ depending upon the hardware model generation:
• BIOS settings for HP ProLiant DL380 G4 nodes are listed in Table 4-9.
• BIOS settings for HP ProLiant DL380 G5 nodes are listed in Table 4-10.
Table 4-10 BIOS Settings for HP ProLiant DL380 G5 Nodes (continued)

Advanced Options:
• Processor Hyperthreading: Disabled

BIOS Serial Console & EMS:
• BIOS Serial Console Port: COM1; IRQ4; IO: 3F8h - 3FFh
• BIOS Serial Console Baud Rate: 115200
• EMS Console: Disabled
• BIOS Interface Mode: Command-Line

Press the Esc key to return to the main menu.

2. Press the Esc key to exit the RBSU.
3. Press the F10 key to confirm your choice and restart the boot sequence.
Figure 4-6 HP ProLiant DL580 G4 Server Rear View

The callouts in the figure enumerate the following:
1. NIC1 is used for the connection to the Administration Switch (branch or root).
2. NIC2 is used for the connection to the external network.
3. The iLO Ethernet port is used for the connection to the Console Switch.

Setup Procedure

Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL580 G4 node in the HP XC system:
Table 4-11 iLO Settings for HP ProLiant DL580 G4 Nodes

Administration → User → New: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user.
1. Make the following settings from the Main menu. The BIOS settings for HP ProLiant DL580 G4 nodes are listed in Table 4-12.

Table 4-12 BIOS Settings for HP ProLiant DL580 G4 Nodes

Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3.
Figure 4-7 HP xw8200 and xw8400 Workstation Rear View

The callout in the figure enumerates the following:
1. This port is used for the connection to the administration network.

Setup Procedure

Use the Setup Utility to configure the appropriate settings for an HP XC system. Perform the following procedure for each workstation in the hardware configuration. Change only the values that are described in this procedure; do not change any other factory-set values unless you are instructed to do so:
Table 4-14 BIOS Settings for xw8400 Workstations

Storage → Storage Options → SATA Emulation: Separate IDE Controller
After you make this setting, make sure the Primary SATA Controller and Secondary SATA Controller settings are set to Enabled.

Boot Order: Set the following boot order on all nodes except the head node:
1. Optical Drive
2. USB device
3. Broadcom Ethernet controller
4. Hard Drive
5.
4.6 Preparing the Hardware for CP3000BL Systems

Perform the following tasks on each server blade in the hardware configuration after the head node is installed and the switches are discovered:
• Set the boot order
• Create an iLO2 user name and password
• Set the power regulator
• Configure smart array devices

Use the Onboard Administrator, the iLO2 web interface, and virtual media to make the appropriate settings on HP ProLiant Server Blades.
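If you prefer to verify the enclosure contents from the Onboard Administrator command line before making these settings, a typical session looks like the following sketch (the host name is hypothetical, and the output varies by OA firmware):

telnet oa-encl1
OA> SHOW SERVER LIST
OA> SHOW SERVER INFO 1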
f. The Onboard Administrator automatically creates user accounts for itself (prefixed with the letters OA) to provide single sign-on capabilities. Do not remove these accounts.

5. Enable telnet access:
a. In the left frame, click Access.
b. Click the control to enable Telnet Access.
c. Click the Apply button to save the settings.

6. Click the Virtual Devices tab and make the following settings:
10. Close the iLO2 utility Web page.
11. Repeat this procedure from every active Onboard Administrator and make the same settings for each server blade in each enclosure.

After preparing all the nodes in all the enclosures, return to the HP XC System Software Installation Guide to discover all the nodes and enclosures in the HP XC system.
4.7 Preparing the Hardware for CP4000 (AMD Opteron) Systems Follow the procedures in this section to prepare each node before installing and configuring the HP XC System Software.
3. For each node, make the BIOS settings listed in Table 4-16.
6. Ensure that all machines are requesting IP addresses through the Dynamic Host Configuration Protocol (DHCP). Do the following to determine if DHCP is enabled:
a. At the ProLiant> prompt, enter the following:
ProLiant> net
b. At the INET> prompt, enter the following:
INET> state
iface...ipsrc.....IP addr........subnet.......gateway
1-et1   dhcp      0.0.0.0       255.0.0.0    0.0.0.0
current tick count 2433
ping delay time: 280 ms.
ping host: 0.0.0.0
Task wakeups: netmain: 93 nettick: 4814 telnetsrv: 401
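As a supplementary check that is not part of the documented procedure (the address and credentials shown are hypothetical), you can also query a management processor's LAN configuration over IPMI from another host once it has an address:

ipmitool -I lan -H 192.168.1.50 -U admin -P password lan print 1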
1. This port is used for the connection to the Administration Switch (branch or root). On the rear of the node, this port is marked with the number 1 (NIC1).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection. On the rear of the node, this port is marked with the number 2 (NIC2).
3. The port labeled LO100i is used for the connection to the Console Switch.
Table 4-17 BIOS Settings for HP ProLiant DL145 G2 Nodes (continued)

Hammer Configuration:
• Disable Jitter bit: Enabled
• Page Directory Cache: Disabled

PCI Configuration / Ethernet Device (On Board, for Ethernet 1 and 2):
• IPMI/LAN Setting: IPMI Enabled
• Option ROM Scan: Enabled
• Latency timer: 40h

I/O Device Configuration:
• Serial Port: BMC COM Port
• SIO COM Port: Disabled
• PS/2 Mouse: Enabled

Console Redirection:
• Console Redirection: Enabled
• EMS Console
Table 4-18 BIOS Settings for HP ProLiant DL145 G3 Nodes

Main → Boot Options:
• NumLock: Off

Advanced → I/O Device Configuration:
• Serial Port Mode: BMC
• Serial port A: Enabled
• Base I/O address: 3F8
• Interrupt: IRQ 4

Advanced → Memory Controller Options:
• DRAM Bank Interleave: Auto
• Node Interleave: Disabled
• 32-Bit Memory Hole: Enabled

Advanced → Serial ATA:
• Embedded SATA: Enabled
• SATA Mode: SATA
• Enable/Disable Int13 support: Enabled

Console Redirection:
• Option ROM Scan
4. Select Exit→Saving Changes to exit the BIOS Setup Utility.
5. Repeat this procedure for each HP ProLiant DL145 G2 and DL145 G3 node in the hardware configuration.
Figure 4-12 HP ProLiant DL385 G2 Server Rear View

The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection. On the back of the node, this port is marked with the number 2.
2. This port is the connection to the Administration Switch (branch or root). On the back of the node, this port is marked with the number 1.
Table 4-20 iLO Settings for HP ProLiant DL385 G2 Nodes

User → Add: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user.
1. Make the following RBSU settings. The settings differ depending on the hardware model generation:
• Table 4-21 provides the RBSU settings for the HP ProLiant DL385 G1 nodes.
• Table 4-22 provides the RBSU settings for the HP ProLiant DL385 G2 nodes.

Use the navigation aids shown at the bottom of the screen to move through the menus and make selections.
Table 4-22 RBSU Settings for HP ProLiant DL385 G2 Nodes (continued)

System Options:
• Embedded Serial Port: COM2
• Virtual Serial Port: COM1

Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. Floppy Drive (A:)
3. USB DriveKey (C:)
4. PCI Embedded HP NC373i Multifunction Gigabit Adapter
5.
Figure 4-13 HP ProLiant DL585 G1 Server Rear View

The callouts in the figure enumerate the following:
1. The iLO Ethernet port is the connection to the Console Switch.
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection.
3. NIC1 is the connection to the Administration Switch (branch or root).
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and press the F8 key when prompted to access the Integrated Lights Out Setup Utility.
3. Make the following iLO settings for each node depending on hardware model:
• Table 4-23 provides the iLO settings for ProLiant DL585 G1 nodes.
• Table 4-24 provides the iLO settings for ProLiant DL585 G2 nodes.
1. Make the following RBSU settings for each node. The settings differ depending on the hardware model generation:
• Table 4-25 provides the RBSU settings for the HP ProLiant DL585 G1 nodes.
• Table 4-26 provides the RBSU settings for the HP ProLiant DL585 G2 nodes.
Table 4-26 RBSU Settings for HP ProLiant DL585 G2 Nodes (continued)

Standard Boot Order (IPL):

Set the following boot order on all nodes except the head node; the CD-ROM must be listed before the hard drive:
• IPL:1 CD-ROM
• IPL:2 Floppy Drive (A:)
• IPL:3 USB Drive Key (C:)
• IPL:4 PCI Embedded HP NC373i Multifunction Gigabit Adapter
• IPL:5 Hard Drive C:

Set the following boot order on the head node:
• IPL:1 CD-ROM
• IPL:2 Floppy Drive (A:)
• IPL:3 USB Drive Key
Figure 4-15 xw9300 Workstation Rear View

The callout in the figure enumerates the following:
1. This port is used for the connection to the administration network.

Figure 4-16 shows a rear view of the xw9400 workstation and the appropriate port connections for an HP XC system.

Figure 4-16 xw9400 Workstation Rear View

The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection.
2. Turn on power to the workstation.
3. When the node is powering up, press the F10 key to access the Setup Utility.
4. When prompted, press any key to continue.
5. Select English as the language.
6. Make the appropriate settings for the workstation depending on hardware model:
• Table 4-27 describes the settings for the xw9300 workstation.
• Table 4-28 describes the settings for the xw9400 workstation.
4.8 Preparing the Hardware for CP4000BL Systems

Perform the following tasks on each server blade in the hardware configuration after the head node is installed and the switches are discovered:
• Set the boot order
• Create an iLO2 user name and password
• Set the power regulator
• Configure smart array devices

Use the Onboard Administrator, the iLO2 web interface, and virtual media to make the appropriate settings on HP ProLiant Server Blades.
f. The Onboard Administrator automatically creates user accounts for itself (prefixed with the letters OA) to provide single sign-on capabilities. Do not remove these accounts.

5. Enable telnet access:
a. In the left frame, click Access.
b. Click the control to enable Telnet Access.
c. Click the Apply button to save the settings.

6. Click the Virtual Devices tab and make the following settings:
9. Perform this step for HP ProLiant BL685c nodes; proceed to the next step for all other hardware models. On an HP ProLiant BL685c node, watch the screen carefully during the power-on self-test, and press the F9 key to access the ROM-Based Setup Utility (RBSU) to enable HPET as shown in Table 4-30.

Table 4-30 Additional BIOS Setting for HP ProLiant BL685c Nodes

Advanced → Linux x86_64 HPET: Enabled

Press the F10 key to exit the RBSU.
4.9 Preparing the Hardware for CP6000 (Intel Itanium) Systems Follow the procedures in this section to prepare HP Integrity servers before installing and configuring the HP XC System Software.
Figure 4-17 HP Integrity rx1620 Server Rear View

The callouts in the figure enumerate the following:
1. The port labeled LAN 10/100 is the MP connection to the ProCurve Console Switch.
2. The port labeled LAN Gb A connects to the Administration Switch (branch or root).
3. The port labeled LAN Gb B is used for an external connection.
e. Log in to the MP using the default user name and password shown on the screen. The MP Main Menu appears:

MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection

4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following from the Edit OS Boot Order option:
a. Use the navigation instructions shown on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, save the setting to NVRAM.
c. Enter x to return to the previous menu.

14. Perform this step on all nodes, including the head node.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If there is no response, press the MP reset pin on the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen. The MP Main Menu appears:

MP MAIN MENU:
CO: Console
VFP: Virtual Front Panel
CM: Command Menu
SMCLP: Server Management Command Line Protocol
CL: Console Log
SL: Show Event Logs
HE: Main Help Menu
X: Exit Connection

4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following:
a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, press the Enter key to select the position.
c. Enter x to return to the Boot Configuration menu.

14. Perform this step on all nodes, including the head node. Select the Console Configuration option, and do the following:
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If the MP does not respond, press the MP reset pin on the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following:
a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, press the Enter key to select the position.
c. Enter x to return to the Boot Configuration menu.

14. Perform this step on all nodes, including the head node.
a. Connect a three-way DB9-25 cable to the MP DB-9 port on the back of the HP Integrity rx4640 server. This port is the first of the four DB9 ports at the bottom left of the server; it is labeled MP Local.
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP.
e. Log in to the MP using the default user name and password shown on the screen.
e. Press the Enter key for no boot options.
f. When prompted, save the entry to NVRAM.

For more information about how to work with these menus, see the documentation that came with the HP Integrity server.

13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following:
a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
b. If prompted, press the Enter key to select the position.
c. Enter x to return to the Boot Configuration menu.
Figure 4-22 HP Integrity rx8620 Core IO Board Connections (SYS LAN: connection to the administration network; MP LAN: connection to the console network through the Management Processor; MP serial port; MP reset pin)

Preparing Individual Nodes

Follow this procedure for each HP Integrity rx8620 node in the hardware configuration:
1. Ensure that the power cord is connected but that the processor is not turned on.
2. Connect a personal computer to the Management Processor (MP):
NOTE: Most of the MP commands of the HP Integrity rx8620 are similar to the HP Integrity rx2600 MP commands, but there are some differences. The two MPs for the HP Integrity rx8620 operate in a master/slave relationship. Only the master MP, which is on Core IO board 0, is assigned an IP address. Core IO board 0 is always the top Core IO board. The slave MP is used only if the master MP fails.
NOTE: If the console stops accepting input from the keyboard, the following message is displayed: [Read-only - use ^Ecf to attach to console.] In that situation, press and hold down the Ctrl key and type the letter e. Release the Ctrl key, and then type the letters c and f to reconnect to the console. 12. Do the following from the EFI Boot Manager screen, which is displayed during the power-up of the node.
14. From the Boot Option Maintenance menu, add a boot option for the EFI Shell (if one does not exist). Follow the instructions in step 12a.
15. Exit the Boot Option Maintenance menu.
16. Choose the EFI Shell boot option and boot to the EFI shell. Enter the following EFI shell commands:

EFI> acpiconfig enable softpowerdown
EFI> acpiconfig single-pci-domain
EFI> reset

The reset command reboots the machine.
4.10 Preparing the Hardware for CP6000BL Systems

Use the management processor (MP) to perform the following tasks on each server blade in the hardware configuration after the head node is installed and the switches are discovered:
• Clear all event logs
• Enable IPMI over LAN
• Create an MP login ID and password that matches all other devices
• Add a boot entry for the string DVD boot on the head node, and add a boot entry for the string Netboot on all other nodes.
MPs, iLOs, and OAs must use the same user name and password. Do not use any special characters as part of the password.

10. Turn on power to the node:
MP:CM> pc -on -nc
11. Press Ctrl-B to return to the MP Main Menu.
12. Enter CO to connect to the console. It takes a few minutes for the live console to display.
13. Add a boot entry and set the OS boot order. Your actions for the head node differ from all other nodes.
16. Use the RB command to reset the BMC.
17. Press Ctrl-B to exit the console mode and press the x key to exit.

After preparing all the nodes in all the enclosures, return to the HP XC System Software Installation Guide to discover all the nodes and enclosures in the HP XC system.
5 Troubleshooting

This chapter describes known problems with respect to preparing hardware devices for use with the HP XC System Software and their solutions.

5.1 iLO2 Devices

5.1.1 iLO2 Devices Can Become Unresponsive

There is a known problem with the iLO2 console management devices that causes the iLO2 device to become unresponsive to certain tools, including the HP XC power daemon and the iLO2 Web interface. When this happens, the power daemon generates CONNECT_ERROR messages.
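One common recovery approach, offered here as a hedged sketch rather than as the resolution this guide prescribes, is to reset the unresponsive iLO2 processor itself, for example through its SSH command-line interface (the host name is hypothetical):

ssh Administrator@ilo-n14
hpiLO-> cd /map1
hpiLO-> reset

Resetting the iLO2 restarts only the management processor; it does not power-cycle the server itself.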
A Establishing a Connection Through a Serial Port

Follow this generic procedure to establish a connection to a server using a serial port connection to a console port. If you need more information about how to establish these connections, see the hardware documentation.
1. Connect a null modem cable between the serial port on the rear panel of the server and a COM port on the host computer.
2. Launch a terminal emulation program such as Windows HyperTerminal.
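On a Linux host, a terminal program can stand in for HyperTerminal. As a sketch (the device name and baud rate are assumptions; match them to your server's console settings):

screen /dev/ttyS0 9600

This attaches the terminal to the first serial port at 9600 baud; press the Enter key afterward to reach the console prompt.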
B Server Blade Configuration Examples

This appendix contains illustrations and descriptions of fully cabled HP XC systems based on interconnect type and server blade height. The connections are color-coded, so consider viewing the PDF file online or printing this appendix on a color printer to take advantage of the color coding.
Figure B-2 InfiniBand Interconnect With Full-Height Server Blades (non-blade servers with InfiniBand PCI cards, a c-Class blade enclosure with InfiniBand mezzanine cards, administration and console ProCurve switches carrying the admin and external network VLANs, and an InfiniBand interconnect switch)
Figure B-3 InfiniBand Interconnect With Mixed Height Server Blades (non-blade servers with InfiniBand PCI cards, a c-Class blade enclosure with half- and full-height blades and InfiniBand mezzanine cards in double-wide interconnect bays 5 and 6, administration and console ProCurve switches, and an InfiniBand interconnect switch)
Glossary

A

administration branch
The half (branch) of the administration network that contains all of the general-purpose administration ports to the nodes of the HP XC system.

administration network
The private network within the HP XC system that is used for administrative operations.

availability set
An association of two individual nodes so that one node acts as the first server and the other node acts as the second server of a service. See also improved availability, availability tool.
operating system and its loader. Together, these provide a standard environment for booting an operating system and running preboot applications.

enclosure
The hardware and software infrastructure that houses HP BladeSystem servers.

extensible firmware interface
See EFI.

external network node
A node that is connected to a network external to the HP XC system.

F

fairshare
An LSF job-scheduling policy that specifies how resources should be shared by competing users.
image server
A node specifically designated to hold images that will be distributed to one or more client systems. In a standard HP XC installation, the head node acts as the image server and golden client.

improved availability
A service availability infrastructure that is built into the HP XC system software to enable an availability tool to fail over a subset of eligible services to nodes that have been designated as a second server of the service. See also availability set, availability tool.
LVS
Linux Virtual Server. Provides a centralized login capability for system users. LVS handles incoming login requests and directs them to a node with a login role.

M

Management Processor
See MP.

master host
See LSF master host.

MCS
An optional integrated system that uses chilled water technology to triple the standard cooling capacity of a single rack. This system helps take the heat out of high-density deployments of servers and blades, enabling greater densities in data centers.
onboard administrator
See OA.

P

parallel application
An application that uses a distributed programming model and can run on multiple processors. An HP XC MPI application is a parallel application. That is, all interprocessor communication within an HP XC parallel application is performed through calls to the MPI message passing library.

PXE
Preboot Execution Environment.
an HP XC system, the use of SMP technology increases the number of CPUs (amount of computational power) available per unit of space.

ssh
Secure Shell. A shell program for logging in to and executing commands on a remote computer. It can provide secure encrypted communications between two untrusted hosts over an insecure network.

standard LSF
A workload manager for any kind of batch job.