HP XC System Software Hardware Preparation Guide Version 3.2.
© Copyright 2003, 2004, 2005, 2006, 2007, 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
About This Document

This document describes how to prepare the nodes in your HP cluster platform before installing HP XC System Software. An HP XC system is integrated with several open source software components. Some open source components are used for underlying technology, and their deployment is transparent; others require user-level documentation specific to HP XC systems, and that information is included in this document when required.
Typographic Conventions

Ctrl+x: A key sequence. A sequence such as Ctrl+x indicates that you must hold down the key labeled Ctrl while you press another key or mouse button.
ENVIRONMENT VARIABLE: The name of an environment variable, for example, PATH.
[ERROR NAME]: The name of an error, usually returned in the errno variable.
Key: The name of a keyboard key. Return and Enter both refer to the same key.
Term: The defined use of an important word or phrase.
HP XC System Software User's Guide
Provides an overview of managing the HP XC user environment with modules and managing jobs with LSF, and describes how to build, run, debug, and troubleshoot serial and parallel applications on an HP XC system.
• HP Server Blade c7000 Enclosure
• HP BladeSystem c3000 Enclosure

Related Information

This section provides useful links to third-party, open source, and other related software products.

Supplementary Software Products

This section provides links to third-party and open source software products that are integrated into the HP XC System Software core technology.
• http://supermon.sourceforge.net/
Home page for Supermon, a high-speed cluster monitoring system that emphasizes low perturbation, high sampling rates, and an extensible data protocol and programming interface. Supermon works in conjunction with Nagios to provide HP XC system monitoring.
• http://www.llnl.gov/linux/pdsh/
Home page for the parallel distributed shell (pdsh), which executes commands across HP XC client nodes in parallel.
• http://www.balabit.
• http://www.linuxheadquarters.com
Web address providing documents and tutorials for the Linux user. Documents contain instructions for installing and using applications for Linux, configuring hardware, and a variety of other topics.
• http://www.gnu.org
Home page for the GNU Project. This site provides online software and information for many programs and utilities that are commonly used on GNU/Linux systems. Online information includes guides for using the bash shell, emacs, make, cc, gdb, and more.
• MySQL Cookbook, by Paul DuBois
• High Performance MySQL, by Jeremy Zawodny and Derek J. Balling (O'Reilly)
• Perl Cookbook, Second Edition, by Tom Christiansen and Nathan Torkington
• Perl in a Nutshell: A Desktop Quick Reference, by Ellen Siever, et al.

Manpages

Manpages provide online reference and command information from the command line.
1 Hardware and Network Overview

This chapter addresses the following topics:
• "Supported Cluster Platforms" (page 19)
• "Server Blade Enclosure Components" (page 22)
• "Server Blade Mezzanine Cards" (page 28)
• "Server Blade Interconnect Modules" (page 28)
• "Supported Console Management Devices" (page 29)
• "Administration Network Overview" (page 30)
• "Administration Network: Console Branch" (page 31)
• "Interconnect Network" (page 31)
• "Large-Scale Systems" (page 32)
IMPORTANT: A hardware configuration can contain a mixture of Opteron and Xeon nodes, but not Itanium nodes.
HP server blades offer an entirely modular computing system with separate computing and physical I/O modules that are connected and shared through a common chassis, called an enclosure; for more information on enclosures, see "Server Blade Enclosure Components" (page 22). Full-height Opteron server blades can take up to four dual-core CPUs, and Xeon server blades can take up to two quad-core CPUs. Table 1-2 lists the HP ProLiant hardware models supported for use in an HP XC hardware configuration.
1.2 Server Blade Enclosure Components HP server blades are contained in an enclosure, which is a chassis that houses and connects blade hardware components. An enclosure is managed by an Onboard Administrator. The HP BladeSystem c7000 and c3000 enclosures are supported under HP XC.
2. Power supply bays
3. Insight Display. For more information, see "Insight Display" (page 28).

As shown in Figure 1-3, the HP BladeSystem c7000 enclosure can house a maximum of 16 half-height or 8 full-height server blades. The c7000 enclosure can contain a maximum of 6 power supplies and 10 fans. Figure 1-3 also illustrates the numbering scheme for the server bays in which server blades are inserted. The numbering scheme differs for half-height and full-height server blades.
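The bay-numbering rule can be captured in a short script. The pairing shown here (a full-height blade in front bay N also occupies half-height bay N+8) is an assumption drawn from the c7000 numbering illustrated in Figure 1-3; treat it as a planning sketch, not an HP tool.

```shell
#!/bin/sh
# Planning sketch: c7000 half-height bays are numbered 1-16.
# Assumption (from the Figure 1-3 numbering): a full-height blade installed
# in front bay N (1-8) occupies half-height bay positions N and N+8.
full_height_bays() {
    n=$1
    echo "$n $((n + 8))"
}

full_height_bays 1   # bays 1 and 9
full_height_bays 8   # bays 8 and 16
```

A mapping like this is useful when recording which half-height bay numbers a full-height blade consumes in a cabling worksheet.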
Figure 1-4 HP BladeSystem c7000 Enclosure Bay Locations (Rear View)

The callouts in the figure enumerate the following:
1. Fan bays
2. Interconnect Bay #1
3. Interconnect Bay #2
4. Interconnect Bay #3
5. Interconnect Bay #4
6. Interconnect Bay #5
7. Interconnect Bay #6
8. Interconnect Bay #7
9. Interconnect Bay #8
10. Onboard Administrator Bay 1
11. Onboard Administrator Bay 2
12. Power Supply Exhaust Vent
13. AC Power Connections

The Onboard Administrator module ports shown are:
1. Onboard Administrator/Integrated Lights Out port
2. Serial connector
3. Enclosure Downlink port
4. Enclosure Uplink port

General Configu
Specific HP XC Setup Guidelines

The following enclosure setup guidelines are specific to HP XC:
• On every HP BladeSystem c7000 enclosure, an Ethernet interconnect module (either a switch or pass-through module) is installed in interconnect bay #1 (see callout 2 in Figure 1-4) for the administration network.
Figure 1-7 HP BladeSystem c3000 Enclosure Bay Locations (Front View)

The callouts in the figure enumerate the following:
1. Device bays
2. DVD drive (optional)
3. Insight Display. For more information, see "Insight Display" (page 28).
4. Onboard Administrator (OA)

The HP BladeSystem c3000 enclosure can house a maximum of 8 half-height or 4 full-height server blades. Additionally, the c3000 enclosure contains an integrated DVD drive, which is useful for installing the HP XC System Software.
Figure 1-9 HP BladeSystem c3000 Enclosure Bay Locations (Rear View)

The callouts in the figure enumerate the following:
1. Interconnect bay #1
2. Fans
3. Interconnect bay #2
4. Enclosure/Onboard Administrator Link Module
5. Power supplies
6. Interconnect bay #3
7. Interconnect bay #4

The link module ports shown are:
1. Enclosure Downlink port
2. Enclosure Uplink port
3. Onboard Administrator/Integrated Lights Out port

Specific HP XC Setup Guidelines

The following enclosure setup guidelines are specific to HP XC:
• On every enclosure, an Ethernet interconnect module (either a switch or pass-through module) is installed in interconnect bay #1 for the administration network.
You can access the Onboard Administrator through a graphical Web-based user interface, a command-line interface, or the Simple Object Access Protocol (SOAP) to configure and monitor the enclosure. You can add a second Onboard Administrator to provide redundancy. The Onboard Administrator requires a password. For information on setting the Onboard Administrator password, see "Setting the Onboard Administrator Password" (page 62).
See the HP BladeSystem Onboard Administrator User Guide for illustrations of interconnect bay port mapping connections on half- and full-height server blades.

1.5 Supported Console Management Devices

Table 1-3 lists the supported console management device for each hardware model within each cluster platform. The console management device provides remote access to the console of each node, enabling functions such as remote power management, remote console logging, and remote boot.
Table 1-3 Supported Console Management Devices (continued)

HP ProLiant BL460c: iLO2, system BIOS, Onboard Administrator (OA)
HP ProLiant BL480c: iLO2, system BIOS, OA
HP ProLiant BL680c G5: iLO2, system BIOS, OA

CP4000
HP ProLiant DL145: LO-100i
HP ProLiant DL145 G2: LO-100i
HP ProLiant DL145 G3: LO-100i
HP ProLiant DL165 G5: LO-100i
HP ProLiant DL365: iLO2
HP ProLiant DL365 G5: iLO2
HP ProLiant DL385: iLO2
HP ProLiant DL385 G2: iLO2
HP XC system, the administrative tools probe and discover the topology of the administration network. The administration network requires and uses Gigabit Ethernet. The administration network has at least one Root Administration Switch and can have multiple Branch Administration Switches. These switches are discussed in "Switches" (page 46).
Table 1-4 Supported Interconnects

The table lists the interconnect families supported for each cluster platform: Gigabit Ethernet; InfiniBand PCI-X; InfiniBand PCI Express, Single Data Rate and Double Data Rate (DDR); InfiniBand ConnectX Double Data Rate (DDR) (1); Myrinet® (Rev. D, E, and F); and QsNetII. The supported cluster platforms are CP3000, CP3000BL, CP4000, CP4000BL, and CP6000.

1. Mellanox ConnectX InfiniBand cards require OFED Version 1.2.5 or later.
2 Cabling Server Blades

The following topics are addressed in this chapter:
• "Blade Enclosure Overview" (page 33)
• "Network Overview" (page 33)
• "Cabling for the Administration Network" (page 37)
• "Cabling for the Console Network" (page 38)
• "Cabling for the Interconnect Network" (page 39)
• "Cabling for the External Network" (page 41)

2.1 Blade Enclosure Overview

An HP XC blade cluster is made up of one or more "Blade Enclosures" connected together as a cluster.
Chapter 3 (page 45) describes specific node and switch connections for non-blade hardware configurations. A hardware configuration with server blades does not have these specific cabling requirements; specific switch port assignments are not required. However, HP recommends a logical ordering of the cables on the switches to facilitate serviceability. Enclosures are discovered in port order, so HP recommends that you cable them in the order you want them to be numbered.
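Because enclosures are discovered in port order, a generated worksheet can help keep the cabling consistent with the intended numbering. The switch name and port-to-enclosure mapping below are illustrative assumptions, not values from this guide.

```shell
#!/bin/sh
# Illustrative cabling worksheet: connect enclosure N's Onboard Administrator
# uplink to switch port N so that discovery order matches enclosure numbering.
plan_cabling() {
    count=$1
    i=1
    while [ "$i" -le "$count" ]; do
        echo "ProCurve port $i -> Enclosure $i OA uplink"
        i=$((i + 1))
    done
}

plan_cabling 4
```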
Figure 2-1 Interconnection Diagram for a Small HP XC Cluster of Server Blades

The figure shows four enclosures. Switch 1 in each enclosure connects to a ProCurve managed Gigabit Ethernet switch, the primary and secondary Onboard Administrator external links connect to a second ProCurve managed Gigabit Ethernet switch, and NIC1 on the server blades carries the blade connections. The interconnect switch may be any managed Gigabit Ethernet switch supported by the cluster platform.
Figure 2-2 Interconnection Diagram for a Medium Sized HP XC Cluster of Server Blades

The figure shows up to 32 enclosures. Switch 1 in each enclosure connects to a Gigabit Ethernet switch, and the Onboard Administrator external links connect to a ProCurve 2650.
Figure 2-3 Interconnection Diagram for a Large HP XC Cluster of Server Blades

The figure shows a large system divided into regions (Region 1 through Region 8). Each region has its own Gigabit Ethernet switch and ProCurve 2650; the enclosure Gigabit Ethernet switches (up to Enclosure 512) and Onboard Administrator external links connect to the per-region switches, which uplink over four-port trunks ("/4") to top-level Gigabit Ethernet switches.
Figure 2-4 Administration Network Connections

The figure shows non-blade servers (MGT port, NICs, and PCI slots), a c-Class blade enclosure with full-height server blades (NIC 1 through NIC 4, mezzanine slots 1 through 3, and iLO2) and half-height server blades (NIC 1, NIC 2, mezzanine slots 1 and 2, and iLO2), an Admin ProCurve 2800 Series Switch, a Console ProCurve 2600 Series Switch, a Gigabit Ethernet interconnect switch, and enclosure interconnect bays 1 through 8. Administration network connections terminate at the Admin ProCurve 2800 Series Switch.
Figure 2-5 Console Network Connections

The figure shows the same elements as Figure 2-4 (non-blade servers, a c-Class blade enclosure with full- and half-height server blades, the Admin ProCurve 2800 Series Switch, the Console ProCurve 2600 Series Switch, the Gigabit Ethernet interconnect switch, and interconnect bays 1 through 8). Console network connections terminate at the Console ProCurve 2600 Series Switch.
Figure 2-6 Gigabit Ethernet Interconnect Connections

The figure shows the same elements as Figure 2-4. Interconnect connections terminate at the Gigabit Ethernet interconnect switch.
2.5.3 Configuring the Interconnect Network Over the Administration Network

In cases where an additional Gigabit Ethernet port or switch is not available, the HP XC System Software enables you to configure the interconnect on the administration network. When the interconnect is configured on the administration network, only a single LAN is used.
server blades do not have three NICs; therefore, half-height server blades are not included in this example. Because NIC1 and NIC3 on a full-height server blade are connected to interconnect bay 1, you must use VLANs on the switch in that bay to separate the external network from the administration network. Also, in this example, PCI Ethernet cards are used in the non-blade server nodes.
Figure 2-9 External Network Connections: Half-Height Server Blades and NIC1 and NIC2 in Use

The figure shows Ethernet PCI cards in the non-blade servers and Ethernet mezzanine cards in the server blades providing the external network connections; the other elements (enclosure, Admin ProCurve 2800 Series Switch, Console ProCurve 2600 Series Switch, Gigabit Ethernet interconnect switch, and interconnect bays) are the same as in the preceding figures.
Figure 2-10 External Network Connections: Half and Full-Height Server Blades and NIC1 in Use

The figure shows the same elements as the preceding figures, with NIC1 on both half- and full-height server blades carrying the external network connection.
3 Making Node and Switch Connections

This chapter provides information about the connections between nodes and switches that are required for an HP XC system. The following topics are addressed:
• "Cabinets" (page 45)
• "Trunking and Switch Choices" (page 45)
• "Switches" (page 46)
• "Interconnect Connections" (page 56)

IMPORTANT: The specific node and switch port connections documented in this chapter do not apply to hardware configurations containing HP server blades and enclosures.
to create a higher bandwidth connection between the Root Administration Switches and the Branch Administration Switches. For physically small hardware models (such as a 1U HP ProLiant DL145 server), a large number of servers (more than 30) can be placed in a single cabinet, and are all attached to a single branch switch. The branch switch is a ProCurve Switch 2848, and two-port trunking is used for the connection between the Branch Administration Switch and the Root Administration Switch.
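As a rough illustration of why trunking matters here, the arithmetic below estimates the oversubscription of a branch uplink. The node count and trunk width are taken from the example above (more than 30 Gigabit Ethernet nodes behind a two-port trunk); this is a back-of-the-envelope sketch, not an HP sizing rule.

```shell
#!/bin/sh
# Back-of-the-envelope oversubscription estimate for a branch switch uplink:
# each node link and each trunk port is Gigabit Ethernet (1 Gb/s).
oversubscription() {
    nodes=$1        # nodes attached to the branch switch
    trunk_ports=$2  # ports in the trunk to the root switch
    echo "$((nodes / trunk_ports)):1"
}

oversubscription 30 2   # -> 15:1
```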
Root Administration Switch: This switch connects directly to Gigabit Ethernet ports of the head node, the Root Console Switch, Branch Administration Switches, and other nodes in the utility cabinet.
Root Console Switch: This switch connects to the Root Administration Switch, Branch Console Switches, and the management console ports of nodes in the utility cabinet.
Branch Administration Switch
Branch Console Switch
Figure 3-2 Node and Switch Connections on a Typical System

The figure shows specialized role nodes, the head node, and compute nodes connected to the administration switches (Root Administration Switch and Branch Switches) and the console switches (Root Console Switch and Branch Switches).

Figure 3-3 shows a graphical representation of the logical layout of the switches and nodes in a large-scale system with a Super Root Switch. The head node connects to Port 42 on the Root Administration Switch in Region 1.
3.3.3.1 Switch Connections and HP Workstations

HP model xw workstations do not have console ports. Only the Root Administration Switch supports mixing nodes without console management ports with nodes that have console management ports (that is, all other supported server models). HP workstations connected to the Root Administration Switch must be connected to the next lower-numbered contiguous set of ports immediately below the nodes that have console management ports.
3.3.5 Root Administration Switch

The Root Administration Switch for the administration network of an HP XC system can be either a ProCurve 2848 switch or a ProCurve 2824 switch for small configurations. If you are using a ProCurve 2848 switch as the switch at the center of the administration network, use Figure 3-5 to make the appropriate port connections.
Figure 3-6 ProCurve 2824 Root Administration Switch

The callouts in the figure enumerate the following:
1. Uplinks from branches start at port 1 (ascending).
2. 10/100/1000 Base-TX RJ-45 ports.
3.3.6.1 ProCurve 2650 Switch

You can use a ProCurve 2650 switch as a Root Console Switch for the console branch of the administration network. The console branch functions at a lower speed (10/100 Mbps) than the rest of the administration network. The ProCurve 2650 switch is shown in Figure 3-7. In the figure, white ports should not have connections, black ports can have connections, and ports with numbered callouts are used for specific purposes, described after the figure.
Figure 3-8 ProCurve 2610-48 Root Console Switch

The callouts in the figure enumerate the following:
1. Port 42 must be reserved for an optional connection to the console port on the head node.
2. Port 49 is reserved.
3. Port 50 is the Gigabit Ethernet link to the Root Administration Switch.

Allocate the ports on this switch for consistency with the administration switches, as follows:
• Ports 1–10, 13–22, 25–34, 37–41 — Starting with port 1, the ports are used for links from Branch Console Switches.
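The allocation above can be restated as a small lookup, which is handy when labeling cables. The port ranges come from the list above; the script itself is a planning aid, not an HP tool.

```shell
#!/bin/sh
# Role lookup for ProCurve 2610-48 root console switch ports, following
# the allocation described above.
port_role() {
    case $1 in
        42) echo "head node console (reserved)" ;;
        49) echo "reserved" ;;
        50) echo "uplink to Root Administration Switch" ;;
        [1-9]|10|1[3-9]|2[0-2]|2[5-9]|3[0-4]|3[7-9]|40|41)
            echo "branch console switch link" ;;
        *)  echo "unassigned" ;;
    esac
}

port_role 42   # -> head node console (reserved)
port_role 11   # -> unassigned
```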
— Starting with port 1, the ports are used for links from Branch Console Switches. Trunking is not used.
— Starting with port 21 and in descending order, ports are assigned for use by individual nodes in the utility cabinet. Nodes in the utility cabinet are connected directly to the Root Administration Switch.

NOTE: There must be at least one idle port in this set to indicate the dividing line between branch links and root node administration ports.

• Ports 11, 12, 23, and 24 are unused.
Figure 3-11 ProCurve 2848 Branch Administration Switch

The figure shows connections from the switch ports to node administration ports.
Figure 3-13 ProCurve 2650 Branch Console Switch

The figure shows connections from the switch ports to node console ports.
The method for wiring the administration network and interconnect networks allows expansion of the system within the system's initial interconnect fabric without recabling of any existing nodes. If additional switch chassis or ports are added to the system as part of the expansion, some recabling may be necessary.

3.4.1 QsNet Interconnect Connections

For the QsNetII interconnect developed by Quadrics, it is important that nodes are connected to the Quadrics switch ports in a specific order.
If you do not specify the --ic=AdminNet option, the discover command attempts to locate the highest-speed interconnect on the system; the default is a Gigabit Ethernet network that is separate from the administration network.
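A hedged sketch of the corresponding invocation follows. The only option shown that this guide confirms is --ic=AdminNet; any other arguments to the discover command depend on your HP XC release and are deliberately omitted.

```shell
# Run discovery with the interconnect configured on the administration
# network (sketch only; consult your HP XC release documentation for the
# full discover command line).
discover --ic=AdminNet
```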
4 Preparing Individual Nodes

This chapter describes how to prepare individual nodes in the HP XC hardware configuration.
Table 4-1 Firmware Dependencies (continued)

HP ProLiant BL680c G5: iLO2, system BIOS, Onboard Administrator (OA)

CP4000
HP ProLiant DL145: LO-100i, system BIOS
HP ProLiant DL145 G2: LO-100i, system BIOS
HP ProLiant DL145 G3: LO-100i, system BIOS
HP ProLiant DL165 G5: LO-100i, system BIOS
HP ProLiant DL365: iLO2, system BIOS
HP ProLiant DL365 G5: iLO2, system BIOS
HP ProLiant DL385: iLO2, system BIOS
HP ProLiant DL385 G2: iLO2, system BIOS
HP ProLiant DL385 G
Table 4-1 Firmware Dependencies (continued)

Myrinet: Firmware version
Myrinet interface card: Interface card version
QsNetII: Firmware version
InfiniBand: Firmware version

4.2 Ethernet Port Connections on the Head Node

Table 4-2 lists the Ethernet port connections on the head node based on the type of interconnect in use. Use this information to determine the appropriate port for the external network connection on the head node.
6. that it is accessible to Nagios during system operation. For more information on Nagios, see the HP XC System Software Administration Guide.
7. Review the documentation that came with the hardware and have it available, if needed.

If your hardware configuration contains server blades and enclosures, proceed to "Setting the Onboard Administrator Password" (page 62).
4.5 Preparing the Hardware for CP3000 (Intel Xeon with EM64T) Systems

Follow the procedures in this section to prepare each node before installing and configuring the HP XC System Software.
2. Turn on power to the node. Watch the screen carefully during the power-on self-test, and press the F10 key when prompted to access the BIOS Setup Utility. The Lights-Out 100i (LO-100i) console management device is configured through the BIOS Setup Utility. The BIOS Setup Utility displays the following information about the node:

   BIOS ROM ID:
   BIOS Version:
   BIOS Build Date:

   Record this information for future reference.
3. For each node, make the following BIOS settings from the Main window.
Table 4-3 BIOS Settings for HP ProLiant DL140 G2 Nodes (continued)

Wake On LAN: Disabled

Boot menu: Set the following boot order on all nodes except the head node:
1. CD-ROM
2. Removable Devices
3. PXE MBA V7.7.2 Slot 0200
4. Hard Drive
5. ! PXE MBA V7.7.2 Slot 0300 (! means disabled)

Set the following boot order on the head node:
1. CD-ROM
2. Removable Devices
3. Hard Drive
4. PXE MBA V7.7.2 Slot 0200
5. PXE MBA V7.7.
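The two boot orders in the table differ only in where the hard drive falls relative to the PXE device. That rule can be sketched as follows; "PXE NIC1" is a simplified stand-in for the PXE MBA entries named in the table.

```shell
#!/bin/sh
# Boot-order rule from the table above: nodes other than the head node must
# try PXE before the hard drive (so they can be imaged over the network);
# the head node boots its hard drive before PXE. "PXE NIC1" is a simplified
# placeholder for the PXE MBA slot entries in the table.
boot_order() {
    if [ "$1" = "head" ]; then
        printf '%s\n' "CD-ROM" "Removable Devices" "Hard Drive" "PXE NIC1"
    else
        printf '%s\n' "CD-ROM" "Removable Devices" "PXE NIC1" "Hard Drive"
    fi
}

boot_order head
```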
Table 4-4 BIOS Settings for HP ProLiant DL140 G3 Nodes (continued)

Boot menu: Set the following boot order on all nodes except the head node:
1. CD-ROM
2. Removable Devices
3. Embedded NIC1
4. Hard Drive
5. Embedded NIC2

Power menu:
Embedded NIC1 PXE: Enabled
Embedded NIC2 PXE: Disabled
Resume On Modem Ring: Off
Wake On LAN: Disabled

4. From the Main window, select Exit→Save Changes and Exit to exit the utility.
5. Repeat this procedure for each HP ProLiant DL140 G3 node in the HP XC system.
The callouts in the figure enumerate the following:
1. This port is used for the connection to the Console Switch. On the back of the node, this port is marked with LO100i.
2. This port is used for the connection to the Administration Switch (branch or root). On the back of the node, this port is marked with the number 1 (NIC1).
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection.
Table 4-5 BIOS Settings for HP ProLiant DL160 G5 Nodes (continued)

2nd Boot Device: If this node is the head node, set this value to Embedded NIC1; otherwise, set this value to Hard Drive.
Embedded NIC1 PXE: Enabled
Embedded NIC2 PXE: Disabled

4. From the Main window, select Exit→Save Changes and Exit to exit the utility.
5. Repeat this procedure for each HP ProLiant DL160 G5 node in the HP XC system.
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based Setup Utility (RBSU).

Table 4-6 iLO Settings for HP ProLiant DL360 G4 Nodes

User menu, Add option: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user.
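Because the same iLO user name and password are set on every node, it is worth validating the chosen password against the 8-character default minimum before walking the racks. This is a local sanity check only; iLO's minimum length is configurable, and 8 is just the default noted above.

```shell
#!/bin/sh
# Check a proposed iLO password against the default 8-character minimum.
check_ilo_password() {
    pw=$1
    if [ "${#pw}" -ge 8 ]; then
        echo "ok"
    else
        echo "too short: ${#pw} characters (minimum 8)"
    fi
}

check_ilo_password "xc-cluster-ilo"   # -> ok
```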
Configuring Smart Arrays On the HP ProLiant DL360 G4 with smart array cards, you must add the disks to the smart array before attempting to image the node. To do so, watch the screen carefully during the power-on self-test phase of the node, and press the F8 key when prompted to configure the disks into the smart array. Specific instructions are outside the scope of the HP XC documentation. See the documentation that came with the HP ProLiant server for more information. 4.5.
Table 4-8 iLO Settings for HP ProLiant DL360 G5 Nodes

User menu, Add option: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user.
Configuring Smart Arrays On the HP ProLiant DL360 G5 nodes with smart array cards, you must add the disks to the smart array before attempting to image the node. To do so, watch the screen carefully during the power-on self-test phase of the node, and press the F8 key when prompted to configure the disks into the smart array. Specific instructions are outside the scope of the HP XC documentation. See the documentation that came with the HP ProLiant server for more information. 4.5.
Figure 4-6 HP ProLiant DL380 G5 Server Rear View

The callouts in the figure enumerate the following:
1. This port is used for the connection to the external network.
2. This port is used for the connection to the Administration Switch (branch or root).
3. The iLO Ethernet port is used for the connection to the Console Switch.

Setup Procedure

Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL380 node in the HP XC system:
1. Make the following settings from the Main menu. The BIOS settings differ depending upon the hardware model generation:
   • BIOS settings for HP ProLiant DL380 G4 nodes are listed in Table 4-11.
   • BIOS settings for HP ProLiant DL380 G5 nodes are listed in Table 4-12.
Table 4-12 BIOS Settings for HP ProLiant DL380 G5 Nodes (continued)

Advanced Options: Processor Hyper_threading: Disable
BIOS Serial Console & EMS:
BIOS Serial Console Port: COM1; IRQ4; IO: 3F8h - 3FFh
BIOS Serial Console Baud Rate: 115200
EMS Console: Disabled
BIOS Interface Mode: Command-Line

Press the Esc key to return to the main menu.
2. Press the Esc key to exit the RBSU.
3. Press the F10 key to confirm your choice and restart the boot sequence.
Figure 4-7 HP ProLiant DL580 G4 Server Rear View

The callouts in the figure enumerate the following:
1. NIC1 is used for the connection to the Administration Switch (branch or root).
2. NIC2 is used for the connection to the external network.
3. The iLO Ethernet port is used for the connection to the Console Switch.

Setup Procedure

Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL580 G4 node in the HP XC system:
Table 4-13 iLO Settings for HP ProLiant DL580 G4 Nodes

Administration menu, User submenu, New option: Create a common iLO user name and password for every node in the hardware configuration. The password must have a minimum of 8 characters by default, but this value is configurable. The user Administrator is predefined by default, but you must create your own user name and password. For security purposes, HP recommends that you delete the Administrator user.
1. Make the following settings from the Main menu. The BIOS settings for HP ProLiant DL580 G4 nodes are listed in Table 4-14.

Table 4-14 BIOS Settings for HP ProLiant DL580 G4 Nodes

Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
1. CD-ROM
2. NIC1
3.
Figure 4-8 HP ProLiant DL580 G5 Server Rear View

The callouts in the figure enumerate the following:
1. The iLO Ethernet port is used for the connection to the Console Switch.
2. NIC1 is used for the connection to the Administration Switch (branch or root).
3. NIC2 is used for the connection to the external network.

Setup Procedure

Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL580 G5 node in the HP XC system:
5. Watch the screen carefully, and press the F9 key when prompted to access the ROM-Based Setup Utility (RBSU). Perform the following procedure from the RBSU for each HP ProLiant DL580 G5 node in the HP XC system: 1. Make the following settings from the Main menu. The BIOS settings for HP ProLiant DL580 G5 nodes are listed in Table 4-14.
Figure 4-9 HP xw8200 and xw8400 Workstation Rear View

The callout in the figure enumerates the following:
1. This port is used for the connection to the administration network.

Setup Procedure

Use the Setup Utility to configure the appropriate settings for an HP XC system. Perform the following procedure for each workstation in the hardware configuration. Change only the values that are described in this procedure; do not change any other factory-set values unless you are instructed to do so:
Table 4-18 BIOS Settings for xw8400 Workstations

   Menu Name   Submenu Name      Option Name      Set to This Value
   Storage     Storage Options   SATA Emulation   Separate IDE Controller
                                                  After you make this setting, make sure the Primary SATA
                                                  Controller and Secondary SATA Controller settings are
                                                  set to Enabled.
               Boot Order                         Set the following boot order on all nodes except the
                                                  head node:
                                                  1. Optical Drive
                                                  2. USB device
                                                  3. Broadcom Ethernet controller
                                                  4. Hard Drive
                                                  5.
Perform the following procedure for each workstation in the hardware configuration. Change only the values that are described in this procedure; do not change any other factory-set values unless you are instructed to do so:

1. Establish a connection to the console by connecting a monitor and keyboard to the node.
2. Turn on power to the workstation.
3. When the node is powering on, press the F10 key to access the Setup Utility.
4. When prompted, press any key to continue.
4.6 Preparing the Hardware for CP3000BL Systems

Perform the following tasks on each server blade in the hardware configuration after the head node is installed and the switches are discovered:
• Set the boot order
• Create an iLO2 user name and password
• Set the power regulator
• Configure smart array devices

Use the Onboard Administrator, the iLO2 web interface, and virtual media to make the appropriate settings on HP ProLiant Server Blades.
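Where IPMI over LAN is enabled on the iLO2 devices, the boot-order task can in principle be scripted instead of being set through the web interface. The following is a hedged sketch, not the documented procedure: the hostnames and user name are hypothetical, and it assumes ipmitool is installed and has IPMI access to the iLO2s. The leading echo makes it a dry run; remove it to execute.

```shell
# Dry-run sketch: set the first boot device to PXE (network boot) on a list
# of iLO2 console ports via IPMI over LAN. Hostnames here are hypothetical.
RUN=echo                      # set RUN="" to actually execute the commands
ILO_USER=xcuser               # the common user name created for every node
for ilo in cp-node1 cp-node2; do
    $RUN ipmitool -I lanplus -H "$ilo" -U "$ILO_USER" chassis bootdev pxe
done
```

With `RUN=echo`, the loop only prints the ipmitool commands it would run, which is a convenient way to review them before touching any hardware.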
f. The Onboard Administrator automatically creates user accounts for itself (prefixed with the letters OA) to provide single sign-on capabilities. Do not remove these accounts.

5. Enable telnet access:
   a. In the left frame, click Access.
   b. Click the control to enable Telnet Access.
   c. Click the Apply button to save the settings.

6. Click the Virtual Devices tab and make the following settings:
10. Close the iLO2 utility Web page. 11. Repeat this procedure from every active Onboard Administrator and make the same settings for each server blade in each enclosure. After preparing all the nodes in all the enclosures, return to the HP XC System Software Installation Guide to discover all the nodes and enclosures in the HP XC system.
4.7 Preparing the Hardware for CP4000 (AMD Opteron) Systems Follow the procedures in this section to prepare each node before installing and configuring the HP XC System Software.
3. For each node, make the BIOS settings listed in Table 4-21.
6. Ensure that all machines are requesting IP addresses through the Dynamic Host Configuration Protocol (DHCP). Do the following to determine if DHCP is enabled:
   a. At the ProLiant> prompt, enter the following:

      ProLiant> net

   b. At the INET> prompt, enter the following:

      INET> state
      iface   ipsrc   IP addr   subnet      gateway
      1-et1   dhcp    0.0.0.0   255.0.0.0   0.0.0.0
      current tick count 2433
      ping delay time: 280 ms.
      ping host: 0.0.0.0
      Task wakeups: netmain: 93  nettick: 4814  telnetsrv: 401
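When the `state` output has been captured to a file, the DHCP check can also be done mechanically. This is a hedged sketch that assumes the output follows the column format shown above (interface name first, then the ipsrc column); the file path is illustrative.

```shell
# Verify that every interface row in a captured `state` listing uses DHCP.
# The sample file below mirrors the INET> state output format shown above.
cat > /tmp/inet_state.txt <<'EOF'
iface   ipsrc   IP addr   subnet      gateway
1-et1   dhcp    0.0.0.0   255.0.0.0   0.0.0.0
EOF
# Skip the header row; flag any interface whose ipsrc column is not "dhcp".
awk 'NR > 1 && NF >= 2 { if ($2 != "dhcp") { bad = 1; print $1, "is not using DHCP" } }
     END { print (bad ? "DHCP check failed" : "all interfaces use DHCP") }' /tmp/inet_state.txt
# → all interfaces use DHCP
```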
1. This port is used for the connection to the Administration Switch (branch or root). On the rear of the node, this port is marked with the number 1 (NIC1).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection. On the rear of the node, this port is marked with the number 2 (NIC2).
3. The port labeled LO100i is used for the connection to the Console Switch.
Table 4-22 BIOS Settings for HP ProLiant DL145 G2 Nodes (continued)

   Menu Name                           Option Name            Set to This Value
   Hammer Configuration                Disable Jitter bit     Enabled
                                       Page Directory Cache   Disabled
   PCI Configuration/Ethernet Device
   On Board (for Ethernet 1 and 2)     Option ROM Scan        Enabled
                                       Latency timer          40h
   IPMI/LAN Setting                    IPMI                   Enabled
   I/O Device Configuration            Serial Port            BMC COM Port
                                       SIO COM Port           Disabled
                                       PS/2 Mouse             Enabled
   Console Redirection                 Console Redirection    Enabled
                                       EMS Console
Table 4-23 BIOS Settings for HP ProLiant DL145 G3 Nodes

   Menu Name   Submenu Name                Option Name                     Set to This Value
   Main        Boot Options                NumLock                         Off
   Advanced    I/O Device Configuration    Serial Port Mode                BMC
                                           Serial port A:                  Enabled
                                                                           Base I/O address: 3F8
                                                                           Interrupt: IRQ 4
               Memory Controller Options   DRAM Bank Interleave            AUTO
                                           Node Interleave                 Disabled
                                           32-Bit Memory Hole              Enabled
               Serial ATA                  Embedded SATA                   Enabled
                                           SATA Mode                       SATA
                                           Enabled/Disable Int13 support   Enabled
               Console Redirection         Option ROM Scan
4. Select Exit→Saving Changes to exit the BIOS Setup Utility.
5. Repeat this procedure for each HP ProLiant DL145 G2 and DL145 G3 node in the hardware configuration.

4.7.3 Preparing HP ProLiant DL165 G5 Nodes

Use the BIOS Setup Utility on HP ProLiant DL165 G5 servers to configure the appropriate settings for an HP XC system. For this hardware model, you cannot set or modify the default console port password through the BIOS Setup Utility the way you can for other hardware models.
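Because the console port password cannot be set through the BIOS Setup Utility on this model, one possible alternative is the BMC's IPMI interface. This is a hedged sketch, not the documented XC procedure: the user ID, the placeholder password, and the availability of ipmitool on the node are all assumptions. The leading echo makes it a dry run; remove it to execute.

```shell
# Dry-run sketch: set the BMC (console port) password for IPMI user ID 2.
# Set RUN="" to execute on a node with ipmitool and local BMC access.
RUN=echo
NEW_CONSOLE_PASSWORD='changeme'   # hypothetical placeholder value
$RUN ipmitool user set password 2 "$NEW_CONSOLE_PASSWORD"
$RUN ipmitool user list 1         # inspect the user table on channel 1
```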
Table 4-24 BIOS Settings for HP ProLiant DL165 G5 Nodes

   Menu Name   Submenu Name                  Option Name                   Set to This Value
   Main        Boot Settings Configuration   Bootup Num-Lock               Disabled
   Advanced    I/O Device Configuration      Embedded Serial Port          Base Address: 3F8; Interrupt: IRQ 4
               S-ATA Configuration           S-ATA Mode                    S-ATA
                                             S-ATA INT13 support           Enabled
               Remote Access Configuration   Base Address, IRQ             [3F8h, 4]
                                             Serial Port Mode              115200 8,n,1
                                             Redirection after BIOS POST   Always
                                             Terminal Type                 ANSI
               IPMI Configuration            LAN Configuration: Share NI
Figure 4-15 HP ProLiant DL365 Server Rear View

The callouts in the figure enumerate the following:
1. This port is the Ethernet connection to the Console Switch. On the back of the node, this port is marked with the acronym iLO.
2. This port is the connection to the Administration Switch (branch or root). On the back of the node, this port is marked with the number 1.
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection.
1. Make the RBSU settings for the HP ProLiant DL365 nodes, as indicated in Table 4-26. Use the navigation aids shown at the bottom of the screen to move through the menus and make selections.
Specific instructions are outside the scope of the HP XC documentation. See the documentation that came with the HP ProLiant server for more information. 4.7.
Table 4-27 iLO Settings for HP ProLiant DL365 G5 Nodes

   Menu Name   Submenu Name   Option Name   Set to This Value
   User                       Add           Create a common iLO user name and password for every node
                                            in the hardware configuration. The password must have a
                                            minimum of 8 characters by default, but this value is
                                            configurable. The user Administrator is predefined by
                                            default, but you must create your own user name and
                                            password. For security purposes, HP recommends that you
                                            delete the Administrator user.
1. Make the RBSU settings for the HP ProLiant DL365 G5 nodes, as indicated in Table 4-28. Use the navigation aids shown at the bottom of the screen to move through the menus and make selections.
Specific instructions are outside the scope of the HP XC documentation. See the documentation that came with the HP ProLiant server for more information. 4.7.
Figure 4-18 HP ProLiant DL385 G2 Server Rear View

The callouts in the figure enumerate the following:
1. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection. Otherwise, it is used for an external connection. On the back of the node, this port is marked with the number 2.
2. This port is the connection to the Administration Switch (branch or root). On the back of the node, this port is marked with the number 1.
Table 4-30 iLO Settings for HP ProLiant DL385 G2 Nodes

   Menu Name   Submenu Name   Option Name   Set to This Value
   User                       Add           Create a common iLO user name and password for every node
                                            in the hardware configuration. The password must have a
                                            minimum of 8 characters by default, but this value is
                                            configurable. The user Administrator is predefined by
                                            default, but you must create your own user name and
                                            password. For security purposes, HP recommends that you
                                            delete the Administrator user.
1. Make the RBSU settings accordingly. The settings differ depending on the hardware model generation:
   • Table 4-31 provides the RBSU settings for the HP ProLiant DL385 nodes.
   • Table 4-32 provides the RBSU settings for the HP ProLiant DL385 G2 nodes.
   Use the navigation aids shown at the bottom of the screen to move through the menus and make selections.
Table 4-32 RBSU Settings for HP ProLiant DL385 G2 Nodes (continued)

   Menu Name        Option Name            Set to This Value
   System Options   Embedded Serial Port   COM2
                    Virtual Serial Port    COM1

   Set the following boot order on all nodes except the head node; CD-ROM does not have to be first in the list, but it must be listed before the hard disk:
   1. CD-ROM
   2. Floppy Drive (A:)
   3. USB DriveKey (C:)
   4. PCI Embedded HP NC373i Multifunction Gigabit Adapter
   5.
Figure 4-19 HP ProLiant DL385 G5 Server Rear View

The callouts in the figure enumerate the following:
1. This port is the Ethernet connection to the Console Switch. On the back of the node, this port is marked with the acronym iLO.
2. This port is the connection to the Administration Switch (branch or root). On the back of the node, this port is marked with the number 1.
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port is used for the interconnect connection.
4.7.8 Preparing HP ProLiant DL585 and DL585 G2 Nodes

On HP ProLiant DL585 and DL585 G2 servers, use the following tools to configure the appropriate settings for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)

HP ProLiant DL585 and DL585 G2 servers use the iLO utility; thus, certain settings cannot be made until the iLO has an IP address.
1. NIC1 is the connection to the Administration Switch (branch or root).
2. If a Gigabit Ethernet (GigE) interconnect is configured, this port (labeled NIC2) is used for the interconnect connection. Otherwise, it is used for an external connection.
3. The iLO2 Ethernet port is the connection to the Console Switch.

Setup Procedure

Perform the following procedure from the iLO Setup Utility for each HP ProLiant DL585 and DL585 G2 node in the hardware configuration:
Perform the following procedure for each HP ProLiant DL585 and DL585 G2 node in the hardware configuration:
1. Make the RBSU settings accordingly for each node. The settings differ depending on the hardware model generation:
   • Table 4-36 provides the RBSU settings for the HP ProLiant DL585 nodes.
   • Table 4-37 provides the RBSU settings for the HP ProLiant DL585 G2 nodes.
Table 4-37 RBSU Settings for HP ProLiant DL585 G2 Nodes (continued)

   Menu Name                   Option Name   Set to This Value
   Standard Boot Order (IPL)                 Set the following boot order on all nodes except the head
                                             node; the CD-ROM must be listed before the hard drive:
                                             • IPL:1 CD-ROM
                                             • IPL:2 Floppy Drive (A:)
                                             • IPL:3 USB Drive Key (C:)
                                             • IPL:4 PCI Embedded HP NC373i Multifunction Gigabit Adapter
                                             • IPL:5 Hard Drive C:

                                             Set the following boot order on the head node:
                                             • IPL:1 CD-ROM
                                             • IPL:2 Floppy Drive (A:)
                                             • IPL:3 USB Drive Key
Figure 4-22 HP ProLiant DL585 G5 Server Rear View

The callouts in the figure enumerate the following:
1. The iLO2 Ethernet port is the connection to the Console Switch.
2. NIC1 is the connection to the Administration Switch (branch or root).
3. If a Gigabit Ethernet (GigE) interconnect is configured, this port (labeled NIC2) is used for the interconnect connection. Otherwise, it is used for an external connection.
1. Make the RBSU settings for each node; Table 4-39 lists the RBSU settings for the DL585 G5 nodes.
4.7.10 Preparing HP ProLiant DL785 G5 Nodes

On HP ProLiant DL785 G5 servers, use the following tools to configure the appropriate settings for an HP XC system:
• Integrated Lights Out (iLO) Setup Utility
• ROM-Based Setup Utility (RBSU)

HP ProLiant DL785 G5 servers use the iLO utility; thus, certain settings cannot be made until the iLO has an IP address. The HP XC System Software Installation Guide provides instructions for using a browser to connect to the iLO to enable telnet access.
Table 4-40 iLO Settings for HP ProLiant DL785 G5 Nodes

   Menu Name   Submenu Name   Option Name   Set to This Value
   User                       Add           Create a common iLO user name and password for every node
                                            in the hardware configuration. The password must have a
                                            minimum of 8 characters by default, but this value is
                                            configurable. The user Administrator is predefined by
                                            default, but you must create your own user name and
                                            password. For security purposes, HP recommends that you
                                            delete the Administrator user.
1. Make the RBSU settings for each node; Table 4-41 lists the RBSU settings for the DL785 G5 nodes.
4.7.11 Preparing HP xw9300 and xw9400 Workstations

HP xw9300 and xw9400 workstations are typically used when the HP Scalable Visual Array (SVA) software is installed and configured to interoperate on an HP XC system. Configuring an xw9300 or xw9400 workstation as the HP XC head node is supported.

Figure 4-24 shows a rear view of the xw9300 workstation and the appropriate port connections for an HP XC system.

Figure 4-24 xw9300 Workstation Rear View

The callout in the figure enumerates the following:
1.
Setup Procedure

Use the Setup Utility to configure the appropriate settings for an HP XC system. Perform the following procedure for each workstation in the hardware configuration. Change only the values described in this procedure; do not change any other factory-set values unless you are instructed to do so:

1. Use the instructions in the accompanying hardware documentation to connect a monitor, mouse, and keyboard to the node and establish a connection to the console.
10. Follow the software installation instructions in the HP XC System Software Installation Guide to install the HP XC System Software.
4.8 Preparing the Hardware for CP4000BL Systems

Perform the following tasks on each server blade in the hardware configuration after the head node is installed and the switches are discovered:
• Set the boot order
• Create an iLO2 user name and password
• Set the power regulator
• Configure smart array devices

Use the Onboard Administrator, the iLO2 web interface, and virtual media to make the appropriate settings on HP ProLiant Server Blades.
f. The Onboard Administrator automatically creates user accounts for itself (prefixed with the letters OA) to provide single sign-on capabilities. Do not remove these accounts.

5. Enable telnet access:
   a. In the left frame, click Access.
   b. Click the control to enable Telnet Access.
   c. Click the Apply button to save the settings.

6. Click the Virtual Devices tab and make the following settings:
9. Perform this step for HP ProLiant BL685c nodes; proceed to the next step for all other hardware models. On an HP ProLiant BL685c node, watch the screen carefully during the power-on self-test, and press the F9 key to access the ROM-Based Setup Utility (RBSU) to enable HPET as shown in Table 4-45.

Table 4-45 Additional BIOS Setting for HP ProLiant BL685c Nodes

   Menu Name   Option Name         Set To This Value
   Advanced    Linux x86_64 HPET   Enabled

Press the F10 key to exit the RBSU.
4.9 Preparing the Hardware for CP6000 (Intel Itanium) Systems Follow the procedures in this section to prepare HP Integrity servers before installing and configuring the HP XC System Software.
Figure 4-26 HP Integrity rx1620 Server Rear View

The callouts in the figure enumerate the following:
1. The port labeled LAN 10/100 is the MP connection to the ProCurve Console Switch.
2. The port labeled LAN Gb A connects to the Administration Switch (branch or root).
3. The port labeled LAN Gb B is used for an external connection.
e. Log in to the MP using the default user name and password shown on the screen. The MP Main Menu appears:

      MP MAIN MENU:

           CO: Console
          VFP: Virtual Front Panel
           CM: Command Menu
        SMCLP: Server Management Command Line Protocol
           CL: Console Log
           SL: Show Event Logs
           HE: Main Help Menu
            X: Exit Connection

4. Enter SL to show event logs. Then, enter C to clear all log files and Y to confirm.
5. Enter CM to display the Command Menu.
6. Perform the following steps to ensure that the IPMI over LAN option is set.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following from the Edit OS Boot Order option:
    a. Use the navigation instructions shown on the screen to move the Netboot entry you just defined to the top of the boot order.
    b. If prompted, save the setting to NVRAM.
    c. Enter x to return to the previous menu.
14. Perform this step on all nodes, including the head node.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If there is no response, press the MP reset pin on the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen. The MP Main Menu appears:

      MP MAIN MENU:

           CO: Console
          VFP: Virtual Front Panel
           CM: Command Menu
        SMCLP: Server Management Command Line Protocol
           CL: Console Log
           SL: Show Event Logs
           HE: Main Help Menu
            X: Exit Connection
13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following:
    a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
    b. If prompted, press the Enter key to select the position.
    c. Enter x to return to the Boot Configuration menu.
14. Perform this step on all nodes, including the head node. Select the Console Configuration option, and do the following:
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP. If the MP does not respond, press the MP reset pin on the back of the MP and try again.
e. Log in to the MP using the default user name and password shown on the screen.
13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following:
    a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
    b. If prompted, press the Enter key to select the position.
    c. Enter x to return to the Boot Configuration menu.
14. Perform this step on all nodes, including the head node.
a. Connect a three-way DB9-25 cable to the MP DB-9 port on the back of the HP Integrity rx4640 server. This port is the first of the four DB9 ports at the bottom left of the server; it is labeled MP Local.
b. Connect the CONSOLE connector to a null modem cable, and connect the null modem cable to the PC COM1 port.
c. Use a terminal emulator, such as HyperTerminal, to open a terminal window.
d. Press the Enter key to access the MP.
e. Press the Enter key for no boot options.
f. When prompted, save the entry to NVRAM.

For more information about how to work with these menus, see the documentation that came with the HP Integrity server.

13. Perform this step on all nodes except the head node. From the Boot Configuration menu, select the Edit OS Boot Order option. Do the following:
    a. Use the navigation instructions on the screen to move the Netboot entry you just defined to the top of the boot order.
Figure 4-31 HP Integrity rx8620 Core IO Board Connections
(The figure shows the MP LAN port, which connects to the console network through the Management Processor; the SYS LAN port, which connects to the administration network; the MP serial port; and the MP reset pin.)

Preparing Individual Nodes

Follow this procedure for each HP Integrity rx8620 node in the hardware configuration:
1. Ensure that the power cord is connected but that the processor is not turned on.
2. Connect a personal computer to the Management Processor (MP):
NOTE: Most of the MP commands of the HP Integrity rx8620 are similar to the HP Integrity rx2600 MP commands, but there are some differences. The two MPs for the HP Integrity rx8620 operate in a master/slave relationship. Only the master MP, which is on Core IO board 0, is assigned an IP address. Core IO board 0 is always the top Core IO board. The slave MP is used only if the master MP fails. 5.
NOTE: If the console stops accepting input from the keyboard, the following message is displayed: [Read-only - use ^Ecf to attach to console.] In that situation, press and hold down the Ctrl key and type the letter e. Release the Ctrl key, and then type the letters c and f to reconnect to the console. 12. Do the following from the EFI Boot Manager screen, which is displayed during the power-up of the node.
14. From the Boot Option Maintenance menu, add a boot option for the EFI Shell (if one does not exist). Follow the instructions in step 12a.
15. Exit the Boot Option Maintenance menu.
16. Choose the EFI Shell boot option and boot to the EFI shell. Enter the following EFI shell commands:

       EFI> acpiconfig enable softpowerdown
       EFI> acpiconfig single-pci-domain
       EFI> reset

    The reset command reboots the machine.
4.10 Preparing the Hardware for CP6000BL Systems

Use the management processor (MP) to perform the following tasks on each server blade in the hardware configuration after the head node is installed and the switches are discovered:
• Clear all event logs
• Enable IPMI over LAN
• Create an MP login ID and password that matches all other devices
• Add a boot entry for the string DVD boot on the head node, and add a boot entry for the string Netboot on all other nodes.
iLOs, and OAs must use the same user name and password. Do not use any special characters as part of the password.

10. Turn on power to the node:

       MP:CM> pc -on -nc

11. Press Ctrl-B to return to the MP Main Menu.
12. Enter CO to connect to the console. It takes a few minutes for the live console to display.
13. Add a boot entry and set the OS boot order. Your actions for the head node differ from those for all other nodes.
16. Use the RB command to reset the BMC.
17. Press Ctrl-B to exit the console mode, and press the x key to exit.

After preparing all the nodes in all the enclosures, return to the HP XC System Software Installation Guide to discover all the nodes and enclosures in the HP XC system.
5 Troubleshooting

This chapter describes known problems with respect to preparing hardware devices for use with the HP XC System Software, and their solutions.

5.1 iLO2 Devices

5.1.1 iLO2 Devices Can Become Unresponsive

There is a known problem with iLO2 console management devices that causes an iLO2 device to become unresponsive to certain tools, including the HP XC power daemon and the iLO2 Web interface. When this happens, the power daemon generates CONNECT_ERROR messages.
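A quick way to confirm whether an iLO2 has entered this state is to look for the power daemon's CONNECT_ERROR messages in the system log. The sketch below runs against a sample excerpt; the log path, daemon name (`powerd`), and message layout are assumptions for illustration, not real XC log output.

```shell
# Count CONNECT_ERROR messages per console port in a syslog excerpt.
# The sample lines below are illustrative, not real XC log output.
cat > /tmp/messages.sample <<'EOF'
May  4 10:01:22 n16 powerd: CONNECT_ERROR cp-n16
May  4 10:01:25 n16 powerd: power query ok cp-n17
May  4 10:02:22 n16 powerd: CONNECT_ERROR cp-n16
EOF
# Tally errors by the console-port name in the last field of each line.
grep 'CONNECT_ERROR' /tmp/messages.sample | awk '{ count[$NF]++ }
     END { for (cp in count) print cp, count[cp] }'
# → cp-n16 2
```

A console port that accumulates CONNECT_ERROR entries is a candidate for the reset procedure described in this chapter.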
A Establishing a Connection Through a Serial Port

Follow this generic procedure to establish a connection to a server using a serial port connection to a console port. If you need more information about how to establish these connections, see the hardware documentation.

1. Connect a null modem cable between the serial port on the rear panel of the server and a COM port on the host computer.
2. Launch a terminal emulation program such as Windows HyperTerminal.
3.
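On a Linux host, a terminal emulator such as minicom can stand in for HyperTerminal. The following is a sketch of a minicom profile; the device name and line settings are assumptions — match the baud rate to the console settings for your hardware model (for example, 115200 for the BIOS serial console settings shown earlier in this guide).

```
# Hypothetical minicom profile (saved, for example, as /etc/minirc.console)
# Device name and line settings are assumptions; adjust to your host.
pu port      /dev/ttyS0
pu baudrate  115200
pu bits      8
pu parity    N
pu stopbits  1
pu rtscts    No
```

Start it with `minicom console`, then press the Enter key to get a console prompt, as in step 2 above.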
B Server Blade Configuration Examples This appendix contains illustrations and descriptions of fully cabled HP XC systems based on interconnect type and server blade height. The connections are color-coded, so consider viewing the PDF file online or printing this appendix on a color printer to take advantage of the color coding. B.
Figure B-2 InfiniBand Interconnect With Full-Height Server Blades
(The figure shows non-blade servers with InfiniBand PCI cards, a c-Class blade enclosure containing full-height blades with InfiniBand mezzanine cards, the admin ProCurve 2800 series switch carrying the admin and external network VLANs, and the iLO2 and NIC cabling to the interconnect bays.)
Figure B-3 InfiniBand Interconnect With Mixed Height Server Blades
(The figure shows non-blade servers with InfiniBand PCI cards, a c-Class blade enclosure containing both full-height and half-height blades, the admin ProCurve 2800 series switch, the console ProCurve 2600 series switch, and the double-wide InfiniBand interconnect switch in interconnect bays 5 and 6.)
Glossary

A

administration branch
The half (branch) of the administration network that contains all of the general-purpose administration ports to the nodes of the HP XC system.

administration network
The private network within the HP XC system that is used for administrative operations.

availability set
An association of two individual nodes so that one node acts as the first server and the other node acts as the second server of a service. See also improved availability, availability tool.
operating system and its loader. Together, these provide a standard environment for booting an operating system and running preboot applications.

enclosure
The hardware and software infrastructure that houses HP BladeSystem servers.

extensible firmware interface
See EFI.

external network node
A node that is connected to a network external to the HP XC system.

F

fairshare
An LSF job-scheduling policy that specifies how resources should be shared by competing users.
image server
A node specifically designated to hold images that will be distributed to one or more client systems. In a standard HP XC installation, the head node acts as the image server and golden client.

improved availability
A service availability infrastructure that is built into the HP XC system software to enable an availability tool to fail over a subset of eligible services to nodes that have been designated as a second server of the service. See also availability set, availability tool.
LVS
Linux Virtual Server. Provides a centralized login capability for system users. LVS handles incoming login requests and directs them to a node with a login role.

M

Management Processor
See MP.

master host
See LSF master host.

MCS
An optional integrated system that uses chilled water technology to triple the standard cooling capacity of a single rack. This system helps take the heat out of high-density deployments of servers and blades, enabling greater densities in data centers.
onboard administrator
See OA.

P

parallel application
An application that uses a distributed programming model and can run on multiple processors. An HP XC MPI application is a parallel application. That is, all interprocessor communication within an HP XC parallel application is performed through calls to the MPI message passing library.

PXE
Preboot Execution Environment.
an HP XC system, the use of SMP technology increases the number of CPUs (amount of computational power) available per unit of space.

ssh
Secure Shell. A shell program for logging in to and executing commands on a remote computer. It can provide secure encrypted communications between two untrusted hosts over an insecure network.

standard LSF
A workload manager for any kind of batch job.
*A-XCHWP-321c* Printed in the US