User Service Guide

HP Integrity rx8640 Server, HP 9000 rp8440 Server

Third Edition

Manufacturing Part Number: AB297-9003C

January 2007

USA

© Copyright 2007
Legal Notices

© Copyright 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
Contents

1. HP Integrity rx8640 and HP 9000 rp8440 Server Overview
   Detailed Server Description
   Dimensions and Components
   Front Panel
   Installing an A6869B VGA/USB PCI Card in a Server  88
   Troubleshooting the A6869B VGA/USB PCI Card  89
   System Console Selection  91
   VGA Consoles
   Booting Red Hat Enterprise Linux  152
   Booting SuSE Linux Enterprise Server  153
   Shutting Down Linux  154

5. Server Troubleshooting
   Common Installation Problems
   Replacing the Side Cover
   Removing and Replacing the Front Bezel
   Removing and Replacing the Front Smart Fan Assembly
   Removing the Front Smart Fan Assembly
Tables

Table 1-1. Cell Board CPU Module Load Order  29
Table 1-2. DIMM Sizes Supported  31
Table 1-3. DIMM Load Order  32
Table 1-4. Removable Media Drive Path
Figures

Figure 1-1. 16-Socket Server Block Diagram  22
Figure 1-2. Server (Front View With Bezel)  23
Figure 1-3. Server (Front View Without Bezel)  24
Figure 1-4. Server (Rear View)
Figure 3-30. LAN and RS-232 Connectors on the Core I/O Board
Figure 3-31. Front Panel Display
Figure 3-32. BPS LED Location
Figure 3-33. MP Main Menu
Figure C-1. Server Space Requirements
Figure C-2. Server Cabinet Template
Figure C-3. Planning Grid
Figure C-4. Planning Grid
About this Document This document covers the HP Integrity rx8640 and HP 9000 rp8440 servers. This document does not describe system software or partition configuration in any detail. For detailed information concerning those topics, refer to the HP System Partitions Guide: Administration for nPartitions.
Book Layout This document contains the following chapters and appendices: • Chapter 1 - System Overview • Chapter 2 - System Specifications • Chapter 3 - Installing the System • Chapter 4 - Booting and Shutting Down the Operating System • Chapter 5 - Server Troubleshooting • Chapter 6 - Removing and Replacing Components • Appendix A - Replaceable Parts • Appendix B - MP Commands • Appendix C - Templates • Index
Intended Audience

This document is intended to be used by customer engineers assigned to support HP Integrity rx8640 and HP 9000 rp8440 servers.

Publishing History

The following publishing history identifies the editions and release dates of this document. Updates are made to this document on an unscheduled, as-needed basis. The updates consist of a new release of this document and pertinent online or CD-ROM documentation.

First Edition
Related Information

You can access other information on HP server hardware management, Microsoft® Windows® administration, and diagnostic support tools at the following Web sites:

http://docs.hp.com
The main Web site for HP technical documentation is http://docs.hp.com.

Server Hardware Information: http://docs.hp.com/hpux/hw/
The http://docs.hp.com/hpux/hw/ Web site is the systems hardware portion of docs.hp.com.
CAUTION A caution provides information required to avoid losing data or avoid losing system functionality. NOTE A note highlights useful information such as restrictions, recommendations, or important details about HP product features. • Commands and options are represented using this font. • Text that you type exactly as shown is represented using this font. • Text to be replaced with text that you supply is represented using this font.
HP Encourages Your Comments Hewlett-Packard welcomes your feedback on this publication. Please address your comments to edit@presskit.rsn.hp.com and note that you will not receive an immediate reply. All comments are appreciated.
1 HP Integrity rx8640 and HP 9000 rp8440 Server Overview The HP Integrity rx8640 server and the HP 9000 rp8440 server are members of the HP business-critical computing platform family of mid-range, mid-volume servers, positioned between the HP Integrity rx7640, HP 9000 rp7440 and HP Integrity Superdome servers.
IMPORTANT The HP Integrity rx8640 and the HP 9000 rp8440 are both sx2000-based systems and share common hardware and technology throughout.

The server is a 17U high, 16-socket symmetric multiprocessor (SMP) rack-mount or standalone server. Features of the server include:

• Up to 512 GB of physical memory provided by dual inline memory modules (DIMMs).
Detailed Server Description

The following section provides detailed information about the server components.
Dimensions and Components

The following section describes server dimensions and components.
Figure 1-3 Server (Front View Without Bezel) (callouts: removable media drives, PCI power supplies, power switch, hard disk drives, front OLR fans, bulk power supplies)

The server has the following dimensions:

• Depth: Defined by cable management constraints to fit into a standard 36-inch deep rack: 25.5 inches from front rack column to PCI connector surface 26.
Figure 1-4 Server (Rear View) (callouts: PCI OLR fans, PCI I/O card section, core I/O cards, rear OLR fans, AC power receptacles)

Access the PCI-X I/O card section, located toward the rear, by removing the top cover. The PCI card bulkhead connectors are located at the rear top. The PCI OLR fan modules are located in front of the PCI cards. They are housed in plastic carriers.
Front Panel

Front Panel Indicators and Controls

The front panel, located on the front of the server, includes the power switch. Refer to Figure 1-5.
Cell Board

The cell board, illustrated in Figure 1-6, contains the processors, main memory, and the CC application-specific integrated circuit (ASIC), which interfaces the processors and memory with the I/O. The CC is the heart of the cell board, providing a crossbar connection that enables communication with other cell boards in the system. It connects to the processor dependent hardware (PDH) and microcontroller hardware.
Because of space limitations on the cell board, the PDH and microcontroller circuitry reside on a riser board that plugs at a right angle into the cell board. The cell board also includes clock circuits, test circuits, and decoupling capacitors.

PDH Riser Board

The server PDH riser board is a small card that plugs into the cell board at a right angle.
Table 1-1 Cell Board CPU Module Load Order

Number of CPU Modules Installed | Socket 2 | Socket 3 | Socket 1 | Socket 0
1 | Empty slot | Empty slot | Empty slot | CPU installed
2 | CPU installed | Empty slot | Empty slot | CPU installed
3 | CPU installed | Empty slot | CPU installed | CPU installed
4 | CPU installed | CPU installed | CPU installed | CPU installed

Figure 1-7 Socket Locations on Cell Board (callouts: Socket 2, Socket 3, Socket 1, Socket 0)
The memory subsystem comprises four independent quadrants. Each quadrant has its own memory data bus connected from the cell controller to the two buffers for the memory quadrant. Each quadrant also has two memory control buses: one for each buffer.
Table 1-2 DIMM Sizes Supported

DIMM Size | Total Capacity | Memory Component Density
1 GB | 64 GB | 256 Mb
2 GB | 128 GB | 512 Mb
4 GB | 256 GB | 1024 Mb
8 GB | 512 GB | 2048 Mb

Valid Memory Configurations

The first cell must have one DIMM pair loaded in slots 0A/0B.
Table 1-3 DIMM Load Order

Number of DIMMs Installed | Action Taken | DIMM Location on Cell Board | Quad Location
2 DIMMs = 1 rank | Install first | 0A and 0B | Quad 2
4 DIMMs = 2 rank | Add second | 1A and 1B | Quad 1
6 DIMMs = 3 rank | Add third | 2A and 2B | Quad 3
8 DIMMs = 4 rank | Add fourth | 3A and 3B | Quad 0
10 DIMMs = 5 rank | Add fifth | 4A and 4B | Quad 2
12 DIMMs = 6 rank | Add sixth | 5A and 5B | Quad 1
14 DIMMs = 7 rank
On the server, each nPartition has its own dedicated portion of the server hardware, which can run a single instance of the operating system. Each nPartition can boot, reboot, and operate independently of any other nPartitions and hardware within the same server complex.
Internal Disk Devices

As shown in Figure 1-10, the top internal disk drives in a server cabinet connect to cell 0 through the core I/O for cell 0. The bottom internal disk drives connect to cell 1 through the core I/O for cell 1. The upper removable media drive connects to cell 0 through the core I/O card for cell 0, and the lower removable media drive connects to cell 1 through the core I/O card for cell 1.
Table 1-5 Hard Disk Drive Path (Continued)

Hard Drive | Path
Slot 2 drive | 1/0/0/2/0.6.0
Slot 3 drive | 1/0/0/3/0.6.
System Backplane

The system backplane board contains the following components:

• Two crossbar chips (XBC)
• Clock generation logic
• Preset generation logic
• Power regulators
• Two local bus adapter (LBA) chips that create internal PCI buses for communicating with the core I/O card
The two LBA PCI bus controllers on the system backplane create the PCI bus for the core I/O cards. You must shut down the partition for the core I/O card before removing the card. Having the SCSI connectors on the system backplane allows replacement of the core I/O card without having to remove cables in the process.
SBA link protocol into “ropes.” A rope is defined as a high-speed, point-to-point data bus. The SBA can support up to 16 of these high-speed bidirectional rope links for a total aggregate bandwidth of approximately 11.5 GB/s. There are LBA chips on the PCI-X backplane that act as a bus bridge, supporting either one or two ropes for PCI-X 133 MHz slots and the equivalent bandwidth of four ropes for PCI-X 266 MHz slots.
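As a rough back-of-the-envelope check on the figures above, the per-rope share of the aggregate bandwidth can be computed (a sketch only; the actual per-link rate depends on how many ropes an LBA aggregates per slot):

```shell
# Aggregate SBA bandwidth of ~11.5 GB/s spread across up to 16 rope links.
awk 'BEGIN {
  aggregate = 11.5   # GB/s, total across all ropes (from the text)
  ropes     = 16     # maximum rope links per SBA
  printf "per-rope share: ~%.2f GB/s\n", aggregate / ropes
}'
```

This is why a PCI-X 266 MHz slot is fed the equivalent bandwidth of four ropes rather than one.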
Table 1-6 PCI-X Slot Boot Paths Cell 0 (Continued)

Cell | PCI Slot | Ropes | Path
0 | 7 | 2/3 | 0/0/2/1/0
0 | 8 | 1 | 0/0/1/1/0

Table 1-7 PCI-X Slot Boot Paths Cell 1

Cell | PCI Slot | Ropes | Path
1 | 1 | 8/9 | 1/0/8/1/0
1 | 2 | 10/11 | 1/0/10/1/0
1 | 3 | 12/13 | 1/0/12/1/0
1 | 4 | 14/15 | 1/0/14/1/0
1 | 5 | 6/7 | 1/0/6/1/0
1 | 6 | 4/5 | 1/0/4/1/0
1 | 7 | 2/3 | 1/0/2/1/0
1 | 8 | 1 | 1/0/1/1/0

The server supports two internal SBAs.
IMPORTANT Always refer to the PCI card’s manufacturer for the specific PCI card performance specifications. PCI, PCI-X mode 1, and PCI-X mode 2 cards are supported at different clock speeds. Select the appropriate PCI-X I/O slot for best performance.

Table 1-8 PCI-X Slot Types

I/O Partition | Slot | Maximum MHz | Maximum Peak Bandwidth | Ropes | Supported Cards | PCI Mode Supported
0 / 1 | 8 | 133 | 533 MB/s | 001 | 3.
Core I/O Card

Up to two core I/O cards can be plugged into the server. Two core I/O cards enable two I/O partitions to exist in the server. The server can have up to two partitions. When a Server Expansion Unit with two core I/O cards is attached to the server, two additional partitions can be configured. A core I/O card can be replaced with standby power applied.
2 System Specifications

This chapter describes the basic system configuration, physical specifications, and requirements for the server.
Dimensions and Weights

This section provides dimensions and weights of the server and server components. Table 2-1 gives the dimensions and weights for a fully configured server.

Table 2-1 Server Dimensions and Weights

 | Standalone | Packaged
Height - Inches (centimeters) | 29.55 (75.00) | 86.50 (219.70)
Width - Inches (centimeters) | 17.50 (44.50) | 40.00 (101.60)
Depth - Inches (centimeters) | 30.00 (76.20) | 48.00 (122.00)
Weight - Pounds (kilograms) | 368.
Table 2-3 Example Weight Summary

Component | Quantity | Weight Each | Total Weight
Cell board | 4 | 26.8 lb (12.16 kg) | 107.20 lb (48.64 kg)
PCI card (varies; sample value used) | 4 | 0.34 lb (0.153 kg) | 1.36 lb (0.61 kg)
Power supply (BPS) | 6 | 12 lb (5.44 kg) | 72 lb (32.66 kg)
DVD drive | 2 | 2.2 lb (1.0 kg) | 4.4 lb (2.0 kg)
Hard disk drive | 4 | 1.6 lb (0.73 kg) | 6.40 lb (2.90 kg)
Chassis with skins and front bezel cover | 1 | 131 lb (59.42 kg) | 131 lb (59.42 kg)
Total weight | | | 322.36 lb (146.23 kg)
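The grand total in Table 2-3 is simply the sum of the per-component totals (each of which is quantity times unit weight). A quick sketch reproducing the total from the table’s own values:

```shell
# Sum the per-component total weights from Table 2-3 (lb and kg columns).
awk 'BEGIN {
  lb = 107.20 + 1.36 + 72 + 4.4 + 6.40 + 131
  kg = 48.64 + 0.61 + 32.66 + 2.0 + 2.90 + 59.42
  printf "total: %.2f lb (%.2f kg)\n", lb, kg
}'
```

The same arithmetic can be repeated with site-specific component counts when planning a rack load.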
Electrical Specifications

This section provides electrical specifications for HP Integrity rx8640 and HP 9000 rp8440 servers. These servers share common specifications. The exceptions are separate system power as well as power dissipation and cooling requirements. The associated data can be found in Table 2-7 on page 47, Table 2-8 on page 48, Table 2-10 on page 52, and Table 2-11 on page 53.
System Power Specifications

Table 2-6 lists the AC power requirements for the servers. This table provides information to help determine the amount of AC power needed for your computer room.
Table 2-7 HP Integrity rx8640 System Power Requirements

Power Required (50–60 Hz) | Watts | VA | Comments
Maximum Theoretical Power | 5862 | 5982 | See Note 1
Marked Electrical Power | ––– | 5400 | 30 A @ 180 VAC, See Note 2
User Expected Maximum Power | 3883 | 3962 | See Note 3

Note 1: Maximum Theoretical Power, or “Maximum Configuration”: input power at the AC input, expressed in watts and volt-amperes to take power factor correction into account.
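The watt and volt-ampere columns above differ only by the power factor, and the marked electrical current follows from VA and line voltage. A quick sanity check using the rx8640 values from Table 2-7:

```shell
# Power factor = W / VA; marked current = VA / V.
awk 'BEGIN {
  watts = 5862; va = 5982          # maximum theoretical power (Table 2-7)
  marked_va = 5400; volts = 180    # marked electrical power at 180 VAC
  printf "power factor: ~%.2f\n", watts / va
  printf "marked current: %.0f A\n", marked_va / volts
}'
```

The 30 A result matches the 30 A @ 180 VAC marking in the Comments column.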
Table 2-8 HP 9000 rp8440 System Power Requirements

Power Required (50–60 Hz) | Watts | VA | Comments
Maximum Theoretical Power | 5720 | 5837 | See Note 1
Marked Electrical Power | ––– | 5400 | 30 A @ 180 VAC, See Note 2
User Expected Maximum Power | 3789 | 3866 | See Note 3

Note 1: Maximum Theoretical Power, or “Maximum Configuration”: input power at the AC input, expressed in watts and volt-amperes to take power factor correction into account.
Environmental Specifications

This section provides the environmental, power dissipation, noise emission, and air flow specifications for the server.

Temperature and Humidity

The cabinet is actively cooled using forced convection in a Class C1-modified environment. The recommended humidity level for Class C1 is 40 to 55% relative humidity (RH).
Table 2-9 Example ASHRAE Thermal Report (at 208 Volts)

Minimum configuration: typical heat release 971 W; airflow, nominal, 960 cfm (1631 m3/hr); weight 178 lb (81 kg); dimensions h = 29.55 in (750.57 mm), w = 17.50 in (444.50 mm), d = 30.00 in (762.00 mm). The dimensions are the same for each configuration row of the table.
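Airflow in the thermal report is listed in both cubic feet per minute and cubic metres per hour; 1 cfm is about 1.699 m3/hr, which is consistent with the 960/1631 pair above:

```shell
# Convert cubic feet per minute to cubic metres per hour (1 cfm ~= 1.699 m3/hr).
awk 'BEGIN {
  cfm = 960                 # nominal airflow from Table 2-9
  printf "%.0f cfm = %.0f m3/hr\n", cfm, cfm * 1.699
}'
```

The same conversion can be applied to any other airflow figure when sizing computer-room cooling.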
Environmental Temperature Sensor

To ensure that the system is operating within the published limits, the ambient operating temperature is measured using a sensor placed on the server backplane. Data from the sensor is used to control the fan speed and to initiate system overtemp shutdown.

Non-Operating Environment

The system is designed to withstand ambient temperatures between -40°C and 70°C under non-operating conditions.
Typical HP Integrity rx8640 Power Dissipation and Cooling

Table 2-10 provides power dissipation and cooling calculations for the rx8640 configurations described in the table.
Typical HP 9000 rp8440 Power Dissipation and Cooling

Table 2-11 provides power dissipation and cooling calculations for the rp8440 configurations described in the table.
Figure 2-1 illustrates the location of the inlet and outlet air ducts on a single cabinet. Air is drawn into the front of the server and forced out the rear.
3 Installing the System

Inspect shipping containers when the equipment arrives at the site. Check equipment after the packing has been removed. This chapter discusses how to receive, inspect, and install the server.
Receiving and Inspecting the Server Cabinet

This section contains information about receiving, unpacking, and inspecting the server cabinet.

NOTE The server will ship in one of three different configurations.
Step 2. Lift the cardboard top cap from the shipping box. Refer to Figure 3-1.

Figure 3-1 Removing the Polystraps and Cardboard

Step 3. Remove the corrugated wrap from the pallet.

Step 4. Remove the packing materials.

CAUTION Cut the plastic wrapping material off rather than pulling it off. Pulling the plastic covering off represents an electrostatic discharge (ESD) hazard to the hardware.

Step 5.
NOTE Figure 3-2 shows one ramp attached to the pallet on either side of the cabinet, with each ramp secured to the pallet using two bolts. In an alternate configuration, the ramps are secured together on one side of the cabinet with one bolt.
Step 6. Remove the six bolts from the base that attach the rack to the pallet.

Figure 3-3 Preparing to Roll Off the Pallet

WARNING Be sure that the leveling feet on the rack are raised before you roll the rack down the ramp, and any time you roll the rack on the casters. Use caution when rolling the cabinet off the ramp. A single server in the cabinet weighs approximately 508 lb.
Securing the Cabinet

When in position, secure and stabilize the cabinet using the leveling feet at the corners of the base (Figure 3-4). Install the anti-tip mechanisms on the bottom front and rear of the rack.
Standalone and To-Be-Racked Systems

Servers shipped in a standalone or to-be-racked configuration must have the core I/O handles and the PCI towel bars attached at system installation. Obtain and install the core I/O handles and PCI towel bars from the accessory kit A6093-04046. The towel bars and handles are the same part. Refer to service note A6093A-11. This is the same accessory kit used for the HP 9000 rp8400 server.
Lifting the Server Cabinet Manually

Use this procedure only if no HP-approved lift is available.

CAUTION This procedure must be performed only by four qualified HP service personnel utilizing proper lifting techniques and procedures.

Step 1. Follow the instructions on the outside of the service packaging to remove the banding and cardboard top from the server pallet.

Step 2. Reduce the weight by removing all bulk power supplies and cell boards.
Using the RonI Model 17000 SP 400 Lifting Device

Use the lifter designed by the RonI company to rack-mount the server. The lifter can raise 400 lb/182 kg to a height of 5 feet. The lifter can be broken down into several components. When completely broken down, no single component weighs more than 25 lb/12 kg. The ability to break the lifter down makes it easy to transport from the office to the car and then to the customer site.
Step 3. Insert the lifter forks between the cushions (Figure 3-5).

Figure 3-5 Positioning the Lifter to the Pallet (callout: position the lifter forks at these insertion points)

Step 4. Carefully roll the lift forward until it is fully positioned against the side of the server.
Step 5. Slowly raise the server off the pallet until it clears the pallet cushions (Figure 3-6).

Figure 3-6 Raising the Server Off the Pallet Cushions

Step 6. Carefully roll the lifter and server away from the pallet. Do not raise the server any higher than necessary when moving it over to the rack.

Step 7.
Installing the Wheel Kit

Compare the packing list (Table 3-1) with the contents of the wheel kit before beginning the installation. For an updated list of part numbers, go to the HP Part Surfer Web site at http://www.partsurfer.hp.com.
Use the following procedure to install the wheel kit.

Step 1. Cut and remove the polystrap bands securing the server to the pallet.

Step 2. Lift the carton top from the cardboard tray resting on the pallet.

Step 3. Remove the bezel kit carton and top cushions (Figure 3-7) from the pallet.

Figure 3-7 Server on Shipping Pallet (callouts: top cushions, bezel kit, cardboard tray, shipping pallet)

Step 4. Unfold the bottom cardboard tray.
Step 5. Remove the front cushion only (Figure 3-8). Do not remove any other cushions until further instructed.

Figure 3-8 Removing Cushion from Front Edge of Server (callouts: rear cushion, side cushion, front cushion)

Step 6. Open the wheel kit box and locate the two front casters. The front casters are shorter in length than the two rear casters. Each front caster is designed to fit only on one corner of the server (right front caster and left front caster).
Step 7. Remove two of the eight screws from the plastic pouch. Attach one wheel caster to the front of the server (Figure 3-9).

Figure 3-9 Attaching a Caster Wheel to the Server (callout: front casters)

Step 8. Attach the remaining front caster to the server using two more screws supplied in the plastic pouch.

Step 9. Remove the rear cushion at the rear of the server. Do not remove the remaining cushions.

Step 10.
Step 12. The ramp has two predrilled holes (Figure 3-10). Attach the ramp to the edge of the pallet using the two screws taped to the ramp.
Step 13. Remove the two side cushions from the server (Figure 3-11), and unfold the cardboard tray so that it lies flat on the pallet.

Figure 3-11 Removing Side Cushion from Server (callouts: ramp, side cushion)

Step 14. Carefully roll the server off the pallet and down the ramp.

Step 15. Obtain the caster covers from the wheel kit. Note that the caster covers are designed to fit on either side of the server.
Step 16. Insert the slot on the caster cover into the front caster (Figure 3-12). Secure the caster cover to the server by tightening the captive screw on the cover at the rear of the server. Repeat for the second caster cover.
Step 17. Snap the bezel cover into place on the front of the server. Figure 3-13 shows the server cabinet with the wheel kit installed.
Installing the Top and Side Covers

This section describes the procedures for installing the top and side server covers.

NOTE You may need to remove existing top and side covers installed on the server before installing the covers shipped with the wheel kit. If cover removal is not needed, go directly to the sections for installing the top and side covers.

Figure 3-14
Step 5. Place the cover in a safe location.

Figure 3-15 Top Cover Detail (callout: retaining screws)

Installing the Top Cover

The following section describes the procedure for installing the top cover.

Step 1. Orient the cover according to its position on the chassis.

Step 2. Slide the cover into position using a slow, firm pressure to properly seat the cover.

Step 3. Tighten the blue retaining screws securing the cover to the chassis.
Step 2. Loosen the blue retaining screw securing the cover to the chassis (Figure 3-16).

Figure 3-16 Side Cover Detail (callout: retaining screw)

Step 3. Slide the cover from the chassis toward the rear of the system.

Step 4. Place the cover in a safe location.

Installing the Side Cover

The following section describes the procedure for installing the side cover.

Step 1. Orient the cover according to its position on the chassis.

Step 2.
Installing the Power Distribution Unit

The server may ship with a power distribution unit (PDU). Two 60 A PDUs are available for the server. Each PDU is 3U high and is mounted horizontally between the rear columns of the server cabinet. The 60 A PDUs are delivered with an IEC-309 60 A plug. The 60 A NEMA PDU has four 20 A circuit breakers and is constructed for North American use.
Installing Additional Cards and Storage

This section provides information on additional products ordered after installation and any dependencies for these add-on products.

The following options can be installed in the server:

• Hard disk drive storage
• Removable media device storage
• PCI/PCI-X I/O cards

Installing an Additional Hard Disk Drive

The disk drives are located in the front of the chassis (Figure 3-17).
Step 3. Press the front locking latch to secure the disk drive in the chassis.

Step 4. If the server OS is running, spin up the disk by entering one of the following commands:

   # diskinfo -v /dev/rdsk/cxtxdx
   # ioscan -f

Removable Media Drive Installation

The DVD drive or DDS-4 tape drives are located in the front of the chassis.
Step 6. Latch the front locking tab to secure the drive in the chassis.
HP Integrity rx8640 Supported PCI/PCI-X I/O Cards

The rx8640 server supports a number of PCI and PCI-X I/O cards. Table 3-2 lists the cards currently supported on the server. For an updated list of part numbers, go to the HP Part Surfer Web site at http://www.partsurfer.hp.com.
Table 3-2 HP Integrity rx8640 Server PCI-X I/O Cards (Continued)

Part Number | Card Description | HP-UX 11i v2 | Windows® | Linux®
AB286C | PCI-X 2-Port 4X InfiniBand HCA (HPC)-RoHS | | |
AB287A | 10 GbE - Fiber (PCI-X 133) | b | b | b
AB290A | U320 SCSI/GigE Combo Card | Bb | Bb | Bb
AB345A | PCI-X 2-port 4X InfiniBand HCA | | |
AB345C | PCI-X 2-port 4X InfiniBand HCA-RoHS | | |
AB378A | QLogic 1-port 4Gb FC card (PCI-X 266) | B | |
AB379A | QLogic 2-port 4Gb FC card (PCI-
HP 9000 rp8440 Supported PCI/PCI-X I/O Cards

Table 3-3 lists the PCI/PCI-X cards supported in the rp8440 server. Several cards lose boot functionality when upgrading the server. The customer must use another I/O card to retain boot functionality if the customer’s card is not supported in the server.
Table 3-3 HP 9000 rp8440 Server PCI-X I/O Cards (Continued)

Part Number | Card Description | HP-UX 11i v1
AB287A | 10 GbE - Fiber (PCI-X 133) |
AB290A | U320 SCSI/GigE Combo Card | Bb
AB378A | QLogic 1-port 4Gb FC card (PCI-X 266) | B
AB378B | QLogic 1-port 4Gb FC card (PCI-X 266) | B
AB379A | QLogic 2-port 4Gb FC card (PCI-X 266) | B
AB379B | QLogic 2-port 4Gb FC card (PCI-X 266) | B
AB465A | 2-port 1000b-T 2Gb FC Combo | Bb
AB545A | 4-Port 100
IMPORTANT The above list of part numbers is current and correct as of December 2006. Part numbers change often. Check the following Web site to ensure you have the latest part numbers associated with this server: http://partsurfer.hp.com/cgi-bin/spi/main

Installing an Additional PCI-X I/O Card

IMPORTANT The installation process varies depending on which method is selected for installing the PCI card.
Installing the System Installing Additional Cards and Storage Step 1. Remove the top cover. Step 2. Remove the PCI bulkhead filler panel. Step 3. Flip the PCI manual release latch (MRL) for the card slot to the open position. Refer to Figure 3-19. Step 4. Install the new PCI card in the slot. NOTE Apply a slow, firm pressure to properly seat the card into the backplane. Step 5. Flip the PCI MRL for the card slot to the closed position.
Step 8. Check for errors in the hotplugd daemon log file (default: /var/adm/hotplugd.log). The critical resource analysis (CRA) performed during an attention button-initiated add action is restrictive; the action will not complete (it fails) in order to protect critical resources from being impacted. For finer control over CRA actions, use the pdweb or olrad commands.
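The log check in Step 8 can be scripted; the following is a minimal sketch that assumes the default HP-UX log location, and the `error|fail` pattern is illustrative rather than an exhaustive list of hotplugd message strings:

```shell
# Scan the hot-plug daemon log for recent problem messages.
# LOG is the HP-UX default path; adjust it if hotplugd logs elsewhere.
LOG=${LOG:-/var/adm/hotplugd.log}
if [ -r "$LOG" ]; then
    # Show the last 20 lines that mention an error or failure
    grep -i -E 'error|fail' "$LOG" | tail -20
else
    echo "hotplugd log not readable at $LOG"
fi
```

If the grep produces no output, the add action completed without logged errors.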
Installing the System Installing Additional Cards and Storage Figure 3-20 PCI/PCI-X Card Location PCI/PCI-X Cards IMPORTANT Some PCI I/O cards, such as the A6869B VGA/USB PCI card, cannot be added or replaced online (while Windows remains running). For these cards, you must shut down Windows on the nPartition before performing the card replacement or addition. See the section on Shutting Down nPartitions and Powering off Hardware Components in the appropriate service guide. 1.
No Console Display Symptom: black screen; no text displayed. Cause: hardware problem. Checks:
* Must have supported power enabled.
* Must have a functional VGA/USB PCI card.
* Must have a functional PCI slot. Select another slot on the same partition/backplane.
* Must have the VGA/USB PCI card firmly seated in the PCI backplane slot.
* Must have a supported monitor.
* Must have verified cable connections to the VGA/USB PCI card.
Display unreadable.
System Console Selection Each operating system requires that the correct console type be selected from the firmware selection menu. The following section describes how to determine the correct console device. If an operating system is being installed or the system configuration is being changed, the system console setting must be checked to ensure it matches the hardware and OS.
Installing the System System Console Selection d. Choose the correct device for your system and deselect others. See “Interface Differences Between Itanium-based Systems” for details about choosing the appropriate device. e. Select “Save Settings to NVRAM” and then “Exit” to complete the change. f. A system reset is required for the changes to take effect. VGA Consoles Any device that has a PCI section in its path and does not have a UART section will be a VGA device.
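For example, an EFI device listing might contain paths like the following (these paths are illustrative, not taken from a specific system). The first contains a UART section and is therefore a serial console device; the second is PCI-only with no UART section and is treated as a VGA device:

```
Acpi(HWP0002,0)/Pci(0|1)/Uart(9600,8,N,1)    serial console device (UART section present)
Acpi(HWP0002,300)/Pci(2|0)                   VGA device (PCI section, no UART section)
```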
Installing the System Cabling and Powering On the Server Cabling and Powering On the Server After the system has been unpacked and moved into position, it must be connected to a source of AC power. The AC power must be checked for the proper voltage before the system is powered up. This chapter describes these activities. Checking the Voltage This section provides voltage check information for use on the customer site.
Table 3-4 provides single phase voltage measurement examples specific to the geographic region where these measurements are taken.
Table 3-4 Single Phase Voltage Examples
Measurement | Japan | North America | Europe (a)
L1 to L2 | 210 V | 208 V or 240 V | 230 V
L1 to ground | 105 V | 120 V | 230 V
L2 to ground | 105 V | 120 V | 0 V
a. In some European countries, there might not be a polarization.
Installing the System Cabling and Powering On the Server 2. Insert the probe into the ground pin for A0. 3. Insert the other probe into the ground pin for A1. 4. Verify that the measurement is between 0-5 V AC. If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet. Step 2. Measure the voltage between B0 and B1 as follows: 1. Take the AC voltage down to the lowest scale on the volt meter. 2. Insert the probe into the ground pin for B0. 3.
Installing the System Cabling and Powering On the Server WARNING SHOCK HAZARD Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace access cover when finished testing primary power. Step 1. Measure the voltage between A0 and A1 as follows: 1. Take the AC voltage down to the lowest scale on the volt meter. 2. Insert the probe into the ground pin for A0. 3. Insert the other probe into the ground pin for A1. 4. Verify that the measurement is between 0-5 V AC.
Installing the System Cabling and Powering On the Server Voltage Check (Additional Procedure) The voltage check ensures that all phases (and neutral, for international systems) are connected correctly to the cabinet and that the AC input voltage is within limits. Perform this procedure if the previous voltage check procedure did not yield the expected results.
Installing the System Cabling and Powering On the Server WARNING Do not set site AC circuit breakers serving the processor cabinets to ON before verifying that the cabinet has been wired into the site AC power supply correctly. Failure to do so can result in injury to personnel or damage to equipment when AC power is applied to the cabinet. Step 8. Set the site AC circuit breaker to ON. Step 9. Set the server power to ON. Step 10. Check that the indicator light on each power supply is lit.
Installing the System Cabling and Powering On the Server • B1 input provides power to BPS 3, BPS 4, and BPS 5 For information on how input power cords supply power to each BPS, refer to Figure 3-27. Figure 3-27 Distribution of Input Power for Each Bulk Power Supply WARNING Voltage is present at various locations within the server whenever a power source is connected. This voltage is present even when the main power switch is in the off position.
A minimum of two BPSs is required to bring up a single cell board installed in the server. There is no N+1 capability in this case. Refer to Table 3-5 for configurations of multiple cell boards using N+1.
Installing the Line Cord Anchor (Rack-Mounted Servers) The line cord anchor is attached to the rear of the server when rack mounted. It provides a method to secure the line cords to the server, preventing accidental removal of the cords from the server. Four-Cell Server Installation (rp8400, rp8420, rp8440, rx8620, rx8640) There are pre-drilled holes and pre-installed captive nuts in the server chassis. To install the line cord anchor: 1.
4. Use the supplied Velcro straps to attach the cords to the anchor.
MP Core I/O Connections Each HP server has at least one core I/O card installed. Each core I/O card has a management processor (MP). Installing two core I/O cards allows two partitions to be configured, or provides core I/O redundancy in a single-partition configuration. Each core I/O card is oriented vertically and accessed from the back of the server.
Installing the System Cabling and Powering On the Server If the CE Tool is a laptop using Reflection 1, check or change these communications settings using the following procedure: Step 1. From the Reflection 1 Main screen, pull down the Connection menu and select Connection Setup. Step 2. Select Serial Port. Step 3. Select Com1. Step 4. Check the settings and change, if required. Go to More Settings to set Xon/Xoff. Click OK to close the More Settings window. Step 5.
Installing the System Cabling and Powering On the Server Step 1. Connect one end of a null modem cable (9-pin to 9-pin) (Part Number 5182-4794) to the RS-232 Local port on the core I/O card (the DB9 connector located at the bottom of the core I/O card). Refer to Figure 3-30.
Installing the System Cabling and Powering On the Server Before powering up the server cabinet for the first time: Step 1. Verify that the AC voltage at the input source is within specifications for each server cabinet being installed. Step 2. If not already done, power on the serial display device. The preferred tool is the CE Tool running Reflection 1. To power on the MP, set up a communications link, and log in to the MP: Step 1. Apply power to the server cabinet.
Figure 3-32 BPS LED Location Step 3. Log in to the MP: 1. Enter Admin at the login prompt. (This term is case-sensitive.) It takes a few moments for the MP> prompt to appear. If the MP> prompt does not appear, verify that the laptop serial device settings are correct: 8 bits, no parity, 9600 baud, and None for both Receive and Transmit. Then try again. 2. Enter Admin at the password prompt. (This term is case-sensitive.)
Installing the System Cabling and Powering On the Server The MP Main Menu is displayed: Figure 3-33 MP Main Menu Configuring LAN Information for the MP This section describes how to set and verify the server management processor (MP) LAN port information. LAN information includes the MP network name, the MP IP address, the subnet mask, and gateway address. This information is provided by the customer. To set the MP LAN IP address: Step 1. At the MP Main Menu prompt (MP>), enter cm.
Installing the System Cabling and Powering On the Server Enter lc and press the Return key. The lc command is displayed as shown in Figure 3-34. Figure 3-34 The lc Command Screen MP:CM> lc This command modifies the LAN parameters. Current configuration of MP customer LAN interface MAC address : 00:12:79:b4:03:1c IP address : 15.11.134.222 0x0f0b86de Hostname : metro-s Subnet mask : 255.255.248.0 0xfffff800 Gateway : 15.11.128.
Step 10. A screen similar to Figure 3-35 is displayed, allowing verification of the settings. Figure 3-35 The ls Command Screen To return to the MP Main Menu, enter ma. To exit the MP, enter x at the MP Main Menu. Accessing the Management Processor via a Web Browser Web browser access is an embedded feature of the management processor (MP). The Web browser enables access to the server via the LAN port on the core I/O card.
Step 4. Type sa at the MP:CM> prompt to display and set MP remote access. Figure 3-36 Example sa Command Step 5. Launch a Web browser on the same subnet using the IP address for the MP LAN port. Step 6. Click anywhere on the Zoom In/Out title bar (Figure 3-37) to generate a full screen MP window.
Step 7. Select the emulation type you want to use. Step 8. Log in to the MP when the login window appears. Access to the MP via a Web browser is now possible. Verifying the Presence of the Cell Boards To perform this activity, either connect to the management processor (MP) over the customer console or connect the CE Tool (laptop) to the RS-232 Local port on the MP.
Configuring AC Line Status The MP utilities can detect whether power is applied to each of the AC input cords for the server by sampling the status of the bulk power supplies. During installation, use the following procedure to check the configuration for the AC line status and configure it to match the customer’s environment. Step 1. At the MP prompt, enter cm.
1. Open a separate Reflection window and connect to the MP. 2. From the MP Main Menu, select the VFP command with the s option. Step 2. A window opens showing activity for a single partition. To display activity for each partition as it powers on: Step 1. Open a separate Reflection window and connect to the MP. Step 2. Select the VFP command and select the desired partition to view.
Installing the System Cabling and Powering On the Server Selecting a Boot Partition Using the MP At this point in the installation process, the hardware is set up, the MP is connected to the LAN, the AC and DC power have been turned on, and the self-test is completed. Now the configuration can be verified. After the DC power on and the self-test is complete, use the MP to select a boot partition. Step 1. From the MP Main Menu, enter cm. Step 2. From the MP Command Menu, enter bo. Step 3.
Installing the System Cabling and Powering On the Server NOTE If the partition fails to boot or if the server was shipped without Instant Ignition, booting from a DVD that contains the operating system and other necessary software might be required. Adding Processors for HP Integrity rx8640 with Instant Capacity The Instant Capacity program provides access to additional CPU resources beyond the amount that was purchased for the server.
Installing the System Cabling and Powering On the Server Table 3-6 Factory-Integrated Installation Checklist Procedure In-process Initials Comments Completed Initials Comments Obtain LAN information Verify site preparation Site grounding verified Power requirements verified Check inventory Inspect shipping containers for damage Unpack cabinet Allow proper clearance Cut polystrap bands Remove cardboard top cap Remove corrugated wrap from the pallet Remove four bolts holding down the ramps and remove t
Installing the System Cabling and Powering On the Server Table 3-6 Factory-Integrated Installation Checklist (Continued) (Continued) Procedure In-process Completed Adjust leveling feet Install anti tip plates Inspect cables for proper installation Set up CE tool and connect to Remote RS-232 port on MP Apply power to cabinet (Housekeeping) Check power to BPSs Log in to MP Set LAN IP address on MP Connect customer console Set up network on customer console Verify LAN connection Verify presence of cells P
Installing the System Cabling and Powering On the Server Table 3-6 Factory-Integrated Installation Checklist (Continued) (Continued) Procedure In-process Completed Set up network services (if required) Enable Instant Capacity (if available) Final inspection of circuit boards Final inspection of cabling Area cleaned and debris and packing materials disposed of Tools accounted for Parts and other items disposed of Make entry in Gold Book (recommended) Customer acceptance and signoff (if required) 119 C
4 Booting and Shutting Down the Operating System This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware partition) and procedures for shutting down the OS.
Operating Systems Supported on Cell-based HP Servers HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.
Booting and Shutting Down the Operating System Operating Systems Supported on Cell-based HP Servers NOTE SuSE Linux Enterprise Server 10 is supported on HP rx8640 servers, and will be supported on other cell-based HP Integrity servers with the Intel® Itanium® 2 dual-core processor (rx7640 and Superdome) soon after the release of those servers. Refer to “Booting and Shutting Down Linux” on page 150 for details.
System Boot Configuration Options This section briefly discusses the system boot options you can configure on cell-based servers. You can configure boot options that are specific to each nPartition in the server complex. HP 9000 Boot Configuration Options On cell-based HP 9000 servers the configurable system boot options include boot device paths (PRI, HAA, and ALT) and the autoboot setting for the nPartition.
Booting and Shutting Down the Operating System System Boot Configuration Options To save and restore boot options, use the EFI Shell variable command. The variable -save file command saves the contents of the boot options list to the specified file on an EFI disk partition. The variable -restore file command restores the boot options list from the specified file that was previously saved. Details also are available by entering help variable at the EFI Shell.
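As a sketch at the EFI Shell (the file name shown is hypothetical; any file on an EFI disk partition can be used):

```
Shell> variable -save bootopts.sav       save the boot options list to a file
Shell> variable -restore bootopts.sav    restore a previously saved boot options list
Shell> help variable                     full usage details
```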
Booting and Shutting Down the Operating System System Boot Configuration Options To set autoboot from HP-UX, use the setboot command. • ACPI Configuration Value—HP Integrity Server OS Boot On cell-based HP Integrity servers you must set the proper ACPI configuration for the OS that will be booted on the nPartition. To check the ACPI configuration value, issue the acpiconfig command with no arguments at the EFI Shell.
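For example, checking and changing the ACPI configuration looks like the following at the EFI Shell (the reported setting line is illustrative):

```
Shell> acpiconfig
acpiconfig settings: windows
Shell> acpiconfig default      required before booting HP-UX on the nPartition
Shell> reset                   reboot so the new ACPI configuration takes effect
```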
Booting and Shutting Down the Operating System System Boot Configuration Options To change the nPartition behavior when an OS is shut down and halted, use either the acpiconfig enable softpowerdown EFI Shell command or the acpiconfig disable softpowerdown command, and then reset the nPartition to make the ACPI configuration change take effect.
Booting and Shutting Down the Operating System System Boot Configuration Options To display or set the boot mode for an nPartition on a cell-based HP Integrity server, use any of the following tools as appropriate. Refer to Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details, examples, and restrictions. — parconfig EFI shell command The parconfig command is a built-in EFI shell command. Refer to the help parconfig command for details.
Booting and Shutting Down HP-UX This section presents procedures for booting and shutting down HP-UX on cell-based HP servers and a procedure for adding HP-UX to the boot options list on HP Integrity servers. • To determine whether the cell local memory (CLM) configuration is appropriate for HP-UX, refer to “HP-UX Support for Cell Local Memory” on page 128.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX To add an HP-UX boot option when logged in to HP-UX, use the setboot command. For details, refer to the setboot (1M) manpage. Step 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu).
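The setboot alternative mentioned above can be sketched as follows from a running HP-UX instance; the hardware path shown is hypothetical, and the exact options for your release are documented in the setboot (1M) manpage:

```
# setboot -p 0/0/2/0/0.6.0    set the primary (PRI) boot path
# setboot -b on               enable autoboot
# setboot                     display the current boot settings
```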
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Refer to “Shutting Down HP-UX” on page 138 for details on shutting down the HP-UX OS. CAUTION ACPI Configuration for HP-UX Must Be default On cell-based HP Integrity servers, to boot the HP-UX OS, an nPartition ACPI configuration value must be set to default. At the EFI Shell interface, enter the acpiconfig command with no arguments to list the current ACPI configuration.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX 0/0/2/0/0.0 (hex) Main Menu: Enter command or menu > Step 3. Boot the device by using the BOOT command from the BCH interface. You can issue the BOOT command in any of the following ways: • BOOT Issuing the BOOT command with no arguments boots the device at the primary (PRI) boot path.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX HP-UX Booting (EFI Boot Manager) From the EFI Boot Manager menu, select an item from the boot options list to boot HP-UX using that boot option. The EFI Boot Manager is available only on HP Integrity servers. Refer to “ACPI Configuration for HP-UX Must Be default” on page 130 for required configuration details. Step 1. Access the EFI Boot Manager menu for the nPartition on which you want to boot HP-UX.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX a. At the EFI Shell interface enter the acpiconfig default command. b. Enter the reset command for the nPartition to reboot with the proper (default) configuration for HP-UX. Step 3. At the EFI Shell environment, issue the map command to list all currently mapped bootable devices. The bootable file systems of interest typically are listed as fs0:, fs1:, and so on. Step 4.
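A condensed console sketch of this procedure (the fs0: mapping varies by configuration; use the file system that corresponds to your HP-UX boot device):

```
Shell> map       list all currently mapped bootable devices
Shell> fs0:      select the EFI System Partition for the boot device
fs0:\> hpux      invoke the HP-UX boot loader (HPUX.EFI)
```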
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Single-User Mode HP-UX Booting This section describes how to boot HP-UX in single-user mode on cell-based HP 9000 servers and cell-based HP Integrity servers. • On HP 9000 servers, to boot HP-UX in single-user mode, refer to “Single-User Mode HP-UX Booting (BCH Menu)” on page 134. • On HP Integrity servers, to boot HP-UX in single-user mode, refer to “Single-User Mode HP-UX Booting (EFI Shell)” on page 135.
Refer to the hpux (1M) manpage for a detailed list of hpux loader options. Example 4-1 Single-User HP-UX Boot ISL Revision A.00.42 JUN 19, 1999 ISL> hpux -is /stand/vmunix Boot : disk(0/0/2/0/0.13.0.0.0.0.0;0)/stand/vmunix 8241152 + 1736704 + 1402336 start 0x21a0e8 .... INIT: Overriding default level with level ’s’ INIT: SINGLE USER MODE INIT: Running /sbin/sh # Step 4.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX After you press any key, the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>) is provided. For help using the HPUX.EFI loader, enter the help command. To return to the EFI Shell, enter exit. fs0:\> hpux (c) Copyright 1990-2002, Hewlett Packard Company. All rights reserved HP-UX Boot Loader for IA64 Revision 1.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX Step 1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in LVM-maintenance mode. Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu> prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Shutting Down HP-UX When HP-UX is running on an nPartition, you can shut down HP-UX using the shutdown command.
Booting and Shutting Down the Operating System Booting and Shutting Down HP-UX • Shut down HP-UX and reboot the nPartition. Issue the shutdown -r command to shut down and reboot the nPartition. On cell-based HP Integrity servers, the shutdown -r command is equivalent to the shutdown -R command. • Perform a reboot for reconfiguration of the nPartition. Issue the HP-UX shutdown -R command to perform a reboot for reconfiguration.
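Typical invocations from the HP-UX root prompt look like the following; the 0 is the grace period in seconds, so choose a value appropriate for logged-in users:

```
# shutdown -h -y 0    shut down and halt HP-UX
# shutdown -r -y 0    shut down and reboot the nPartition
# shutdown -R -y 0    perform a reboot for reconfiguration
```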
Booting and Shutting Down HP OpenVMS I64 This section presents procedures for booting and shutting down HP OpenVMS I64 on cell-based HP Integrity servers and procedures for adding HP OpenVMS to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for HP OpenVMS, refer to “HP OpenVMS I64 Support for Cell Local Memory” on page 140.
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 To configure booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM). For more information on this utility and other restrictions, refer to the HP OpenVMS for Integrity Servers Upgrade and Installation Manual. Adding an HP OpenVMS Boot Option This procedure adds an HP OpenVMS item to the boot options list from the EFI Shell.
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Booting HP OpenVMS To boot HP OpenVMS I64 on a cell-based HP Integrity server use either of the following procedures.
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 Booting HP OpenVMS (EFI Shell) From the EFI Shell environment, to boot HP OpenVMS on a device first access the EFI System Partition for the root device (for example fs0:), and enter \efi\vms\vms_loader to initiate the OpenVMS loader. Step 1. Access the EFI Shell environment for the system on which you want to boot HP OpenVMS. Log in to the management processor, and enter CO to select the system console.
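A console sketch of this procedure (fs0: is an example; substitute the file system mapped to your OpenVMS root device):

```
Shell> fs0:                      select the EFI System Partition for the root device
fs0:\> \efi\vms\vms_loader       initiate the OpenVMS loader
```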
Booting and Shutting Down the Operating System Booting and Shutting Down HP OpenVMS I64 Log in to the management processor (MP) for the server and use the Console menu to access the system console. Accessing the console through the MP enables you to maintain console access to the system after HP OpenVMS has shut down. Step 2. At the OpenVMS command line (DCL) issue the @SYS$SYSTEM:SHUTDOWN command and specify the shutdown options in response to the prompts given.
Booting and Shutting Down Microsoft Windows This section presents procedures for booting and shutting down the Microsoft Windows OS on cell-based HP Integrity servers and a procedure for adding Windows to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for Windows, refer to “Microsoft Windows Support for Cell Local Memory” on page 145.
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Adding a Microsoft Windows Boot Option This procedure adds the Microsoft Windows item to the boot options list. Step 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu).
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Step 6. Press Q to quit the NVRBOOT utility, and exit the console and management processor interfaces if you are finished using them. To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows Step 3. Press Enter to initiate booting using the chosen boot option. Step 4. When Windows begins loading, wait for the Special Administration Console (SAC) to become available. The SAC interface provides a text-based administration tool that is available from the nPartition console. For details, refer to the SAC online help (type ? at the SAC> prompt). Loading.
Booting and Shutting Down the Operating System Booting and Shutting Down Microsoft Windows /a Abort a system shutdown. /t xxx Set the timeout period before shutdown to xxx seconds. The timeout period can range from 0–600, with a default of 30. Refer to the help shutdown Windows command for details.
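For example, from the Windows command line (the 60-second timeout is illustrative; run help shutdown for the full option list):

```
shutdown /s /t 60    shut down after a 60-second timeout period
shutdown /a          abort a system shutdown already in progress
```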
Booting and Shutting Down Linux This section presents procedures for booting and shutting down the Linux OS on cell-based HP Integrity servers and a procedure for adding Linux to the boot options list. • To determine whether the cell local memory (CLM) configuration is appropriate for Red Hat Enterprise Linux or SuSE Linux Enterprise Server, refer to “Linux Support for Cell Local Memory” on page 150.
See “Boot Options List” on page 123 for additional information about saving, restoring, and creating boot options. NOTE On HP Integrity servers, the OS installer automatically adds an entry to the boot options list. Adding a Linux Boot Option This procedure adds a Linux item to the boot options list. Step 1. Access the EFI Shell environment. Log in to the management processor, and enter CO to access the system console.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Booting Red Hat Enterprise Linux You can boot the Red Hat Enterprise Linux OS on HP Integrity servers using either of the methods described in this section.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux Refer to “ACPI Configuration for Red Hat Enterprise Linux Must Be default” on page 152 for required configuration details. Step 1. Access the EFI Shell. From the system console, select the EFI Shell entry from the EFI Boot Manager menu to access the shell. Step 2. Access the EFI System Partition for the Red Hat Enterprise Linux boot device.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux To load the SuSE Linux Enterprise Server OS at the EFI Boot Manager menu, choose its entry from the list of boot options. Choosing a Linux entry from the boot options list boots the OS using ELILO.EFI loader and the elilo.conf file. • Initiate the ELILO.EFI Linux loader from the EFI Shell. Refer to the procedure “Booting SuSE Linux Enterprise Server (EFI Shell)” on page 154 for details.
Booting and Shutting Down the Operating System Booting and Shutting Down Linux The Red Hat Enterprise Linux and SuSE Linux Enterprise Server shutdown command includes the following options: -h Halt after shutdown. On cell-based HP Integrity servers, this either powers down server hardware or puts the nPartition into a shutdown for reconfiguration state. Use the PE command at the management processor Command Menu to manually power on or power off server hardware, as needed. -r Reboot after shutdown.
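Typical invocations of the options above from the Linux root prompt:

```
# shutdown -h now    halt after shutdown (power down or shutdown-for-reconfiguration state)
# shutdown -r now    reboot after shutdown
```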
5 Server Troubleshooting This chapter contains tips and procedures for diagnosing and correcting problems with the server and its customer replaceable units (CRUs). Information about the various status LEDs on the server is also included.
Common Installation Problems The following sections contain general procedures to help you locate installation problems. CAUTION Do not operate the server with the top cover removed for an extended period of time. Overheating can damage chips, boards, and mass storage devices. However, you can safely remove the top cover while the server is running to remove and replace PCI hot-plug cards.
The Server Does Not Power On To check for power-related problems, use the checklist below: a. Check the LED for each bulk power supply (BPS). The LED is located in the lower left-hand corner of the power supply face. Table 5-2 shows the states of the LEDs. b. Check that the power supply and a minimum of two power cords are plugged into the chassis. NOTE Two power cords must be connected to A0 and A1 or B0 and B1. c. Remove and replace any suspect BPS.
Server LED Indicators The server has LEDs that indicate system health. This section defines those LEDs. Front Panel LEDs There are seven LEDs located on the front panel. Figure 5-1 Front Panel with LED Indicators Table 5-1 Front Panel LEDs LED Power MP Status Status Description On Green 48 V Good.
Server Troubleshooting Server LED Indicators Table 5-1 LED Cell 0 thru Cell 3 Front Panel LEDs (Continued) Status Green Description Cell power on (solid) Off Cell power off Red Cell fault. Cell powered off due to power problem or HPMC/MC event from cell (solid) Yellow (flashing) Locate Blue (flashing) Off Chapter 5 Cell fault warning Latches not latched, LPM not ready, VRMs reporting not good or OT Cell fan slow/failed User requests locator ON and specifies (1 - 72) hour off timeout.
Bulk Power Supply LEDs

There is a single, three-color LED located on each bulk power supply.
PCI Power Supply LEDs

There are three LEDs on the PCI power supply. Green and yellow LEDs follow OL* operation. A multi-color LED reports warnings and faults. Figure 5-3 shows the PCI power supply LED locations (Power, Attention, Fault).

Table 5-3 PCI-X Power Supply LEDs (partial):
• Power (driven by each supply) — On Green: all output voltages generated by the power supply are within limits.
• Fault (driven by each supply).
System and I/O Fan LEDs

There is a single, three-color LED located on the front OLR fan, the rear OLR fan, and the PCI I/O fan.
OL* LEDs

Cell Board LEDs

There is one green power LED located next to each ejector on the cell board in the server that indicates the power is good. When the LED is illuminated green, power is being supplied to the cell board and it is unsafe to remove the cell board from the server. There is one yellow attention LED located next to each ejector lever on the cell board in the server.
Table 5-5 Cell Board OL* LED Indicators (located on the cell board in the server cabinet; partial):
• Power (driven by cell LPM) — On Green: 3.3 V Standby and Cell_Power_Good. Off: 3.3 V Standby off, or 3.
• Attention (driven by MP via GPM).
PCI OL* Card Divider LEDs

The PCI-X OL* card LEDs are located on each of the 16 PCI-X slot dividers in the PCI-X card cage assembly area. The green power LED indicates whether power is supplied to the card slot. The yellow attention LED states are defined in Table 5-6.
Core I/O LEDs

The core I/O LEDs are located on the bulkhead of the installed core I/O PCA. See Table 5-7 on page 168 for status and descriptions. There is a DIP switch on the core I/O card that selects which MP firmware set (indicated by the MP SEL LED) is loaded. The DIP switch is located in the center of the PCA and is visible only when the core I/O card is removed from the system.
Table 5-7 Core I/O LEDs (LEDs as silk-screened on the bulkhead):
• SCSI TRM — On Green: SCSI termpower is on
• SCSI LVD — On Green: SCSI LVD mode (on = LVD, off = SE)
• ATTN — On Yellow: PCI attention
• PWR — On Green: I/O power on
• SYS LAN 10 BT — On Green: SYS LAN in 10 BT mode
• SYS LAN 100 BT — On Green: SYS LAN in 100 BT mode
• SYS LAN 1Gb — On Green: SYS LAN in 1Gb mode
• SYS LAN ACT — On Green: indicates SYS LAN activity
• SYS LAN LINK — On Green: SYS
Table 5-8 Core I/O Buttons
• MP RESET (as silk-screened on the bulkhead) — Location: to the far left side of the core I/O card. Function: resets the MP.

NOTE: If the MP RESET button is held for longer than five seconds, it clears the MP password and resets the LAN, RS-232 (serial port), and modem port parameters to their default values.

LAN default parameters:
• IP address — 192.168.1.1
• Subnet mask — 255.255.255.
Disk Drive LEDs

There are two tri-color LEDs on each disk drive.
Server Management Subsystem Hardware Overview

Server management for the servers is provided by the MP on the core I/O board. The server management hardware is powered by standby power that is available whenever the server is plugged into primary AC power. This allows service access even if the DC power to the server is switched off.
Server Management Overview

Server management consists of four basic functional groups:
• Chassis management
• Chassis logging
• Console and session redirection
• Service access

Chassis Management

Chassis management consists of control and sensing of the state of the server subsystems:
• Control and sensing of bulk power
• Control and sensing of DC-to-DC converters
• Control and sensing of fans
• Control of the front panel LEDs
• Sensing temperature
Server Management Behavior

This section describes how the system responds to over-temperature situations, how the firmware controls and monitors fans, and how it controls power to the server.

Thermal Monitoring

The manageability firmware is responsible for monitoring the ambient temperature in the server and taking appropriate action if this temperature becomes too high.
The MP checks the altimeter circuit at power on. If the altimeter circuit returns an expected value, the altimeter is determined to be good. The altimeter reading is then stored in non-volatile random access memory (NVRAM) on board the core I/O card. If the value is ever lost (for example, after a core I/O replacement), the NVRAM is updated at the next boot, provided the altimeter is functioning normally.
Updating Firmware

The following sections describe how to update firmware using either HP Firmware Manager (HP FM) or FTP.

Firmware Manager

You can update firmware by using the HP Firmware Manager (HP FM). HP FM is a set of tools for updating firmware on an Integrity or PA-RISC system. HP FM is packaged with firmware and distributed through the web. HP FM provides two methods of updating firmware.
CAUTION: Instructions for updating the firmware are contained in the firmware release notes for each version of firmware. Follow the procedure exactly for each firmware update; otherwise, the system could be left in an unbootable state. Figure 5-11 is provided only as an example and should not be used as an upgrade procedure.
PDC Code CRU Reporting

The processor dependent code (PDC) interface defines the locations for the CRUs. These locations are denoted in the following figures to aid in physically locating the CRU when the diagnostics point to a specific CRU that has failed or may be failing in the near future.
Figure 5-13 Server Cabinet CRUs (Rear View). The figure identifies I/O fans 0 through 5, cabinet fans 9 through 20, core I/O (cell 0), core I/O (cell 1), and AC inputs A0, A1, B0, and B1.
6 Removing and Replacing Components

This chapter provides a detailed description of the server customer replaceable unit (CRU) removal and replacement procedures. The procedures in this chapter are intended for use by trained and experienced HP service personnel only.
Customer Replaceable Units (CRUs)

The following section lists the different types of CRUs the server supports.

Hot-Plug CRUs

A CRU is defined as hot-plug if it can be removed from the chassis while the system remains operational, but requires software intervention before it is removed.
Safety and Environmental Considerations

WARNING: Before proceeding with any installation, maintenance, or service on a system that requires physical contact with electrical or electronic components, be sure that either power is removed or safety precautions are followed to protect against electric shock and equipment damage. Observe all WARNING and CAUTION labels on equipment.
• Prepare an ESD-safe work surface large enough to accommodate the various assemblies handled during the upgrade. Use a grounding mat and an anti-static wrist strap, such as those included in the ESD Field Service Kit (9300-1609).
• The anti-static bag cannot function as a static-dissipating mat. Do not use the anti-static bag for any purpose other than to enclose a product.
Powering Off Hardware Components and Powering On the Server

When you remove and replace hardware, you might need to power off hardware components as part of the remove and replace procedure. This section gives details on how to power the hardware components off and on.

Powering Off Hardware Components

To power off individual components or the entire cabinet:

Step 1.
Powering On the System

To power on the system after a repair:

Step 1. If needed, reconnect all power cords to the appropriate receptacles and power on the system.
Step 2. Use the MP Command Menu PE command to power on the hardware component that was powered off and replaced.
Step 3. Use the PS command to verify that power is enabled to the newly replaced part.
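As a sketch of Steps 2 and 3, an MP session might look like the following. The menu dialog shown here is illustrative only — the exact prompts vary with MP firmware revision, and the selections are hypothetical examples:

```
MP> CM                  <-- enter the MP Command Menu
MP:CM> PE               <-- power on/off a hardware entity
  ... select the entity that was replaced and confirm power on ...
MP:CM> PS               <-- display power status
  ... verify that power is enabled to the newly replaced part ...
```

The PE and PS commands are the ones named in the steps above; CM is the usual Main Menu command for entering the Command Menu on HP server management processors.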
Removing and Replacing Covers

It is necessary to remove one or more of the covers (Figure 6-1) to access many of the CRUs within the server chassis.

CAUTION: Observe all electrostatic discharge (ESD) safety precautions before attempting these procedures. Failure to follow ESD safety precautions can result in damage to the server.

Figure 6-1 Cover Locations (top cover, side cover, and front bezel)

Removing the Top Cover

Step 1.
Step 5. Place the cover in a safe location.

Figure 6-2 Top Cover Removed (retaining screws shown)

Replacing the Top Cover

Step 1. Orient the cover on the top of the chassis.

NOTE: Carefully seat the cover to avoid damage to the intrusion switch.

Step 2. Slide the cover into position using a slow, firm pressure to properly seat the cover.
Step 3. Tighten the blue retaining screws securing the cover to the chassis.

Removing the Side Cover

Step 1.
Step 2. Loosen the blue retaining screw securing the cover to the chassis. Refer to Figure 6-3.

Figure 6-3 Side Cover Removal Detail (retaining screw shown)

Step 3. Slide the cover from the chassis toward the rear of the system.
Step 4. Place the cover in a safe location.

Replacing the Side Cover

Step 1. Orient the cover on the side of the chassis.
Step 2. Slide the cover into position using a slow, firm pressure to properly seat the cover.
Step 3.
Removing and Replacing the Front Bezel

To remove the front bezel: From the front of the server, grasp both sides of the bezel and pull firmly toward you (Figure 6-4). The catches will release and the bezel will pull free.

Figure 6-4 Bezel Removal and Replacement

Replacing the Front Bezel

Step 1. If you are replacing the bezel, visually inspect the replacement part for the proper part number.
Step 2.
Removing and Replacing the Front Smart Fan Assembly

The front smart fan assembly is located in the front of the chassis (Figure 6-5). The fan assembly is a hot-swap component. Refer to “Hot-Swap CRUs” on page 181 for a list and description of hot-swap CRUs.

Figure 6-5 Front Smart Fan Assembly Location

Removing the Front Smart Fan Assembly

Step 1. Remove the front bezel.
Step 2. Identify the failed fan assembly. Table 6-1 defines the fan LED states.

Table 6-1 Smart Fan Assembly LED States:
• Green — Fan is at speed and in sync, or has been off speed for less than 12 seconds.
• Flashing yellow — Fan is not keeping up with the speed/sync pulse for longer than 12 seconds.
• Red — Fan has failed, stalled, or has run slow or fast for longer than 12 seconds.
Removing and Replacing the Rear Smart Fan Assembly

The rear smart fan assembly is located in the rear of the chassis (Figure 6-7). The fan assembly is a hot-swap component. Refer to “Hot-Swap CRUs” on page 181 for a list and description of hot-swap CRUs.

Figure 6-7 Rear Smart Fan Assembly Location

Removing the Rear Smart Fan Assembly

Step 1. Identify the failed fan assembly.
Step 3. Slide the fan from the chassis (Figure 6-8).

Figure 6-8 Rear Fan Detail

Replacing the Rear Smart Fan Assembly

Step 1. Position the fan assembly in the chassis.
Step 2. Slide the fan into the connector.
Step 3. Tighten the two thumb screws to secure the fan to the chassis.
Step 4. Verify that the LED is green. Refer to Table 6-1 for a listing of LED definitions.
Removing and Replacing a Disk Drive

The disk drives are located in the front of the chassis. Internal disk drives are hot-plug components. Refer to “Hot-Plug CRUs” on page 181 for a list and description of hot-plug CRUs.

Figure 6-9 Disk Drive Location

Removing the Disk Drive

Step 1. Disengage the front locking latch on the disk drive by pushing the release tab to the right and the latch lever to the left.
Step 2. Pull forward on the front locking latch and carefully slide the disk drive from the chassis (Figure 6-10).

Figure 6-10 Disk Drive Detail

Replacing the Disk Drive

Step 1. Sometimes diskinfo and ioscan will display cached data. Running diskinfo on the device without a disk installed clears the cached data. Enter either of the following commands. For the diskinfo command, replace x with actual values.
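The two commands referred to in Step 1 take the forms sketched below. The device file name is a placeholder — substitute the actual controller, target, and device values (the x values mentioned above) for the replaced slot:

```
# Query the device file to clear cached disk data;
# the device file name shown is a hypothetical example.
diskinfo -v /dev/rdsk/c1t2d0

# Alternatively, rescan the I/O tree so the stale entry is refreshed.
ioscan -f -C disk
```

Both diskinfo(1M) and ioscan(1M) are standard HP-UX administration commands; diskinfo requires the character (rdsk) device file.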
Removing and Replacing a Removable Media Drive

A removable media drive can be a DVD drive or a DDS-4 tape drive located in the front of the chassis (Figure 6-11). You must power off the system before attempting to remove or replace this CRU. Refer to “Powering Off Hardware Components and Powering On the Server” on page 184 and Chapter 4 “Operating System Boot and Shutdown” for more information.
Step 9. Remove the rails and clips from the drive.

Figure 6-12 Removable Media Drive Detail (locking tabs shown)

Replacing the Removable Media Drive

NOTE: If applicable, install the bottom drive before installing the top drive.

Step 1. Attach the rails and clips to the drive.
Step 2. Connect the cables to the rear of the drive.
Step 3. Position the drive in the chassis.
Step 4. Turn the power on to the server.
Step 5.
Removing and Replacing a PCI Card

The PCI cards are located in the rear of the chassis in the PCI card cage (Figure 6-13). PCI cards are hot-plug components. Refer to “Hot-Plug CRUs” on page 181 for a list and description of hot-plug CRUs.

IMPORTANT: Complete information regarding OL* for I/O cards is on the Web at: http://docs.hp.com. Refer to the Interface Card OL* Support Guide for details.
Attention Button

The attention button is the hardware, system slot-based method of performing OL*. This procedure describes how to perform an online replacement of a PCI card using the attention button, for cards whose drivers support online addition or replacement (OLAR). The attention button is also referred to as the doorbell.
Step 5. Firmly pull up on the tabs on the PCI card separator.
Step 6. Remove the card from the PCI slot.

Replacing the PCI Card

Step 1. Install the new replacement PCI card in the slot.

NOTE: Online addition using the attention button does not perform the pre-add sequence of olrad, which uses the olrad -a command.

Step 2. Flip the PCI MRL for the card slot to the closed position.
Step 3. Connect all cables to the replacement PCI card.
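As a side note to the olrad reference above: on HP-UX, slot state can be checked from the shell before and after the replacement. This is a sketch only; the slot ID format depends on the server model and the slot being serviced:

```
# Query OLAR status for all slots (power, occupancy, suspend state).
olrad -q

# Pre-add sequence for a specific slot, if performing the addition
# with olrad instead of the attention button; the slot ID shown is
# a hypothetical example.
olrad -a 0-0-1-2
```

The -a option is the pre-add command named in the NOTE above; -q is the standard olrad query option.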
Step 6. From the EFI Boot Manager Menu, select Boot Option Maintenance Menu and then, from the Main Menu, select Add a Boot Option. Now add the device as a new boot device.

Updating Option ROMs

The Option ROM on a PCI I/O card can be “flashed,” or updated. The procedure to flash an I/O card follows.

Step 1. Install the I/O card into the chassis.
Step 2. Boot the server to the EFI Shell.
Step 3. Execute the EFI search command.
Removing and Replacing a PCI Smart Fan Assembly

The PCI smart fan assembly is located in front of the PCI card cage (Figure 6-15). The fan assembly is a hot-swap component. Refer to “Hot-Swap CRUs” on page 181 for a list and description of hot-swap CRUs.

Figure 6-15 PCI Smart Fan Assembly Location (top view, front of server)

Preliminary Procedures

Complete these procedures before removing the PCI smart fan assembly.

Step 1.
Table 6-2 Smart Fan Assembly LED Indications (Continued):
• Red — Fan has failed, stalled, or has run slow or fast for longer than 12 seconds.
• Off — Fan is not present, no power is applied to the fan, or the fan has failed.

Removing the PCI Smart Fan Assembly

Step 1. Securely grasp the two tabs on the fan assembly (Figure 6-16).
Step 2. Slide the fan upward from the chassis.
Removing and Replacing a PCI Power Supply

The PCI-X power supply is located in the front of the chassis. See Figure 6-17. The power subsystem has N+1 redundancy when both power supplies are installed. It is not necessary to power down the PCI domain to replace a failed PCI power supply.
Preliminary Procedures

Complete these procedures before removing the PCI power supply.

Step 1. Connect to ground with a wrist strap. Refer to “Electrostatic Discharge” on page 182 for more information.
Step 2. Remove the front bezel. Refer to “Removing and Replacing the Front Bezel” on page 189.
Step 3. Identify the failed power supply. Table 6-3 identifies the meaning of the PCI power supply LED state.
Step 4.
Step 3. Slide the module from the chassis. Refer to Figure 6-18.

Figure 6-18 PCI Power Supply Detail

Replacing the PCI Power Supply

Step 1. Slide the power supply into the chassis until the thumb latch clicks into the locked position.
Step 2. The module slides into the chassis easily; apply a slow, firm pressure to properly seat the connection.
Step 3. Verify the status of the power supply LEDs.
Removing and Replacing a Bulk Power Supply (BPS)

The bulk power supply (BPS) is located in the front of the chassis (Figure 6-19). The BPS is a hot-swap component. Refer to “Hot-Swap CRUs” on page 181 for a list and description of hot-swap CRUs.

Cell Board Power Requirements

The number of cell boards installed determines the minimum number of bulk power supplies (BPS) required to support them.
• B1 input provides power to BPS 3, BPS 4, and BPS 5

Figure 6-19 BPS Location (Front Bezel Removed) — BPS 0 through BPS 5
Removing the BPS

Step 1. Remove the front bezel.
Step 2. Isolate the failing BPS. Table 6-5 defines the states of the single multicolored LED on the BPS.

Table 6-5 BPS LED Definitions:
• Blinking green — BPS in standby state; no faults or warnings are present.
• Green — BPS in run state (48 volt output enabled); no faults or warnings are present.
Step 4. Slide the BPS forward using the handle to remove it from the chassis.

Figure 6-20 BPS Detail (release latch shown)

Replacing the BPS

Step 1. Grip the handle with one hand while supporting the rear of the BPS in the other hand.

NOTE: The BPS slides into the chassis easily; apply a slow, firm pressure to properly seat the connection.

Step 2. Slide the power supply into the slot until it is fully seated.
A Replaceable Parts

This appendix contains the server CRU list. For the most current list of part numbers, go to the HP Part Surfer web site at: http://www.partsurfer.hp.
Replaceable Parts List

Table A-1 Server CRU Descriptions and Part Numbers (CRU description — Replacement P/N; Exchange P/N)

POWER CORDS AND CABLES
• Jumper UPS-PDU 2.5m C19/C20 — 8120-6884; None
• Pwr Crd, C19/unterminated intl-Europe — 8120-6895; None
• Pwr Crd, C19/IEC-309 L6-20 4.5 m BLACK CA Ay — 8120-6897; None
• Pwr Crd, C19/L6-20 4.5 m BLACK C — 8120-6903; None
• Pwr Crd, Jumper UPS-PDU 4.5 m C19/C20 — 8120-6961; None
• Pwr Crd, C19/GB 1002 4.
Table A-1 Server CRU Descriptions and Part Numbers (Continued)
• Snap, Bezel Attach — C2786-40002; None

KITS
• Removable Media Rail Kit — A6752-67011; None
B MP Commands This appendix contains a list of the server management commands.
Server Management Commands Table B-1 lists the server management commands.
Table B-3 System and Access Configuration Commands (Continued)
• CP — Display partition cell assignments
• DATE — Set the time and date
• DC — Reset parameters to default configuration
• DE — Display entity status
• DI — Disconnect remote or LAN console
• DFW — Duplicate firmware
• DU — Display devices on bus
• FW — Obsolete. FW is now available at the MP Main Menu.
C Templates This appendix contains blank floor plan grids and equipment templates. Combine the necessary number of floor plan grid sheets to create a scaled version of the computer room floor plan.
Templates Figure C-1 illustrates the overall dimensions required for the servers.
Equipment Footprint Templates

Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1 foot). These templates show basic equipment dimensions and space requirements for servicing. Refer to Figure C-2 on page 223. The service areas shown on the template drawings are lightly shaded. Use the equipment templates with the floor plan grid to define the location of the equipment that will be installed in your computer room.
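The stated scale (1/4 inch = 1 foot) is simple to apply when checking whether equipment fits a given floor area before cutting templates. The following Python sketch is not part of the original guide; it only illustrates the conversion:

```python
# Template scale from this appendix: 1/4 inch of template = 1 foot of floor.
SCALE_INCHES_PER_FOOT = 0.25

def feet_to_template_inches(feet: float) -> float:
    """Convert a real-world dimension in feet to template inches."""
    return feet * SCALE_INCHES_PER_FOOT

def template_inches_to_feet(inches: float) -> float:
    """Convert a template measurement in inches back to feet."""
    return inches / SCALE_INCHES_PER_FOOT

# A 40 ft x 30 ft computer room drawn at template scale:
print(feet_to_template_inches(40))  # 10.0 (inches on the grid)
print(feet_to_template_inches(30))  # 7.5 (inches on the grid)
```

For example, a server footprint measured at 10 inches on the grid corresponds to a 40-foot room dimension when converted back.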
Computer Room Layout Plan

Use the following procedure to create a computer room layout plan:

Step 1. Remove several copies of the floor plan grid (Figure C-3).
Step 2. Cut and join them together (as necessary) to create a scale model floor plan of your computer room.
Step 3. Remove a copy of each applicable equipment footprint template (Figure C-2).
Step 4. Cut out each template selected in step 3; then place it on the floor plan grid created in step 2.
Step 5.
NOTE: Attach a reduced copy of the completed floor plan to the site survey. HP installation specialists use this floor plan during equipment installation.

Figure C-2 Equipment Footprint Templates
Figure C-3 Planning Grid
Figure C-4 Planning Grid
Index A ac power input, 98 voltage check, 97 AC power inputs A0, 98 A1, 98 B0, 98 B1, 98 AC power specifications, 45 access commands, 217 administrator, 175 air ducts, 54 illustrated, 54 AR, 217 ASIC, 21 B backplane mass storage, 41, 43 system, 39, 41, 43, 51 BO, 217 BPS (Bulk Power Supply), 106 Bulk Power Supplies BPS, 99, 207 C CA, 217 cable, 157 cards core I/O, 171 CC, 217 cell board, 38, 43, 51, 100, 113, 164, 207 verifying presence, 112 cell controller, 21 chassis login, 172 management, 172 checklist i
Index getty, 171 grounding, 45 H HE, 217 high availability (HA), 171 hot-plug defined, 181 hot-swap defined, 181 housekeeping power, 105 HP-UX, 171 humidity, 49 I I/O bay, 173 I/O Subsystem, 37, 38 iCOD definition, 116 email requirements, 116 ID, 217 IF, 217 initial observations interval one, 100 interval three, 100 interval two, 100 inspecting for damage, 57 installation checklist, 116 warranty, 57 installation problems, 157 interference, 182 IP address default, 108 lc Comand Screen, 109 IT, 217 L LAN, 171
Index Processor Dependent Code PDC, 114 processors, 21 PS, 217 PWRGRD, 217 pwrgrd (Power Grid) command, 113 R rank, 31 RE, 217 Reflection 1, 104, 113 RL, 217 RR, 217 RS, 217 RS-232, 171 RU, 217 S safety considerations, 182 serial display device connecting, 103, 104 recommended windows, 113 setting parameters, 103 server, 171 block diagram, 22 computer room layout, 222 configuration, 171 front panel, 26 management, 171 management commands, 217 management overview, 172 status commands, 217 service access, 172