HP Integrity rx7640 and HP 9000 rp7440 Servers User Service Guide HP Part Number: AB312-9010A Published: November 2007 Edition: Fourth Edition
Legal Notices
© Copyright 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
Table of Contents
About this Document ........ 15
    Book Layout ........ 15
    Intended Audience ........ 15
    Publishing History
    Typical Power Dissipation and Cooling ........ 39
    Acoustic Noise Specification ........ 40
    Airflow ........ 40
    System Requirements Summary
    Verifying the System Configuration Using the EFI Shell ........ 84
    Booting HP-UX Using the EFI Shell ........ 84
    Adding Processors with Instant Capacity ........ 84
    Installation Checklist
Server Management Behavior ........ 133
    Thermal Monitoring ........ 134
    Fan Control ........ 134
    Power Control
    Removing a Slimline DVD Carrier ........ 162
    Installation of Two Slimline DVD+RW Drives ........ 163
        Removable Media Cable Configuration for the Slimline DVD+RW Drives ........ 163
        Installing the Slimline DVD+RW Drives
List of Figures
1-1    8-Socket Server Block Diagram ........ 20
1-2    Server (Front View With Bezel)
5-4    Front, Rear and PCI I/O Fan LEDs ........ 126
5-5    Cell Board LED Locations
List of Tables
1-1    Cell Board CPU Module Load Order ........ 25
1-2    Server DIMMs ........ 27
1-3    PCI-X paths for Cell 0
List of Examples
4-1    Single-User HP-UX Boot ........ 101
7-1    Single-User HP-UX Boot
About this Document This document covers the HP Integrity rx7640 and HP 9000 rp7440 Servers. This document does not describe system software or partition configuration in any detail. For detailed information concerning those topics, refer to the HP System Partitions Guide: Administration for nPartitions.
Related Information You can access other information on HP server hardware management, Microsoft® Windows® administration, and diagnostic support tools at the following Web sites: http://docs.hp.com The main Web site for HP technical documentation is http://docs.hp.com. Server Hardware Information: http://docs.hp.com/hpux/hw/ The http://docs.hp.com/hpux/hw/ Web site is the systems hardware portion of docs.hp.com.
It provides HP nPartition server hardware management information, including site preparation, installation, and more. Windows Operating System Information You can find information about administration of the Microsoft® Windows® operating system at the following Web sites, among others: • http://docs.hp.com/windows_nt/ • http://www.microsoft.
HP Encourages Your Comments Hewlett-Packard welcomes your feedback on this publication. Please address your comments to edit@presskit.rsn.hp.com and note that you will not receive an immediate reply. All comments are appreciated.
1 HP Integrity rx7640 Server and HP 9000 rp7440 Server Overview The HP Integrity rx7640 and HP 9000 rp7440 Servers are members of HP’s business-critical computing platform family in the mid-range product line. The information in chapters one through six of this guide applies to the HP Integrity rx7640 and HP 9000 rp7440 Servers, except for a few items specifically denoted as applying only to the HP Integrity rx7640 Server. Chapter seven covers any information specific to the HP 9000 rp7440 Server only.
Figure 1-1 8-Socket Server Block Diagram

[Block diagram showing the two cell boards (CPUs, memory, PDH, and cell controller), the system backplane with clock and reset logic and the SBA links, the PCI-X backplane with SBA and LBA chips, the LAN/SCSI boards and MP, the disk backplane with disks and DVD/tape, and the two bulk power supplies. Hot-pluggable links and buses, and cable connections, are indicated.]
Figure 1-2 Server (Front View With Bezel)

Figure 1-3 Server (Front View Without Bezel)

[Callouts in Figure 1-3 identify the power switch, removable media drive, PCI power supplies, front OLR fans, bulk power supplies, and hard disk drives.]
The server has the following dimensions:
•   Depth: Defined by cable management constraints to fit into a standard 36-inch deep rack:
    25.5 inches from front rack column to PCI connector surface
    26.7 inches from front rack column to MP Core I/O connector surface
    30 inches overall package dimension, including 2.7 inches protruding in front of the front rack columns
•   Width: 44.45 cm (17.5 inches), constrained by EIA standard 19 inch racks
•   Height: 10U – 0.54 cm = 43.91 cm (17.287 inches)
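The height figure can be sanity-checked from the EIA rack-unit definition. The sketch below is illustrative only; it assumes the standard 1U = 4.445 cm (1.75 in), and the helper name is ours, not from this guide:

```python
# Sanity-check the chassis height quoted above.
# Assumes the EIA rack-unit definition: 1U = 4.445 cm (1.75 in).
U_CM = 4.445

def chassis_height_cm(rack_units, clearance_cm):
    """Nominal rack opening minus the clearance the chassis gives up."""
    return rack_units * U_CM - clearance_cm

height_cm = chassis_height_cm(10, 0.54)   # 10U minus 0.54 cm
height_in = height_cm / 2.54              # convert to inches
```

This reproduces the figures in the text: 43.91 cm, or 17.287 inches.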
The PCI OLR fan modules are located in front of the PCI-X cards. These six 9.2-cm fans are housed in plastic carriers. They are configured in two rows of three fans. Four OLR system fan modules, externally attached to the chassis, are 15-cm (6.5-inch) fans. Two fans are mounted on the front surface of the chassis and two are mounted on the rear surface. The cell boards are accessed from the right side of the chassis behind a removable side cover.
Figure 1-6 Front Panel LEDs and Power Switch

Cell Board

The cell board, illustrated in Figure 1-7, contains the processors, main memory, and the cell controller (CC) application-specific integrated circuit (ASIC), which interfaces the processors and memory with the I/O and with the other cell board in the server. The CC is the heart of the cell board, enabling communication with the other cell board in the system. It connects to the processor dependent hardware (PDH) and microcontroller hardware.
Because of space limitations on the cell board, the PDH and microcontroller circuitry resides on a riser board that plugs into the cell board at a right angle. The cell board also includes clock circuits, test circuits, and de-coupling capacitors. PDH Riser Board The PDH riser board is a small card that plugs into the cell board at a right angle.
Figure 1-8 CPU Locations on Cell Board

[The figure labels Socket 0 through Socket 3 and the cell controller.]

Memory Subsystem

Figure 1-9 shows a simplified view of the memory subsystem. It consists of two independent access paths, each path having its own address bus, control bus, data bus, and DIMMs. Address and control signals are fanned out through register ports to the synchronous dynamic random access memory (SDRAM) on the DIMMs. The memory subsystem comprises four independent quadrants.
Figure 1-9 Memory Subsystem

DIMMs

The memory DIMMs used by the server are custom designed by HP. Each DIMM contains DDR-II SDRAM memory that operates at 533 MT/s. Industry-standard DIMM modules do not support the high availability and shared memory features of the server, and are therefore not supported. The server supports DIMMs with densities of 1 GB, 2 GB, and 4 GB.
On the server, each nPartition has its own dedicated portion of the server hardware which can run a single instance of the operating system. Each nPartition can boot, reboot, and operate independently of any other nPartitions and hardware within the same server complex. The server complex includes all hardware within an nPartition server: all cabinets, cells, I/O chassis, I/O devices and racks, management and interconnecting hardware, power supplies, and fans.
System Backplane

The system backplane contains the following components:
•   The system clock generation logic
•   The system reset generation logic
•   DC-to-DC converters
•   Power monitor logic
•   Two local bus adapter (LBA) chips that create internal PCI buses for communicating with the core I/O card
The backplane also contains connectors for attaching the cell boards, the PCI-X backplane, the core I/O board set, SCSI cables, bulk power, chassis fans, the front panel display, intrusion switches, and the system
Cell board 0 and cell board 1 communicate through an SBA over the SBA link. The SBA link consists of both an inbound and an outbound link with an effective bandwidth of approximately 11.5 GB/sec. The SBA converts the SBA link protocol into “ropes.” A rope is defined as a high-speed, point-to-point data bus. The SBA can support up to 16 of these high-speed, bidirectional rope links for a total aggregate bandwidth of approximately 11.5 GB/sec.
Table 1-4 PCI-X Paths Cell 1

Cell    PCI-X Slot    I/O Chassis    Path
1       1             1              1/0/8/1
1       2             1              1/0/10/1
1       3             1              1/0/12/1
1       4             1              1/0/14/1
1       5             1              1/0/6/1
1       6             1              1/0/4/1
1       7             1              1/0/2/1
1       8             1              1/0/1/1

The server supports two internal SBAs. Each SBA provides the control and interfaces for eight PCI-X slots. The interface is through the rope bus (16 ropes per SBA).
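The slot-to-path mapping in Table 1-4 can be captured as a small lookup. This is a sketch only: the device numbers are copied verbatim from the table, while the dictionary and function names are ours:

```python
# PCI-X slot -> device number in the hardware path, per Table 1-4 (cell 1).
# The values are copied from the table; the helper is illustrative only.
SLOT_TO_DEVICE = {1: 8, 2: 10, 3: 12, 4: 14, 5: 6, 6: 4, 7: 2, 8: 1}

def pcix_path(cell, slot):
    """Return the hardware path string for a PCI-X slot, e.g. '1/0/12/1'."""
    return f"{cell}/0/{SLOT_TO_DEVICE[slot]}/1"
```

For example, pcix_path(1, 3) yields "1/0/12/1", matching the table row for slot 3.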
Table 1-5 PCI-X Slot Types

I/O Partition    Slot    Maximum MHz    Maximum Peak Bandwidth    Ropes      Supported Cards / PCI Mode
0, 1             8       133            533 MB/s                  001        3.3 V PCI or PCI-X Mode 1
0, 1             7       133            1.06 GB/s                 002/003    3.3 V PCI or PCI-X Mode 1
0, 1             6       266            2.13 GB/s                 004/005    3.3 V or 1.5 V PCI-X Mode 2
0, 1             5       266            2.13 GB/s                 006/007    3.3 V or 1.5 V PCI-X Mode 2
0, 1             4       266            2.13 GB/s                 014/015    3.3 V or 1.5 V PCI-X Mode 2
0, 1             3       266            2.13 GB/s                 012/013    3.3 V or 1.5 V PCI-X Mode 2
0, 1             2       133            1.06 GB/s                 010/011    3.3 V PCI or PCI-X Mode 1
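The peak-bandwidth column follows directly from a 64-bit (8-byte) bus at the listed clock rate. A minimal sketch of that arithmetic, assuming the exact PCI clocks behind the rounded values in the table (66.66, 133.33, and 266.66 MHz; the function name is ours):

```python
# Peak throughput for a 64-bit (8-byte-wide) PCI-X bus: clock x 8 bytes.
# The MHz values in Table 1-5 are rounded; the exact clocks are
# 66.66, 133.33, and 266.66 MHz, which is how 133 MHz yields "1.06 GB/s".
BYTES_PER_TRANSFER = 8

def peak_mb_per_s(clock_mhz):
    return clock_mhz * BYTES_PER_TRANSFER

# 66.66 MHz  -> ~533 MB/s
# 133.33 MHz -> ~1066 MB/s (quoted as 1.06 GB/s)
# 266.66 MHz -> ~2133 MB/s (quoted as 2.13 GB/s)
```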
to the cell controller chip on cell board 2, and the ASIC on cell location 1 connects to the cell controller chip on cell board 3 through external link cables. Downstream, the ASIC spawns 16 logical 'ropes' that communicate with the core I/O bridge on the system backplane, PCI interface chips, and PCIe interface chips. Each PCI chip produces a single 64–bit PCI-X bus supporting a single PCI or PCI-X add-in card. Each PCIe chip produces a single x8 PCI-Express bus supporting a single PCIe add-in card.
Table 1-6 PCI-X/PCIe Slot Types (continued)

I/O Partition    Slot    Maximum MHz    Maximum Peak Bandwidth    Ropes      Supported Cards / PCI Mode
1                8       66             533 MB/s                  001        3.3 V PCI or PCI-X Mode 1
1                7       133            1.06 GB/s                 002/003    3.3 V PCI or PCI-X Mode 1
1                6       266            2.13 GB/s                 004/005    3.3 V PCIe
1                5       266            2.13 GB/s                 006/007    3.3 V PCIe
1                4       266            2.13 GB/s                 014/015    3.3 V PCIe
1                3       266            2.13 GB/s                 012/013    3.3 V PCIe
1                2       133            1.06 GB/s                 010/011    3.3 V PCI or PCI-X Mode 1
1                1       133            1.06 GB/s                 008/009    3.3 V PCI or PCI-X Mode 1
2 Server Site Preparation

This chapter describes the basic server configuration and its physical specifications and requirements.

Dimensions and Weights

This section provides dimensions and weights of the system components. Table 2-1 gives the dimensions and weights for a fully configured server.

Table 2-1 Server Dimensions and Weights

                                 Standalone      Packaged
Height - Inches (centimeters)    17.3 (43.9)     35.75 (90.8)
Width - Inches (centimeters)     17.5 (44.4)     28.0 (71.1)
Depth - Inches (centimeters)     30.0 (76.2)
Table 2-3 Example Weight Summary (continued)

Component                                   Quantity    Multiply By lb (kg)    Weight lb (kg)
DVD drive                                   1           2.2 (1.0)              4.4 (2.0)
Hard disk drive                             4           1.6 (0.73)             6.40 (2.90)
Chassis with skins and front bezel cover    1           90.0 (41.0)            131.0 (59.42)
Total weight                                                                   286.36 (129.89)

Table 2-4 Weight Summary

Component                                   Quantity    Multiply By lb (kg)
Cell Board                                              27.8 (12.16)
PCI Card                                                0.34 (0.153)
Power Supply (BPS)                                      18 (8.2)
DVD Drive                                               2.2 (1.0)
Hard Disk Drive                                         1.6 (0.73)
Chassis with skins and front bezel cover                90.0 (41.0)
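The per-row arithmetic in the weight summary is simply quantity times unit weight. A hedged sketch, using unit weights copied from Table 2-4 (the dictionary keys and helper name are ours; always use the quantities of your actual configuration):

```python
# Quantity x unit weight, as in the "Example Weight Summary" tables.
# Unit weights in lb, copied from Table 2-4; illustrative only.
UNIT_WEIGHT_LB = {
    "cell_board": 27.8,
    "pci_card": 0.34,
    "power_supply_bps": 18.0,
    "dvd_drive": 2.2,
    "hard_disk_drive": 1.6,
    "chassis": 90.0,
}

def row_weight_lb(component, quantity):
    """Weight contribution of one table row, in pounds."""
    return quantity * UNIT_WEIGHT_LB[component]
```

For example, four hard disk drives contribute 4 x 1.6 = 6.40 lb, matching the table row.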
Table 2-5 Power Cords

Part Number    Description                  Where Used
8120-6895      Stripped end, 240 volt       International - Other
8120-6897      Male IEC309, 240 volt        International - Europe
8121-0070      Male GB-1002, 240 volt       China
8120-6903      Male NEMA L6-20, 240 volt    North America/Japan

System Power Specifications

Table 2-6 lists the AC power requirements for the HP Integrity rx7640 and HP 9000 rp7440 servers. Table 2-7 lists the system power requirements for the HP 9000 rp7440 Server.
able to produce for the server with any combination of hardware under laboratory conditions using aggressive software applications designed specifically to work the system at maximum load. This number can safely be used to compute thermal loads and power consumption for the system under all conditions. Environmental Specifications This section provides the environmental, power dissipation, noise emission, and airflow specifications for the server.
Environmental Temperature Sensor

To ensure that the system is operating within the published limits, the ambient operating temperature is measured using a sensor placed near the chassis inlet, between the cell boards. Data from the sensor is used to control the fan speed and to initiate system overtemp shutdown.

Non-Operating Environment

The system is designed to withstand ambient temperatures between -40°C and 70°C under non-operating conditions.
Table 2-9 Typical Server Configurations for the HP Integrity rx7640 Server

Cell Boards    Memory Per Cell Board    PCI Cards (assumes 10 watts each)    DVDs    Hard Disk Drives    Core I/O    Bulk Power Supplies    Typical Power    Typical Cooling
Qty            GBytes                   Qty                                  Qty     Qty                 Qty         Qty                    Watts            BTU/hr
2              32                       16                                   2       4                   2           2                      2128             7265
2              16                       8                                    0       2                   2           2                      1958             6685
2              8                        8                                    0       2                   2           2                      1921             6558
1              8                        8                                    0       1                   1           2                      1262             4308

The air conditioning data is derived using the following equations:
•   Watts x (0.860) = kcal/hour
•   Watts x (3.414) = BTU/hour
•   BTU/hour divided by 12,000 = tons of refrigeration
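The conversions above can be checked mechanically. A minimal sketch using the constants cited in the text (the function names are ours):

```python
# Standard power-to-cooling conversions used for Table 2-9.
def watts_to_kcal_per_hr(watts):
    return watts * 0.860

def watts_to_btu_per_hr(watts):
    return watts * 3.414

def btu_per_hr_to_tons(btu_hr):
    return btu_hr / 12000.0
```

For the fully loaded 2-cell row of Table 2-9: 2128 W x 3.414 gives approximately 7265 BTU/hr, in agreement with the Typical Cooling column.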
Figure 2-1 Airflow Diagram System Requirements Summary This section summarizes the requirements that must be considered in preparing the site for the server. Power Consumption and Air Conditioning To determine the power consumed and the air conditioning required, follow the guidelines in Table 2-9. NOTE: When determining power requirements, consider any peripheral equipment that will be installed during initial installation or as a later update.
3 Installing the Server Inspect shipping containers when the equipment arrives at the site. Check equipment after the packing has been removed. This chapter discusses how to inspect and install the server. Receiving and Inspecting the Server Cabinet This section contains information about receiving, unpacking and inspecting the server cabinet. NOTE: The server will ship in one of three different configurations.
Figure 3-1 Removing the Polystraps and Cardboard

3.  Remove the corrugated wrap from the pallet.
4.  Remove the packing materials.

    CAUTION: Cut the plastic wrapping material off rather than pull it off. Pulling the plastic covering off represents an electrostatic discharge (ESD) hazard to the hardware.

5.  Remove the four bolts holding down the ramps, and remove the ramps.
NOTE: Figure 3-2 shows one ramp attached to the pallet on either side of the cabinet with each ramp secured to the pallet using two bolts. In an alternate configuration, the ramps are secured together on one side of the cabinet with one bolt.
6. Remove the six bolts from the base that attaches the rack to the pallet. Figure 3-3 Preparing to Roll Off the Pallet WARNING! Be sure that the leveling feet on the rack are raised before you roll the rack down the ramp, and any time you roll the rack on the casters. Use caution when rolling the cabinet off the ramp. A single server in the cabinet weighs approximately 508 lb. It is strongly recommended that two people roll the cabinet off the pallet.
Figure 3-4 Securing the Cabinet Standalone and To-Be-Racked Systems Servers shipped in a stand-alone or to-be-racked configuration must have the core I/O handles and the PCI towel bars attached at system installation. Obtain and install the core I/O handles and PCI towel bars from the accessory kit A6093-04046. The towel bars and handles are the same part. Refer to service note A6093A-11. Rack-Mount System Installation Information is available to help with rack-mounting the server.
2.  Reduce the weight by removing the bulk power supplies and cell boards. Place each on an ESD-approved surface.

    CAUTION: System damage can occur through improper removal and reinstallation of bulk power supplies and cell boards. Refer to Chapter 6: Removing and Replacing Components, for the correct procedures to remove and reinstall these components.

3.  Remove the system's left and right side covers.
Figure 3-6 Attaching the Front of the Handle to the Chassis

[Thumbscrews are called out in the figure.]

7.   Repeat steps 2–4 to install the other handle on the other side of the server.
8.   After the handles are secured, the server is ready to lift.
9.   Handles are removed in the reverse order of steps 2–4.
10.  After moving the server, remove the lift handles from the chassis.
11.  After the server is secured, replace the previously removed cell boards and bulk power supplies.
12.  Reinstall the side covers and front bezel.
WARNING! Use caution when using the lifter. To avoid injury, because of the weight of the server, center the server on the lifter forks before raising it off the pallet. Always rack the server in the bottom of a cabinet for safety reasons. Never extend more than one server from the same cabinet while installing or servicing another server product. Failure to follow these instructions could result in the cabinet tipping over. Figure 3-7 RonI Lifter 1.
Figure 3-8 Positioning the Lifter to the Pallet

4.  Carefully slide the server onto the lifter forks.
5.  Slowly raise the server off the pallet until it clears the pallet cushions.
Figure 3-9 Raising the Server Off the Pallet Cushions

6.  Carefully roll the lifter and server away from the pallet.
7.  Do not raise the server any higher than necessary when moving it over to the rack.
Table 3-1 Wheel Kit Packing List (continued)

Part Number    Description                   Quantity
A6753-04006    Left front caster assembly    1
A6753-04007    Left rear caster assembly     1
0515-2478      M4 x 0.
Figure 3-11 Left Foam Block Position

6.  Carefully tilt the server and place the other foam block provided in the kit under the right side of the server.

Figure 3-12 Right Foam Block Position

7.  Remove the cushions from the lower front and rear of the server. Do not disturb the side cushions.
Figure 3-13 Foam Block Removal

8.  Locate and identify the caster assemblies. Use the following table to identify the casters.

    NOTE: The caster part number is stamped on the caster mounting plate.

    Table 3-2 Caster Part Numbers

    Caster         Part Number
    Right front    A6753-04001
    Right rear     A6753-04005
    Left front     A6753-04006
    Left rear      A6753-04007

9.  Locate and remove one of the four screws from the plastic pouch. Attach a caster to the server.
Figure 3-14 Attaching a Caster to the Server

10.  Attach the remaining casters to the server using the screws supplied in the plastic pouch.
11.  Remove the foam blocks from the left and right side of the server.
12.  Locate the plywood ramp.
13.  Attach the ramp to the edge of the pallet.

     NOTE: There are two pre-drilled holes in the ramp. Use the two screws taped to the ramp to attach the ramp to the pallet.

14.  Carefully roll the server off the pallet and down the ramp.
15.  Locate the caster covers.
Figure 3-15 Securing Each Caster Cover to the Server

[The figure identifies the front and rear casters and their caster covers.]

17.  Wheel kit installation is complete when both caster covers are attached to the server, and the front bezel and all covers are installed.

Figure 3-16 Completed Server

Installing the Power Distribution Unit

The server may ship with a power distribution unit (PDU). Two 60 A PDUs are available for the server. Each PDU is 3U high and is mounted horizontally between the rear columns of the server cabinet.
The 60A IEC PDU has four 16A circuit breakers and is constructed for International use. Each of the four circuit breakers has two IEC-320 C19 outlets providing a total of eight IEC-320 C19 outlets. Each PDU is 3U high and is rack-mounted in the server cabinet. Documentation for installation will accompany the PDU. The documentation can also be found at the external Rack Solutions Web site at: http://www.hp.com/racksolutions This PDU might be referred to as a Relocatable Power Tap outside HP.
Figure 3-17 Disk Drive and DVD Drive Location

DVD/DAT/Slimline DVD drive path:    1/0/0/3/1.2.0
Slimline DVD drive path:            0/0/0/3/1.2.0
Drive 1-1 path:                     1/0/0/3/0.6.0
Drive 1-2 path:                     1/0/1/1/0/4/1.6.0
Drive 0-2 path:                     0/0/1/1/0/4/1.5.0
Drive 0-1 path:                     0/0/0/3/0.6.0

Use the following procedure to install the disk drives:
1.  Be sure the front locking latch is open, then position the disk drive in the chassis.
Figure 3-18 Removable Media Location

1.  Remove the front bezel.
2.  Remove the filler panel from the server.
3.  Install the left and right media rails and clips to the drive.
4.  Connect the cables to the rear of the drive.
5.  Fold the cables out of the way and slide the drive into the chassis. The drive easily slides into the chassis; however, slow, firm pressure is needed for proper seating. The front locking tab will latch to secure the drive in the chassis.
Table 3-3 HP Integrity rx7640 PCI-X and PCIe I/O Cards (continued)

Part Number    Card Description                              OS support marks (as printed)
A5506B         4-port 10/100b-TX
A5838A         2-port Ultra2 SCSI/2-Port 100b-T Combo
A6386A         Hyperfabric II
A6749A         64-port Terminal MUX
A6795A         2G FC Tachlite                                B
A6825A         Next Gen 1000b-T                              b
A6826A         2-port 2Gb FC                                 B
A6828A         1-port U160 SCSI                              B B
A6829A         2-port U160 SCSI                              B B
A6847A         Next Gen 1000b-SX                             b b
A6869B         Obsidian 2 VGA/USB                            B
A7011A         1000b-SX Dual Port                            b b b
A7012A         1000b-T Dual Port                             b b b
Table 3-3 HP Integrity rx7640 PCI-X and PCIe I/O Cards (continued)

Part Number    Card Description                          HP-UX / VMS / Windows® / Linux® support marks (as printed)
AD167A         Emulex 4Gb/s                              B B
AD168A         Emulex 4Gb/s DC                           B B
AD193A         1 port 4Gb FC & 1 port GbE HBA PCI-X      Bb B
AD194A         2 port 4Gb FC & 2 port GbE HBA PCI-X      Bb B
AD278A         8-Port Terminal MUX
AD279A         64-Port Terminal MUX
AD307A         LOA (USB/VGA/RMP)                         B B
J3525A         2-port Serial
337972-B21     SA P600 (Redstone)                        B B

PCI-e Cards
A8002A         Emulex 1–port 4Gb FC PCIe                 B B
A8003A         E
IMPORTANT: The above list of part numbers is current and correct as of September 2007. Part numbers change often. Check the following website to ensure you have the latest part numbers associated with this server: http://partsurfer.hp.com/cgi-bin/spi/main Installing an Additional PCI-X Card IMPORTANT: While the installation process for PCI/PCI-X cards and PCI-e cards is the same, PCI-e cards are physically smaller than PCI-X cards and are not interchangeable.
IMPORTANT: The installation process varies depending on which method for installing the PCI card is selected. PCI I/O card installation procedures should be downloaded from the http://docs.hp.com/ Web site. Background information and procedures for adding a new PCI I/O card using online addition are found in the Interface Card OL* Support Guide.

PCI I/O OL* Card Methods
Figure 3-19 PCI I/O Slot Details

[The figure shows the manual release latch (closed and open positions), the OL* attention button, the power LED (green), and the attention LED (yellow).]

7.  Wait for the green power LED to stop blinking.
8.  Check for errors in the hotplugd daemon log file (default: /var/adm/hotplugd.log).

The critical resource analysis (CRA) performed during an attention-button-initiated add action is very restrictive, and the action will not complete (it will fail) in order to protect critical resources from being impacted.
IMPORTANT: If you are installing the A6869B in HP servers based on the sx1000 chipset, such as HP Superdome, rx7620 or rx8620, the system firmware must be updated to a minimum revision of 3.88. IMPORTANT: Search for available PCI slots that support the conventional clock speed to conserve availability of higher speed PCI-X card slots to PCI-X cards that utilize the higher bandwidth. This applies to mid-range as well as high-end HP server I/O PCI-X backplanes.
No Console Display

Symptom: Black screen. No text displayed.
Cause: Hardware problem.
Actions:
•   Must have supported power enabled.
•   Must have a functional VGA/USB PCI card.
•   Must have a functional PCI slot. Select another slot on the same partition/backplane.
•   Must have the VGA/USB PCI card firmly seated in the PCI backplane slot.
•   Must have a supported monitor.
•   Must have verified cable connections to the VGA/USB PCI card.

Symptom: Display unreadable.
Actions:
•   Ensure system FW supports the VGA/USB PCI card.
Figure 3-21 Voltage Reference Points for IEC 320 C19 Plug

IMPORTANT: Perform these measurements for every power cord that plugs into the server.

1.  Measure the voltage between L1 and L2. This is considered to be a phase-to-phase measurement in North America. In Europe and certain parts of Asia-Pacific, this measurement is referred to as a phase-to-neutral measurement. The expected voltage should be between 200–240 V AC regardless of the geographic region.
Figure 3-22 Safety Ground Reference Check

WARNING! SHOCK HAZARD. Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace the access cover when finished testing primary power.

1.  Measure the voltage between A0 and A1 as follows:
    1.  Take the AC voltage down to the lowest scale on the volt meter.
    2.  Insert the probe into the ground pin for A0.
    3.  Insert the other probe into the ground pin for A1.
    4.  Verify that the measurement is between 0–5 V AC.
Figure 3-23 Safety Ground Reference Check

WARNING! SHOCK HAZARD. Risk of shock hazard while testing primary power. Use properly insulated probes. Be sure to replace the access cover when finished testing primary power.

1.  Measure the voltage between A0 and A1 as follows:
    1.  Take the AC voltage down to the lowest scale on the volt meter.
    2.  Insert the probe into the ground pin for A0.
    3.  Insert the other probe into the ground pin for A1.
    4.  Verify that the measurement is between 0–5 V AC.
4.  Measure the voltage between A1 and B1 as follows:
    1.  Take the AC voltage down to the lowest scale on the volt meter.
    2.  Insert the probe into the ground pin for A1.
    3.  Insert the other probe into the ground pin for B1.
    4.  Verify that the measurement is between 0–5 V AC. If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
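The pass/fail limits quoted in the measurement steps can be summarized in a small check routine. This is only a sketch of the acceptance logic stated in the text; the limits come from the procedure, while the function names are ours:

```python
# Acceptance limits from the receptacle verification procedure:
# phase-to-phase (L1-L2) must read 200-240 V AC; ground-pin reference
# checks (A0-A1, A0-B0, A0-B1, A1-B1) must read under 5 V AC,
# otherwise escalate and do not plug in the power cord.
def phase_to_phase_ok(volts_ac):
    return 200 <= volts_ac <= 240

def ground_reference_ok(volts_ac):
    return 0 <= volts_ac < 5
```

For example, a 208 V phase-to-phase reading passes, while a 6 V ground-reference reading means the situation must be escalated.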
7. Route and connect the server power connector to the site power receptacle. • For locking type receptacles, line up the key on the plug with the groove in the receptacle. • Push the plug into the receptacle and rotate to lock the connector in place. WARNING! Do not set site AC circuit breakers serving the processor cabinets to ON before verifying that the cabinet has been wired into the site AC power supply correctly.
The current power grid configuration is: Single grid

Power grid configuration preference.
    1. Single grid
    2. Dual grid
Select Option:

Figure 3-26 Distribution of Input Power for Each Bulk Power Supply

[The figure shows Power Source A feeding inputs A0 and A1, and Power Source B feeding inputs B0 and B1, across BPS 0 and BPS 1.]

WARNING! Voltage is present at various locations within the server whenever a power source is connected. This voltage is present even when the main power switch is in the off position.
To install the line cord anchor:
1.  Remove and retain the thumb nuts from the studs.
2.  Install the line cord anchor over the studs. Refer to Figure 3-27: “Two Cell Line Cord Anchor (rp7410, rp7420, rp7440, rx7620, rx7640)”.
3.  Tighten the thumb nuts onto the studs.
4.  Weave the power cables through the line cord anchor.
5.  Leave enough slack to allow the plugs to be disconnected from the receptacles without removing the cords from the line cord anchor.
of the system. For systems running a single partition, one MP/SCSI board is required. A second MP/SCSI board is required for a dual-partition configuration, or if you want to enable primary or secondary MP failover for the server. Connections to the MP/SCSI board include the following: • DB9 connector for Local Console • 10/100 Base-T LAN RJ45 connector (for LAN and Web Console access) This LAN uses standby power and is active when AC is present and the front panel power switch is off.
3.  Select Com1.
4.  Check the settings and change, if required. Go to More Settings to set Xon/Xoff. Click OK to close the More Settings window.
5.  Click OK to close the Connection Setup window.
6.  Pull down the Setup menu and select Terminal (under the Emulation tab).
7.  Select the VT100 HP terminal type.
8.  Click Apply. This option is not highlighted if the terminal type you want is already selected.
9.  Click OK.
Refer to power cord policies to interpret LED indicators. 3. Log in to the MP: a. Enter Admin at the login prompt. The login is case sensitive. It takes a few moments for the MP prompt to display. If it does not, be sure the laptop serial device settings are correct: 8 bits, no parity, 9600 baud, and na for both Receive and Transmit. Then, try again. b. Enter Admin at the password prompt. The password is case sensitive.
Figure 3-31 The lc Command Screen MP:CM> lc This command modifies the LAN parameters. Current configuration of MP customer LAN interface MAC address : 00:12:79:b4:03:1c IP address : 15.11.134.222 0x0f0b86de Hostname : metro-s Subnet mask : 255.255.248.0 0xfffff800 Gateway : 15.11.128.1 0x0f0b8001 Status : UP and RUNNING Link : Connected 100Mb Half Duplex Do you want to modify the configuration for the MP LAN (Y/[N]) q NOTE: The value in the IP address field has been set at the factory.
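The hexadecimal values the lc screen prints beside each address are simply the four octets packed into a 32-bit value. A sketch of that conversion (the helper name is ours, not an MP command):

```python
# Convert dotted-quad notation to the packed hex form shown by lc.
def dotted_to_hex(addr):
    """'15.11.134.222' -> '0x0f0b86de' (octets packed big-endian)."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)
    return f"0x{value:08x}"
```

Applied to the screen above: "15.11.134.222" gives "0x0f0b86de", "255.255.248.0" gives "0xfffff800", and "15.11.128.1" gives "0x0f0b8001", matching the IP address, subnet mask, and gateway lines.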
10. A screen similar to the following is displayed, allowing verification of the settings:

Figure 3-32 The ls Command Screen

11. To return to the MP main menu, enter ma.
12. To exit the MP, enter x at the MP main menu.

Accessing the Management Processor via a Web Browser

Web browser access is an embedded feature of the MP/SCSI card. The Web browser enables access to the server through the LAN port on the core I/O card. MP configuration must be done from an ASCII console connected to the Local RS232 port.
Figure 3-33 Example sa Command
5. Enter W to modify web access mode.
6. Enter option 2 to enable web access.
7. Launch a Web browser on the same subnet using the IP address for the MP LAN port.
Figure 3-34 Browser Window
Zoom In/Out Title Bar
8. Select the emulation type you want to use.
9. Click anywhere on the Zoom In/Out title bar to generate a full screen MP window.
10. Log in to the MP when the login window appears.
Access to the MP via a Web browser is now possible.
After logging in to the MP, verify that the MP detects the presence of all the cells installed in the cabinet. It is important for the MP to detect the cell boards. If it does not, the partitions will not boot. To determine if the MP detects the cell boards: 1. At the MP prompt, enter cm. This displays the Command Menu. The Command Menu enables viewing or modifying the configuration and viewing the utilities controlled by the MP. To view a list of the commands available, enter he.
2. Select the appropriate console device (deselect unused devices): a. Choose the “Boot option maintenance menu” choice from the main Boot Manager Menu. b. Select the Console Output, Input or Error devices menu item for the device type you are modifying: • “Select Active Console Output Devices” • “Select Active Console Input Devices” • “Select Active Console Error Devices” c. Available devices will be displayed for each menu selection.
are chosen, the OS may fail to boot or may boot with output directed to the wrong location. Therefore, any time new potential console devices are added to the system, or any time NVRAM on the system is cleared, review the console selections to ensure that they are correct.
Configuring the Server for HP-UX Installation
Installation of the HP-UX operating system requires the server hardware to have a specific configuration. If the server’s rootcell value is set incorrectly, an installation of HP-UX will fail.
Selecting a Boot Partition Using the MP
At this point in the installation process, the hardware is set up, the MP is connected to the LAN, the AC and DC power have been turned on, and the self-test is completed. Now the configuration can be verified. After the DC power on and the self-test is complete, use the MP to select a boot partition.
1. From the MP Main Menu, enter cm.
2. From the MP Command Menu, enter bo.
3. Select the partition to boot. Partitions can be booted in any order.
Capacity CPUs can be “activated.” Activating an Instant Capacity CPU automatically and instantaneously transforms it into an instantly ordered and fulfilled CPU upgrade that requires payment. After the Instant Capacity CPU is activated and paid for, it is no longer an Instant Capacity CPU but an ordered and delivered CPU upgrade for the system. The following list provides the information needed to update to iCAP version 8.
Table 3-5 Factory-Integrated Installation Checklist (continued)
Procedure
• Allow proper clearance
• Cut polystrap bands
• Remove cardboard top cap
• Remove corrugated wrap from the pallet
• Remove four bolts holding down the ramps and remove the ramps
• Remove antistatic bag
• Check for damage (exterior and interior)
• Position ramps
• Roll cabinet off ramp
• Unpack the peripheral cabinet (if ordered)
• Unpack other equipment
• Remove and dispose of packaging material
• Move cabinet(s) and equipment to computer room
• Move cabinets i
Table 3-5 Factory-Integrated Installation Checklist (continued)
Procedure (In-process / Completed)
• Verify system configuration and set boot parameters
• Set automatic system restart
• Boot partitions
• Configure remote login (if required). See Appendix B.
4 Booting and Shutting Down the Operating System This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware partition) and procedures for shutting down the OS. Operating Systems Supported on Cell-based HP Servers HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.
NOTE: SuSE Linux Enterprise Server 10 is supported on HP rx8640 servers, and will be supported on other cell-based HP Integrity servers with the Intel® Itanium® dual-core processor (rx7640 and Superdome) soon after the release of those servers. Refer to “Booting and Shutting Down Linux” (page 114) for details. NOTE: On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter, which determines how firmware may interleave memory residing on the cell.
At the EFI Shell, the bcfg command supports listing and managing the boot options list for all OSes except Microsoft Windows. On HP Integrity systems with Windows installed, the \MSUtil\nvrboot.efi utility is provided for managing Windows boot options from the EFI Shell. On HP Integrity systems with OpenVMS installed, the \efi\vms\vms_bcfg.efi and \efi\vms\vms_show utilities are provided for managing OpenVMS boot options.
Enabled means that Hyper-Threading will be active on the next reboot of the nPartition. Active means that each processor core in the nPartition has a second virtual core that enables simultaneously running multiple threads. • Autoboot Setting You can configure the autoboot setting for each nPartition either by using the autoboot command at the EFI Shell, or by using the Set Auto Boot TimeOut menu item at the EFI Boot Option Maintenance menu. To set autoboot from HP-UX, use the setboot command.
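As an illustration, autoboot can be checked and enabled from a running HP-UX instance with the setboot command. This is a sketch; confirm the exact options against the setboot(1M) manpage for your release.

    # setboot -b on      Enable autoboot for this nPartition
    # setboot -v         Display the current boot paths and autoboot setting

The equivalent change can also be made from firmware, as described above, using the EFI Shell autoboot command or the Set Auto Boot TimeOut menu item.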
To change the nPartition behavior when an OS is shut down and halted, use either the acpiconfig enable softpowerdown EFI Shell command or the acpiconfig disable softpowerdown command, and then reset the nPartition to make the ACPI configuration change take effect.
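Putting the two commands above together, a typical EFI Shell session to enable the softpowerdown behavior might look like the following (illustrative only):

    Shell> acpiconfig enable softpowerdown
    Shell> reset

The reset is required because the ACPI configuration change takes effect only after the nPartition is reset.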
— parconfig EFI shell command The parconfig command is a built-in EFI shell command. Refer to the help parconfig command for details. — \EFI\HPUX\vparconfig EFI shell command The vparconfig command is delivered in the \EFI\HPUX directory on the EFI system partition of the disk where HP-UX virtual partitions has been installed on a cell-based HP Integrity server. For usage details, enter the vparconfig command with no options.
To set the CLM configuration, use Partition Manager or the parmodify command. For details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/PARMGR2/). Adding HP-UX to the Boot Options List This section describes how to add an HP-UX entry to the system boot options list. You can add the \EFI\HPUX\HPUX.EFI loader to the boot options list from the EFI Shell or EFI Boot Configuration menu (or in some versions of EFI, the Boot Option Maintenance Menu).
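A sketch of adding the HP-UX loader from the EFI Shell with bcfg follows. The file system number (fs0:) and boot-option position shown are illustrative assumptions for your configuration:

    Shell> fs0:
    fs0:\> bcfg boot add 1 \EFI\HPUX\HPUX.EFI "HP-UX"

This adds an "HP-UX" entry as the first item in the boot options list, referencing the loader on the EFI System Partition of the accessed device.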
4. Exit the console and management processor interfaces if you are finished using them. To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Booting HP-UX This section describes the following methods of booting HP-UX: • • • “Standard HP-UX Booting” (page 96) — The standard ways to boot HP-UX. Typically, this results in booting HP-UX in multiuser mode.
Primary Boot Path:       0/0/2/0/0.13
                         0/0/2/0/0.d (hex)
HA Alternate Boot Path:  0/0/2/0/0.14
                         0/0/2/0/0.e (hex)
Alternate Boot Path:     0/0/2/0/0.0
                         0/0/2/0/0.0 (hex)

Main Menu: Enter command or menu >

3. Boot the device by using the BOOT command from the BCH interface. You can issue the BOOT command in any of the following ways:
• BOOT
Issuing the BOOT command with no arguments boots the device at the primary (PRI) boot path.
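For example, a BCH session that boots the primary path might look like the following sketch. The ISL prompt question appears on some firmware revisions; answering n lets the boot proceed unattended:

    Main Menu: Enter command or menu > BOOT PRI
    Do you wish to stop at the ISL prompt prior to booting? (y/n) >> n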
4. Exit the console and management processor interfaces if you are finished using them. To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Procedure 4-3 HP-UX Booting (EFI Boot Manager) From the EFI Boot Manager menu, select an item from the boot options list to boot HP-UX using that boot option. The EFI Boot Manager is available only on HP Integrity servers.
2. At the EFI Shell environment, issue the acpiconfig command to list the current ACPI configuration for the local nPartition. On cell-based HP Integrity servers, to boot the HP-UX OS, an nPartition ACPI configuration value must be set to default. If the acpiconfig value is not set to default, then HP-UX cannot boot; in this situation you must reconfigure acpiconfig or booting will be interrupted with a panic when launching the HP-UX kernel. To set the ACPI configuration for HP-UX:
a. At the EFI Shell, enter the acpiconfig default command.
b. Reset the nPartition to make the change take effect.
3.
6. Exit the console and management processor interfaces if you are finished using them. To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu. Single-User Mode HP-UX Booting This section describes how to boot HP-UX in single-user mode on cell-based HP 9000 servers and cell-based HP Integrity servers.
Refer to the hpux(1M) manpage for a detailed list of hpux loader options. Example 4-1 Single-User HP-UX Boot ISL Revision A.00.42 JUN 19, 1999 ISL> hpux -is /stand/vmunix Boot : disk(0/0/2/0/0.13.0.0.0.0.0;0)/stand/vmunix 8241152 + 1736704 + 1402336 start 0x21a0e8 .... INIT: Overriding default level with level ’s’ INIT: SINGLE USER MODE INIT: Running /sbin/sh # 4. Exit the console and management processor interfaces if you are finished using them.
HP-UX Boot Loader for IA64 Revision 1.723 Press Any Key to interrupt Autoboot \efi\hpux\AUTO ==> boot vmunix Seconds left till autoboot 9 [User Types a Key to Stop the HP-UX Boot Process and Access the HPUX.EFI Loader ] Type ’help’ for help HPUX> 5. At the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>), enter the boot -is vmunix command to boot HP-UX (the /stand/vmunix kernel) in single-user (-is) mode. HPUX> boot -is vmunix > System Memory = 4063 MB loading section 0 .........................
3. From the ISL prompt, issue the appropriate Secondary System Loader (hpux) command to boot the HP-UX kernel in the desired mode. To boot HP-UX in LVM-maintenance mode: ISL> hpux -lm boot /stand/vmunix 4. Exit the console and management processor interfaces if you are finished using them. To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
• To perform a reboot for reconfiguration of an nPartition: shutdown -R
• To hold an nPartition at a shutdown for reconfiguration state: shutdown -R -H
For details, refer to the shutdown(1M) manpage.
NOTE: On HP rx7620, rx7640, rx8620, and rx8640 servers, you can configure the nPartition behavior when an OS is shut down and halted (shutdown -h or shutdown -R -H).
Booting and Shutting Down HP OpenVMS I64
This section presents procedures for booting and shutting down HP OpenVMS I64 on cell-based HP Integrity servers and procedures for adding HP OpenVMS to the boot options list.
• To determine whether the cell local memory (CLM) configuration is appropriate for HP OpenVMS, refer to “HP OpenVMS I64 Support for Cell Local Memory” (page 105).
• To add an HP OpenVMS entry to the boot options list, refer to “Adding HP OpenVMS to the Boot Options List” (page 105).
NOTE: OpenVMS I64 installation and upgrade procedures assist you in setting up and validating a boot option for your system disk. HP recommends that you allow the procedure to do this. To configure booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM). For more information on this utility and other restrictions, refer to the HP OpenVMS for Integrity Servers Upgrade and Installation Manual.
Booting HP OpenVMS
To boot HP OpenVMS I64 on a cell-based HP Integrity server, use either of the following procedures:
• “Booting HP OpenVMS (EFI Boot Manager)” (page 107)
• “Booting HP OpenVMS (EFI Shell)” (page 107)
CAUTION: ACPI Configuration for HP OpenVMS I64 Must Be default On cell-based HP Integrity servers, to boot the HP OpenVMS OS, an nPartition ACPI configuration value must be set to default.
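As an illustration, verifying and correcting the ACPI configuration from the EFI Shell might look like the following:

    Shell> acpiconfig            List the current ACPI configuration
    Shell> acpiconfig default    Set the configuration required for OpenVMS
    Shell> reset                 Reset the nPartition so the change takes effect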
2. At the EFI Shell environment, issue the map command to list all currently mapped bootable devices. The bootable file systems of interest typically are listed as fs0:, fs1:, and so on. 3. Access the EFI System Partition for the device from which you want to boot HP OpenVMS (fsX:, where X is the file system number). For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
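Putting the steps above together, a boot session might look like the following sketch. The loader path \efi\vms\vms_loader.efi is an assumption based on the \efi\vms directory mentioned earlier; verify it against your system disk:

    Shell> map                        List mapped bootable devices (fs0:, fs1:, ...)
    Shell> fs2:                       Access the EFI System Partition on file system 2
    fs2:\> \efi\vms\vms_loader.efi    Hypothetical loader invocation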
2. At the OpenVMS command line (DCL), issue the @SYS$SYSTEM:SHUTDOWN command and specify the shutdown options in response to the prompts given.
IMPORTANT: Microsoft Windows supports using CLM on cell-based HP Integrity servers. For best performance in an nPartition running Windows, HP recommends that you configure the CLM parameter to 100 percent for each cell in the nPartition. To check CLM configuration details from an OS, use Partition Manager or the parstatus command.
3. List the contents of the \EFI\Microsoft\WINNT50 directory to identify the name of the Windows boot option file (Boot00nn) that you want to import into the system boot options list.

    fs0:\> ls EFI\Microsoft\WINNT50
    Directory of: fs0:\EFI\Microsoft\WINNT50

      09/18/03  11:58a <DIR>        1,024  .
      09/18/03  11:58a <DIR>        1,024  ..
      12/18/03  08:16a                354  Boot0001
               1 File(s)        354 bytes
               2 Dir(s)

    fs0:\>

4. At the EFI Shell environment, issue the \MSUtil\nvrboot.efi command to manage the Windows boot options list.

Refer to “Shutting Down Microsoft Windows” (page 113) for details on shutting down the Windows OS.
CAUTION: ACPI Configuration for Windows Must Be windows On cell-based HP Integrity servers, to boot the Windows OS, an nPartition ACPI configuration value must be set to windows. At the EFI Shell, enter the acpiconfig command with no arguments to list the current ACPI configuration. If the acpiconfig value is not set to windows, then Windows cannot boot.
5. Exit the console and management processor interfaces if you are finished using them. To exit the console environment, press ^B (Control+B); this exits the console and returns to the management processor Main menu. To exit the management processor, enter X at the Main menu. Shutting Down Microsoft Windows You can shut down the Windows OS on HP Integrity servers using the Start menu or the shutdown command.
2. Check whether any users are logged in. Use the query user or query session command. 3. Issue the shutdown command and the appropriate options to shut down the Windows Server 2003 on the system. You have the following options when shutting down Windows: • To shut down Windows and reboot: shutdown /r Alternatively, you can select the Start —> Shut Down action and select Restart from the drop-down menu.
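For example, a delayed restart with a console message might be issued as follows. The /t and /c options are standard Windows shutdown options, but confirm them with shutdown /? on your system:

    C:\> shutdown /r /t 60 /c "Restarting for reconfiguration"

The /t option specifies the delay in seconds, and /c attaches a comment that is shown to logged-in users before the restart.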
# is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/PARMGR2/). To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command.
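As an illustration, the CLM configuration might be checked with either of the commands named above:

    # parstatus -V -c0      From the OS: detailed information for cell 0, including CLM
    Shell> info mem         From the EFI Shell: memory details, including cell local memory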
• bcfg boot mv #a #b — Move the item number specified by #a to the position specified by #b in the boot options list.
• bcfg boot add # file.efi "Description" — Add a new boot option to the position in the boot options list specified by #. The new boot option references file.efi and is listed with the title specified by Description. For example, bcfg boot add 1 \EFI\redhat\elilo.efi "Red Hat Enterprise Linux" adds a Red Hat Enterprise Linux item as the first entry in the boot options list.
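As an illustration, a session that lists the current boot options, adds an entry, and then repositions an item might look like this:

    Shell> bcfg boot dump                                            List current boot options
    Shell> bcfg boot add 1 \EFI\redhat\elilo.efi "Red Hat Enterprise Linux"
    Shell> bcfg boot mv 3 1                                          Move item 3 to position 1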
\EFI\redhat\elilo.efi \EFI\redhat\elilo.conf By default the ELILO.EFI loader boots Linux using the kernel image and parameters specified by the default entry in the elilo.conf file on the EFI System Partition for the boot device. To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a space) at the ELILO boot prompt. To exit the ELILO.EFI loader, use the exit command.
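A minimal elilo.conf might look like the following sketch. The kernel file name, root device, and append string are illustrative assumptions; use the values appropriate to your installation:

    prompt
    timeout=50
    default=linux

    image=vmlinuz
            label=linux
            root=/dev/sda2
            append="console=ttyS0"
            read-only

With this file on the EFI System Partition, ELILO.EFI boots the "linux" entry automatically when the timeout expires.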
Refer to “Shutting Down Linux” (page 119) for details on shutting down the SuSE Linux Enterprise Server OS. CAUTION: ACPI Configuration for SuSE Linux Enterprise Server Must Be default On cell-based HP Integrity servers, to boot the SuSE Linux Enterprise Server OS, an nPartition ACPI configuration value must be set to default. At the EFI Shell, enter the acpiconfig command with no arguments to list the current ACPI configuration.
3. Enter ELILO at the EFI Shell command prompt to launch the ELILO.EFI loader. If needed, you can specify the loader’s full path by entering \efi\SuSE\elilo at the EFI Shell command prompt. 4. Allow the ELILO.EFI loader to proceed with booting the SuSE Linux kernel. By default, the ELILO.EFI loader boots the kernel image and options specified by the default item in the elilo.conf file. To interact with the ELILO.EFI loader, interrupt the boot process (for example, type a space) at the ELILO boot prompt.
2. Issue the shutdown command with the desired command-line options, and include the required time argument to specify when the operating system shutdown is to occur. For example, shutdown -r +20 will shut down and reboot the system starting in 20 minutes.
5 Server Troubleshooting
This chapter contains tips and procedures for diagnosing and correcting problems with the server and its customer replaceable units (CRUs). Information about the various status LEDs on the server is also included.
Common Installation Problems
The following sections contain general procedures to help you locate installation problems.
CAUTION: Do not operate the server with the top cover removed for an extended period of time.
a. Check the LED for each bulk power supply (BPS). The LED is located in the lower left-hand corner of the power supply face. Table 5-2 shows the states of the LEDs.
b. Verify that the power supply and a minimum of two power cords are plugged in to the chassis. A yellow LED indicates that the line cord connections are not consistent with the pwrgrd settings.
NOTE: A minimum of two power cords must be connected to A0 and B0 or A1 and B1.
Table 5-1 Front Panel LEDs (continued)
LED         Status           Description
MP Status   Green (solid)    At least one MP is installed and active
            Off              No MPs are installed, or at least one is installed but not active
            Red (flashing)   Cabinet overtemp condition exists
            Red (solid)      Cabinet shutdown for thermal reasons
            Yellow           Cabinet fan slow or failed, master slave failover.
Figure 5-2 BPS LED Locations
BPS LEDs
Table 5-2 BPS LEDs
LED Indication    Description
Blinking Green    BPS is in standby state with no faults or warnings
Green             BPS is in run state (48 volt output enabled) with no faults or warnings
Blinking Yellow   BPS is in standby or run state with warning(s) present but no faults
Yellow            BPS is in standby state with recoverable fault(s) present but no non-recoverable faults
Blinking Red      BPS state might be unknown, non-recoverable fault(s) present
Red               Not Used
Of
Figure 5-3 PCI-X Power Supply LED Locations
Table 5-3 PCI Power Supply LEDs
LED       Driven By     State          Description
1 Power   Each supply   On Green       All output voltages generated by the power supply are within limits.
                        Off            Power to entire system has been removed.
2 Fault                 Flash Yellow   The temperature within the power supply is above the lower threshold.
                        On Yellow      The temperature of the power supply is approaching the thermal limit.
Figure 5-4 Front, Rear and PCI I/O Fan LEDs
Table 5-4 System and PCI I/O Fan LEDs
LED          Driven By   State          Description
Fan Status   Fan         On Green       Normal
                         Flash Yellow   Predictive failure
                         Flash Red      Failed
                         Off            No power
OL* LEDs
Cell Board LEDs
There is one green power LED located next to each ejector on the cell board in the server that indicates the power is good. When the LED is illuminated green, power is being supplied to the cell board and it is unsafe to remove the cell board from the server.
Figure 5-5 Cell Board LED Locations
Voltage Margin Active (Red), Standby (Green), PDHC Heartbeat (Green), Manageability Fab (Green), Cell Power (Green), Attention (Yellow), V3P3 Standby (Green), Cell Power (Green), SM (Green), Attention (Yellow), BIB (Green), V12 Standby (Green)
Table 5-5 Cell Board OL* LED Indicators
Location                                 LED         Driven by   State      Description
On cell board                            Power       Cell LPM    On Green   3.3 V Standby and Cell_Pwr_Good
(located in the server cabinet)                                  Off        3.3 V Standby off, or 3.
                                         Attention
Figure 5-6 PCI-X OL* LED Locations
Slot Attention (Yellow)
Slot Power (Green)
Card Divider
Core I/O LEDs
The core I/O LEDs are located on the bulkhead of the installed core I/O PCA. Refer to Table 5-6 “Core I/O LEDs” to determine status and description.
Figure 5-7 Core I/O Card Bulkhead LEDs
Power, Attention, MP LAN (10 - off, 100 - on), ACT/Link, Locate, Reset, Active, MP Pwr
Table 5-6 Core I/O LEDs
LED (as silk-screened on the bulkhead)   State       Description
Power                                    On Green    I/O power on
Attention                                On Yellow   PCI attention
MP LAN 10 BT                             On Green    MP LAN in 10 BT mode
MP LAN 100 BT                            On Green    MP LAN in 100 BT mode
ACT/Link                                 On Green    MP LAN activity
Locate                                   On Blue     Locater LED
Reset                                    On Amber    Indicates that the MP is being reset
Active                                   On Green    This c
Figure 5-8 Core I/O Button Locations
OLR
MP Reset
Table 5-7 Core I/O Buttons
Button Identification (as silk-screened on the bulkhead): MP RESET
Location: Center of the core I/O card
Function: Resets the MP
NOTE: If the MP RESET button is held for longer than five seconds, it will clear the MP password and reset the LAN, RS-232 (serial port), and modem port parameters to their default values.
LAN Default Parameters
• IP Address — 192.168.1.1
• Subnet mask — 255.255.255.0
• Default gateway — 192.168.1.
Figure 5-9 Disk Drive LED Location
Activity LED
Status LED
Table 5-9 Disk Drive LEDs
Activity LED   Status LED   Flash Rate                        Description
Off            Green        Steady                            Normal operation, power applied
Green          Off          Steady                            Green stays on during foreground drive self-test
Green          Off          Flutter at rate of I/O activity   Disk activity
Off            Yellow       Flashing at 1 Hz or 2 Hz          Predictive failure, needs immediate investigation
Off            Yellow       Flashing at 0.
MP-to-MP link. All external connections to the MP must be to the primary MP in slot 1. The secondary MP ports will be disabled. The server configuration cannot be changed without the MP. In the event of a primary MP failure, the secondary MP automatically becomes the primary MP.
Thermal Monitoring
The manageability firmware is responsible for monitoring the ambient temperature in the server and taking appropriate action if this temperature becomes too high. The ambient temperature of the server is broken into four ranges: normal, overtemp low (OTL), overtemp medium (OTM), and overtemp high (OTH). Figure 5-10 shows the actions taken at each range transition. Actions for increasing temperatures are shown on the left; actions for decreasing temperatures are shown on the right.
NOTE: Fans driven to a high RPM in dense air cannot maintain the expected RPM and will be considered bad by the MP, leading to a “False Fan Failure” condition.
Power Control
If active, the manageability firmware is responsible for monitoring the power switch on the front panel. Setting this switch to the ON position is a signal to the MP to turn on 48 V DC power to the server. The PE command can also be used to send this signal. This signal does not always generate a transition to the powered state.
NOTE: The LAN configuration for the server must be set for the FTP connection to function correctly, regardless of whether the console LAN, local serial, or other connection is used to issue the FW command.
FW – Firmware Update
• Access Level: Administrator
• Scope: Complex
• Description: This command prompts the user for the location of the firmware software and the FLASH handle (from a list) that represents all upgradeable entities. Figure 5-11 illustrates the output and questions requiring responses.
Figure 5-12 Server Cabinet CRUs (Front View)
I/O Fan 0, I/O Fan 1, I/O Fan 2, I/O Fan 3, I/O Fan 4, I/O Fan 5
Fan 0, Fan 1
Cell Board 1, Cell Board 0
PDC Code CRU Reporting
Figure 5-13 Server Cabinet CRUs (Rear View) Fan 2 Core I/O 0 Fan 3 Core I/O 1 A0 A1 B0 B1 Verifying Cell Board Insertion Cell Board Extraction Levers It is important that both extraction levers on the cell board be in the locked position. Both levers must be locked for the cell board to power up and function properly. Power to the cell board should only be removed using the MP:CM>PE command or by shutting down the partition or server.
Table 5-10 Ready Bit States Ready Bit State MP:CM> DE Command Power Status Meaning True “RDY” (denoted by upper case letters) All cell VRMs are installed and both cell latches are locked. False “rdy” (denoted by lower case letters) One or more VRMs are not installed or failed and/or one or more cell latches are not locked.
6 Removing and Replacing Components This chapter provides a detailed description of the server customer replaceable unit (CRU) removal and replacement procedures. The sections contained in this chapter are: Customer Replaceable Units (CRUs) The following section lists the different types of CRUs the server supports. Hot-plug CRUs A CRU is defined as hot-plug if it can be removed from the chassis while the system remains operational, but requires software intervention prior to removing the CRU.
Safety and Environmental Considerations WARNING! Before proceeding with any installation, maintenance, or service on a system that requires physical contact with electrical or electronic components, be sure that either power is removed or safety precautions are followed to protect against electric shock and equipment damage. Observe all WARNING and CAUTION labels on equipment. All installation and service work must be done by qualified personnel.
2. If the component you will power off is assigned to an nPartition, then use the Virtual Front Panel (VFP) to view the current boot state of the nPartition. Shut down HP-UX on the nPartition before you power off any of the hardware assigned to the nPartition. Refer to Chapter 4 “Operating System Boot and Shutdown.” When you are certain the nPartition is not running HP-UX, you can power off components that belong to the nPartition.
Removing and Replacing the Top Cover It is necessary to remove and replace one or more of the covers to access the components within the server chassis. CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
1. Connect to ground with a wrist strap and grounding mat. Refer to “Electrostatic Discharge” (page 142) for more information.
2. Loosen the retaining screws securing the cover to the rear of the chassis.
3. Slide the cover toward the rear of the chassis.
4. Lift the cover up and away from the chassis.
Replacing the Top Cover
1. Orient the cover on the top of the chassis.
NOTE: Carefully seat the cover to avoid damage to the intrusion switch.
2. 3.
Removing a Side Cover
Figure 6-4 Side Cover Retaining Screws
1. Connect to ground with a wrist strap and grounding mat. Refer to “Electrostatic Discharge” (page 142) for more information.
2. Loosen the retaining screw securing the cover to the rear of the chassis.
3. Slide the cover toward the rear of the chassis; then rotate outward and remove from chassis.
Figure 6-5 Side Cover Removal Detail
Replacing a Side Cover
1. Slide the cover in position. The cover easily slides into position.
2.
3. Tighten the retaining screw to secure the cover to the chassis. Removing and Replacing the Front Bezel Figure 6-6 Bezel hand slots Grasp here Removing the Front Bezel • From the front of the server, grasp both sides of the bezel and pull firmly toward you. The catches will release and the bezel will pull free. Replacing the Front Bezel • From the front of the server, grasp both sides of the bezel and push toward the server. The catches will secure the bezel to the chassis.
Figure 6-7 Front Panel Assembly Location
Front Panel Board
Removing the PCA Front Panel Board
1. Remove the front bezel and the top and left side covers.
2. Follow proper procedures to power off the server.
3. Disconnect the SCSI cables from MSBP and move them out of the way. This helps provide access to the common tray cage cover.
4. Disconnect the DVD power cable from the mass storage backplane.
5. Disconnect the front panel cable from the system backplane (Figure 6-8).
Figure 6-8 Front Panel Board Detail
Replacing the Front Panel Board
1. Slide the front panel into its slot from inside the server. Insert the left side of the board into the slot first; the right side of the board is angled toward the rear of the chassis.
2. Insert the right side of the board. Ensure that the power switch does not get caught in one of the many holes in the front of the chassis. Push the panel forward until the lock tabs click.
3. Attach the front panel bezel.
Figure 6-9 Front Panel Board Cable Location on Backplane Front Panel Board Connector System Backplane Removing and Replacing a Front Smart Fan Assembly The Front Smart Fan Assembly is located in the front of the chassis. The fan assembly is a hot swappable component.
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
Removing a Front Smart Fan Assembly
Figure 6-11 Front Fan Detail
1. Remove the front bezel.
2. Pull the fan release pin upward away from the fan.
3. Slide the fan away from the connector.
4. Pull the fan away from the chassis.
Replacing a Front Smart Fan Assembly
1. Position the fan assembly on the chassis fan guide pins.
2. Slide the fan into the connector.
3. Verify that the fan release pin is in the locked position.
4. Replace the front bezel.
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
Removing a Rear Smart Fan Assembly
Figure 6-13 Rear Fan Detail
1. Pull the fan release pin upward away from the fan.
2. Slide the fan away from the connector.
3. Pull the fan away from the chassis.
Replacing a Rear Smart Fan Assembly
1. Carefully position the fan assembly on the chassis fan guide pins.
2. Slide the fan into the connector.
3. Verify that the fan release pin is in the locked position.
NOTE: A green fan LED indicates the fan is operational.
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
1. Disengage the front locking latch on the disk drive by pushing the release tab to the right and the latch lever to the left.
2. Pull forward on the front locking latch and carefully slide the disk drive from the chassis.
Replacing a Disk Drive
NOTE: The diskinfo and ioscan commands can sometimes return cached data. To avoid this, run these commands while the disk drive is removed.
1. Before installing the disk drive, enter the following command:
#diskinfo -v /dev/rdsk/cxtxdx
2.
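For example, while the drive is removed, the device state can be inspected and then rechecked after insertion. The hardware path and device file shown are hypothetical placeholders:

    # ioscan -fnC disk               List disk devices and their device files
    # diskinfo -v /dev/rdsk/c1t2d0   Verbose information for a hypothetical drive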
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
Figure 6-16 DVD/DAT Location
DVD/DAT
Removing a DVD/DAT Drive
1. To remove the DVD/DAT, depress the front locking latch to loosen the drive from the chassis.
2. Partially slide the drive out.
3. Disengage the cables from the rear of the DVD/DAT.
4. Remove the rails and clips from the drive.
5. Completely slide the drive from the chassis.
Figure 6-17 DVD/DAT Detail Installing a Half-Height DVD or DAT Drive. CAUTION: The following section describes precise instructions for removable media cable measurement and orientation. Failure to comply will damage drive(s), data, and power cables. Use this section to configure and install a half-height DVD or DAT drive. Internal DVD and DAT Devices That Are Not Supported In HP Integrity rx7640 Table 6-3 refers to DVD or DAT drives that are not supported in the HP Integrity rx7640 server.
Figure 6-18 Single SCSI and Power Cable in Drive Bay (callouts: Top DVD/DAT SCSI Cable; Single Removable Media Power Cable)

The following procedure describes how to configure the removable media drive bay cables for use with the half-height DVD or DAT drive.

1. Turn off power and remove the top cover.
10. Carefully position the metal removable media cover over the SCSI data and power cables and fasten it into place.

CAUTION: Ensure the service length of the cables remains fixed as described in steps 7 and 8 when securing the removable media cover. Failure to comply will damage the removable media drive, data cables, and power cables.

NOTE: The SCSI data cable end folds over the metal cover.

11. Carefully fold the Bottom DVD data cable at the orange lines and lay it in the server chassis. See Figure 6-20.
Figure 6-22 Power Cable Connection and Routing (Removable Media Power Cable Routed Through the Cable Clip on the Back of the DVD Drive)

5. Connect the SCSI cable to the rear of the drive.
6. Install the left and right media rails and clips on the drive.
7. Fold the cables out of the way and slide the drive into the chassis. The drive slides into the chassis easily; however, slow, firm pressure is needed for proper seating. The front locking tab latches to secure the drive in the chassis.
Removing a Slimline DVD Drive

1. Press the drive release mechanism to release the drive from the drive bay.
2. Slide the drive out of the DVD carrier.

Replacing a Slimline DVD Drive

• Slide the drive into the DVD carrier until it clicks into place.

Removing and Replacing a Dual Slimline DVD Carrier

The slimline DVD carrier is located in the front of the chassis. Remove system power to this component before attempting to remove or replace it.
Installation of Two Slimline DVD+RW Drives

The HP Integrity rx7640 server can be configured with two slimline DVD+RW drives. Installing them requires two core I/O card sets in the server. When the slimline DVD+RW drives are installed, the top drive is associated with cell 1 and the bottom drive with cell 0. Installation also requires the following configuration of the data and power cables in the removable media drive bay.
Figure 6-26 Top DVD/DAT and Bottom DVD Cables Nested Together (callouts: Bottom DVD Cable; Top DVD/DAT Cable; Cables Nested Together)

8. Insert the two power cables into the media bay so they are on the left side of the drive bay when viewed from the front of the system.
9. Carefully insert the SCSI cables into the media bay. The SCSI cables lie on top of the power cables previously inserted into the media bay.
12. Replace the top cover.
13. Connect the SCSI cables to the mass storage backplane.
14. Proceed with Installing the Slimline DVD+RW Drives.

Installing the Slimline DVD+RW Drives

1. Ensure the cables are the correct length. The black line on the SCSI cables and the red flags on the red power cables must align with the front of the front bezel. See Figure 6-28.
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.

Figure 6-29 PCI/PCI-X Card Location

PCI/PCI-X I/O cards can be removed and replaced using the SAM application (/usr/sbin/sam) or Partition Manager (/opt/parmgr/bin/parmgr).
2. If a second core I/O card set is installed, it must be installed in PCI-X I/O chassis 0, slot 8. This slot is reserved for the second core I/O LAN/SCSI card.

CAUTION: When a LAN/SCSI PCI card is added to an HP Integrity rx7640 server as part of a core I/O set, do not connect an external SCSI device to port B of the LAN/SCSI PCI card. Data corruption will result on each of the connected SCSI devices.
7. Exit SAM.

Option ROM

To allow faster booting, system firmware does not auto-scan PCI devices with an option ROM. To boot from a PCI-connected device with an option ROM, add it to the table of boot devices as follows:

1. Install the I/O card into the chassis.
2. Boot the server to the EFI Shell.
3. Execute the EFI search command.
   To add a single card: search
   To add all cards: search all
4. Execute the following EFI command: map -r
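The steps above can be sketched as an EFI Shell session; the device name fs0 is hypothetical and depends on what the search and map commands discover on your system:

```shell
Shell> search all   # add all cards with option ROMs to the boot device table
Shell> map -r       # rebuild the device mapping table
Shell> fs0:         # hypothetical: select the newly mapped device
```

This is a firmware-console transcript, not a script; it cannot be run from an operating system shell.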
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.
2. Grasp the fan with thumb and forefinger while depressing the locking tab.

NOTE: The two right-side fans, as viewed from the front, are located very close to the chassis. It might be necessary to use a tool, such as a flat-blade screwdriver, to assist in removing them.

3. Slide the fan upward from the chassis.

Replacing a PCI Smart Fan Assembly

1. Carefully position the fan assembly in the chassis. The fan slides into the chassis easily; use slow, firm pressure to properly seat the connection.
Table 6-5 PCI-X Power Supply LEDs

LED   | Driven By   | State     | Description
Power | Each supply | On Green  | All output voltages generated by the power supply are within limits.
      |             | Off       | Power to the entire system has been removed.
Fault | Each supply | Flash Red | Power supply has shut down due to an over-temperature condition, a failure to regulate the power within expected limits, or a current-limit condition.
      |             | Off       | Normal operation.

Removing a PCI-X Power Supply

Figure 6-33 PCI Power Supply Detail
CAUTION: Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD safety precautions could result in damage to the server.

Figure 6-34 BPS Location

IMPORTANT: When a BPS is pulled from the server and then immediately re-inserted, the server might report an overcurrent condition and shut down.

Removing a BPS

1. Remove the front bezel.
2. Press in on the extraction lever release mechanism and pull outward.
Figure 6-35 Extraction Levers

3. Slide the BPS forward using the extraction levers to remove it from the chassis.
CAUTION: Use caution when handling the BPS. A BPS weighs 18 lb (8.2 kg).

Replacing a BPS

1. Verify that the extraction levers are in the open position, then insert the BPS into the empty slot.
2. The BPS slides into the chassis easily; use slow, firm pressure to properly seat the connection.
3. Ensure the BPS is seated by closing the extraction levers.
4. Replace the front bezel.

NOTE: The BPS LED should be green, indicating that the BPS is operational and reporting no fault.
1. Connect to the server complex management processor and enter CM to access the Command menu.
   Use telnet to connect to the management processor, if possible.
   If a management processor is at its default configuration (including default network settings), connect to it using either of these methods:
   • Establish a direct serial cable connection through the management processor local RS-232 port.
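For example, a telnet connection to the management processor might look like the following (the IP address is a hypothetical placeholder for your MP's configured address):

```shell
telnet 192.0.2.10   # hypothetical management processor address
# ... log in with the MP login name and password ...
MP> CM              # enter the Command menu
```

This is a console transcript of an interactive session, not a runnable script.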
7 HP 9000 rp7440 Server

The following information describes material specific to the HP 9000 rp7440 server and the PA-8900 processor.
Table 7-2 Typical Server Configurations for the HP 9000 rp7440 Server (continued)

Cell Boards (qty) | Memory per Cell Board (GB) | PCI Cards, assumes 10 W each (qty) | DVDs (qty) | Hard Disk Drives (qty) | Core I/O (qty) | Bulk Power Supplies (qty) | Typical Power (W) | Typical Cooling (BTU/hr)
2 | 8 | 8 | 2 | 2 | 2 | 2 | 1871 | 6389
1 | 8 | 8 | 1 | 1 | 1 | 2 | 1237 | 4224

The air conditioning data is derived using the following equations:
• Watts x (0.860) = kcal/hour
• Watts x (3.414) = BTU/hour
• BTU/hour divided by 12,000 = tons of refrigeration required
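As a check on the cooling figures above, the conversions can be computed directly. This sketch uses the standard factors of 0.860 kcal/hr and approximately 3.414 BTU/hr per watt; the exact BTU factor used to produce the table is an assumption, since the equation list is abbreviated in this excerpt:

```shell
# Convert a typical power figure (watts) to its cooling equivalents.
watts=1871
awk -v w="$watts" 'BEGIN {
    # 1 W = 0.860 kcal/hr; 1 W is approximately 3.414 BTU/hr
    printf "%d W = %.0f kcal/hr = %.0f BTU/hr\n", w, w * 0.860, w * 3.414
}'
```

For the 2-cell configuration this yields about 1609 kcal/hr and 6388 BTU/hr, in line with the 6389 BTU/hr listed in Table 7-2 (the small difference comes from rounding in the published factor).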
HP 9000 Boot Configuration Options

On cell-based HP 9000 servers, the configurable system boot options include boot device paths (PRI, HAA, and ALT) and the autoboot setting for the nPartition. To set these options from HP-UX, use the setboot command. From the BCH system boot environment, use the PATH command at the BCH Main Menu to set boot device paths, and use the PATHFLAGS command at the BCH Configuration menu to set autoboot options.
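From HP-UX, the boot-path settings described above are managed with setboot; a minimal sketch follows (the hardware path is a hypothetical example, and setboot(1M) documents further options such as -h for the HAA path and -a for the alternate path):

```shell
setboot                   # display current boot paths and autoboot setting
setboot -p 0/0/2/0/0.13   # set the primary (PRI) boot path
setboot -b on             # enable autoboot for the nPartition
```

These commands are HP-UX specific and must be run as root within the nPartition.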
• BOOT LAN INSTALL or BOOT LAN.ip-address INSTALL
  The BOOT... INSTALL commands boot HP-UX from the default HP-UX install server or from the server specified by ip-address.

• BOOT path
  This command boots the device at the specified path. You can specify the path in HP-UX hardware path notation (for example, 0/0/2/0/0.13) or in path label format (for example, P0 or P1). If you specify the path in path label format, path refers to a device path reported by the last SEARCH command.
Main Menu: Enter command or menu > BOOT 0/0/2/0/0.13
BCH Directed Boot Path: 0/0/2/0/0.13
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> y
Initializing boot Device.
....
ISL Revision A.00.42 JUN 19, 1999
ISL>

3. From the ISL prompt, issue the appropriate Secondary System Loader (hpux) command to boot the HP-UX kernel in the desired mode. Use the hpux loader to specify the boot mode options and to specify which kernel to boot on the nPartition (for example, /stand/vmunix).
Procedure 7-3 LVM-Maintenance Mode HP-UX Booting (BCH Menu)

From the BCH Menu, you can boot HP-UX in LVM-maintenance mode by issuing the BOOT command, stopping at the ISL interface, and issuing hpux loader options. The BCH Menu is available only on HP 9000 servers.

1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in LVM-maintenance mode. Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
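At the ISL prompt, the hpux loader invocations for the boot modes covered by these procedures look like the following sketch (kernel path as used in the text; see the hpux(1M) manpage for the full option list):

```shell
ISL> hpux boot /stand/vmunix       # normal boot of the default kernel
ISL> hpux -is boot /stand/vmunix   # boot to single-user mode (init -s)
ISL> hpux -lm boot /stand/vmunix   # boot to LVM-maintenance mode
```

This is an ISL console transcript; it can only be entered at the ISL prompt on an HP 9000 server.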
2. Issue the shutdown command with the appropriate command-line options. The options you specify dictate the way in which HP-UX is shut down, whether the nPartition is rebooted, and whether any nPartition configuration changes take place (for example, adding or removing cells). Use the following list to choose an HP-UX shutdown option for your nPartition:
   • Shut down HP-UX and halt the nPartition.
   • Shut down HP-UX and reboot the nPartition.
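As root on HP-UX, the options above correspond to command lines such as the following sketch (see shutdown(1M) for the authoritative option list; the -R and -H options apply to cell-based servers):

```shell
shutdown -h -y 0      # shut down HP-UX and halt the nPartition
shutdown -r -y 0      # shut down HP-UX and reboot the nPartition
shutdown -R -y 0      # reboot for reconfiguration (apply cell changes)
shutdown -R -H -y 0   # shut down to the ready-for-reconfig, inactive state
```

These commands halt or reboot the system and are HP-UX specific; do not run them outside a planned maintenance window.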
A Replaceable Parts

This appendix contains the server CRU list. For the most current list of part numbers, see the HP PartSurfer website at http://www.partsurfer.hp.com.

Table A-1 Server CRU Descriptions and Part Numbers

CRU Description | Replacement P/N | Exchange P/N
… | 8120-6895 | None
Pwr Crd C19/IEC-309 L6-20 4.5m BLACK CA ASSY | 8120-6897 | None
Pwr Crd C19/L6-20 4.5m BLACK C | 8120-6903 | None
240V N.AMERICAN UPS 4.5M C19/L | 8120-8494 | None
Pwr Crd C19/GB 1002 4.
Table A-1 Server CRU Descriptions and Part Numbers (continued)

CRU Description | Replacement P/N | Exchange P/N
Nameplate, rx7640 | AB312-2108A | None
Box, DVD Filler (Carbon) | A6912-00014 | None
Assy, Bezel, No NamePlate (Graphite) | A7025-04001 | None
Assy, Front Panel Display Bezel | AB312-2102A | None
Snap, Bezel Attach | C2786-40002 | None
B MP Commands

This appendix contains a list of the server management commands.

Server Management Commands

Table B-1 lists the server management commands.
Table B-3 System and Access Config Commands (continued)

CP        Display partition cell assignments
DC        Reset parameters to default configuration
DI        Disconnect Remote or LAN console
ID        Change certain stable complex configuration profile fields
IF        Display network interface information
IT        Modify command interface inactivity time-out
LC        Configure LAN connections
LS        Display LAN connected console status
PARPERM   Enable/Disable interpartition security
PD        Modify default Partition for this login se
C Templates

This appendix contains blank floor plan grids and equipment templates. Combine the necessary number of floor plan grid sheets to create a scaled version of the computer room floor plan. Figure C-1 illustrates the overall dimensions required for the server.

Figure C-1 Server Space Requirements

Equipment Footprint Templates

Equipment footprint templates are drawn to the same scale as the floor plan grid (1/4 inch = 1 foot).
2. Cut and join them together (as necessary) to create a scale model floor plan of your computer room.
3. Remove a copy of each applicable equipment footprint template.
4. Cut out each template selected in step 3; then place it on the floor plan grid created in step 2.
5. Position pieces until the desired layout is obtained; then fasten the pieces to the grid. Mark locations of computer room doors, air-conditioning floor vents, utility outlets, and so on.
Figure C-3 Planning Grid

Computer Room Layout Plan
Figure C-4 Planning Grid
Index

A
access commands, 187
air ducts, 40
  illustrated, 41
AR, 187
ASIC, 19

B
backplane
  mass storage, 34, 35, 148
  PCI, 29, 34
  system, 23, 29, 34, 35, 39, 149
BO, 187
BPS (Bulk Power Supply), 76

C
CA, 187
cards
  core I/O, 132
CC, 187
cell board, 22, 23, 24, 35, 39, 75, 80, 83, 126
  verifying presence, 80
cell controller, 19
checklist
  installation, 85
cm (Command Menu) command, 81
co (Console) command, 83
command, 187
  co (Console), 83
  CTRL-B, 83
  di (Display), 84
  PE, 143
  scsi default, 143
  ser, 143
  T, 143
  vfp (
IT, 187

K
Keystone system
  air ducts, 40

L
LAN, 132
LC, 187
LED
  Attention, 76
  Bulk Power Supply, 76
  management processor, 23
  remote port, 23
  SP Active, 76
  Standby Power Good, 76
  traffic light, 23
login name
  MP, 77
LS, 187

M
MA, 187
management hardware, 132
Management Processor (MP), 75
management processor (MP), 132
mass storage backplane, 34, 35, 148
memory, 19
MP
  login name, 77
  password, 77
MP (Management Processor)
  logging in, 76
  powering on, 76
MP core I/O, 22, 23, 29, 34, 74, 75
MP/SCSI, 22, 23, 29,
TE, 187
turbocoolers, 19

U
update firmware, 136

V
verifying
  system configuration, 84

W
warranty, 43
web console, 132
WHO, 187
wrist strap, 142

X
XD, 187