Front cover. IBM System p5 520 and 520Q Technical Overview and Introduction. Finer system granularity using Micro-Partitioning technology to help lower TCO. Support for versions of AIX 5L and Linux operating systems. From Web servers to integrated cluster solutions. Giuliano Anselmi, Charlie Cler, Carlo Costantini, Bernard Filhol, SahngShin Kim, Gregor Linzmeier, Ondrej Plachy. ibm.com/redbooks
International Technical Support Organization IBM System p5 520 and 520Q Technical Overview and Introduction September 2006
Note: Before using this information and the product it supports, read the information in “Notices” on page vii. Second Edition (September 2006). This edition applies to the IBM System p5 520 and 520Q (product number 9131-52A), Linux, and IBM AIX 5L Version 5.3 (product number 5765-G03). © Copyright International Business Machines Corporation 2006. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Trademarks. The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Eserver®, Redbooks (logo)™, pSeries®, AIX 5L™, AIX®, Chipkill™, DS4000™, DS6000™, DS8000™, FICON®, HACMP™, IBM®, Micro-Partitioning™, OpenPower™, PowerPC®, POWER™, POWER Hypervisor™, POWER4™, POWER5™, POWER5+™, PTX®, Redbooks™, RS/6000®, Service Director™, System p™, System p5™, System Storage™, TotalStorage®, Virtualization Engine™, 1350™. The following terms are trademarks of other companies.
Preface This IBM Redpaper is a comprehensive guide that covers the IBM® System p5™ 520 and 520Q UNIX® servers. It introduces major hardware offerings and discusses their prominent functions. Professionals who want to acquire a better understanding of IBM System p™ products should read this document.
expertise include Mainframe Channel Subsystem, FICON®, and pSeries RAS. He has written extensively on FICON. SahngShin Kim is a sales specialist on the STG infra-solution sales team in Seoul, Korea. Before that, he was a sales specialist of IBM eServer pSeries for three years, of grid computing for two years, and of infra-solutions for one year. SahngShin has worked for IBM for six years, devoting himself to RS/6000 and pSeries systems and STG server products and as an architect for these products.
Use the online Contact us review form found at: ibm.com/redbooks
Send your comments in an e-mail to: redbook@us.ibm.com
Mail your comments to: IBM Corporation, International Technical Support Organization Dept.
1 Chapter 1. General description The IBM System p5 520 and IBM System p5 520Q rack-mount and deskside servers (9131-52A) give you new tools for managing on demand business, greater application flexibility, and innovative technology in 1-core, 2-core, and 4-core configurations — all designed to help you capitalize on the on demand business revolution. To simplify naming, both products are referred to as p5-520 or p5-520Q.
Three non-hot-swappable media bays are used to accommodate additional devices. Two media bays only accept slim-line media devices, such as DVD-ROM or DVD-RAM drives, and one half-height bay is used for a tape drive. The rack-mount model also has I/O extension capability using the RIO-2 bus that allows attachment of the 7311 Model D20 I/O drawers. For partitioning, we recommend an HMC. Dynamic LPAR is supported on the p5-520 and p5-520Q servers, allowing up to two logical partitions.
1.1 System specifications. Table 1-1 lists the general system specifications of the p5-520 and p5-520Q systems.
Table 1-1 IBM System p5 520 and IBM System p5 520Q specifications
Operating temperature: 5 to 35 degrees Celsius (41 to 95 degrees Fahrenheit)
Relative humidity: 8% to 80%
Operating voltage: 100 to 127 V ac or 200 to 240 V ac (auto-ranging)
Operating frequency: 47 to 63 Hz
Maximum power consumption: 750 watts
Maximum thermal output: 2560 BTU/hour
Figure 1-1 The deskside model (left: FC 7184; right: with acoustic cover, FC 7185). The p5-520 or p5-520Q, when configured as a deskside server, is ideal for environments that require local access to the machine, such as applications that require a native graphics display. To order a system as a deskside version, FC 7184 or FC 7185 is required. FC 7185 is designed for quiet operation in office environments. The system is designed to be set up by the client and, in most cases, does not require the use of any tools.
1.2.2 Rack-mount model. The IBM System p5 520 or IBM System p5 520Q can be configured as a 4U rack-mount model with the selected feature code. Table 1-3 lists the physical attributes and Figure 1-2 shows the system.
Table 1-3 Physical attributes of the rack-mount model (FC 7918)
Height: 178 mm (7.0 in.)
Width: 437 mm (17.2 in.)
Depth: 584 mm (23.0 in.)
Weight: 43.0 kg (95 lb.)
Shipping weight: 53.0 kg (117 lb.)
AIX 5L system tools for configuration management if the adapter is connected to a common maintenance console, such as the 7316-TF3 rack-mounted flat-panel display. 1.3 Minimum and optional features. The systems use a flexible, modular design based on POWER5+ processors. The server is available in 1-core, 2-core, and 4-core configurations featuring POWER5+ processors at 1.65 GHz (SCM and DCM), 1.9 GHz or 2.1 GHz (DCM), and 1.5 GHz or 1.65 GHz (QCM).
Table 1-4 Processor feature codes
8321: 1-core 1.65 GHz POWER5+ Processor Card, no L3 Cache
8323: 2-core 1.65 GHz POWER5+ Processor Card, 36 MB L3 Cache
8330: 2-core 1.9 GHz POWER5+ Processor Card, 36 MB L3 Cache
8315: 1-core 2.1 GHz POWER5+ Processor Card, 36 MB L3 Cache
8316: 2-core 2.1 GHz POWER5+ Processor Card, 36 MB L3 Cache
8333: 4-core 1.5 GHz POWER5+ Processor Card, 2 x 36 MB L3 Cache
8314: 4-core 1.65 GHz POWER5+ Processor Card, 2 x 36 MB L3 Cache
Table 1-6 Hot-swappable disk drive options
1968: 73.4 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
1969: 146.8 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
1970: 36.4 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1971: 73.4 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1972: 146.8 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
four attached 7311 Model D20 drawers, the combined system supports up to 34 PCI-X adapters (in a maximum configuration, remote I/O expansion cards are required) and 56 hot-swappable SCSI disks, for a total internal capacity of 16.8 TB using 300 GB disks. PCI-X and PCI cards are inserted from the top of the I/O drawer down into the slot from the drawer’s front service position.
Note: The 7311 Model D20 I/O drawer is designed to be installed by an IBM service representative. Only the 7311 Model D20 I/O drawer is supported on a p5-520 or p5-520Q system. 1.3.6 Hardware Management Console models A p5-520 or p5-520Q can be either HMC-managed or non-HMC-managed. In HMC-managed mode, an HMC is required as a dedicated workstation that allows you to configure and manage partitions.
1.4 Express Product Offerings The Express Product Offerings provide a convenient way to order any of several configurations that are designed to meet typical client requirements. Special reduced pricing is available when a system order satisfies specific configuration requirements for memory, disk drives, and processors. 1.4.1 Express Product Offerings requirements When you order an Express Product Offering, the configurator offers a choice of starting points onto which you can add.
Table 1-8 Express Product Offering standard set of feature codes (rack-mounted / deskside)
System bezel and hardware: 7190 x 1 / 7916 x 1
Rack-mount rail kit: 7160 x 1 / n/a
850 watt power supply: 5159 x 1 / 5159 x 1
IDE DVD-ROM: 1994 x 1 / 1994 x 1
Media backplane: 7877 x 1 / 7877 x 1
4-pack disk drive enclosure: 6574 x 1 / 6574 x 1
73.4 GB 10 K rpm disk drive (FC 1968)
If a server is to be installed in a non-IBM rack or cabinet, you must ensure that the rack conforms to the EIA standard EIA-310-D (see 1.5.9, “OEM rack” on page 21). Note: It is the client’s responsibility to ensure that the installation of the drawer in the preferred rack or cabinet results in a configuration that is stable, serviceable, safe, and compatible with the drawer requirements for power, cooling, cable management, weight, and rail security. 1.5.1 IBM 7014 Model T00 rack. The 1.8 meter (71-in.) Model T00 rack provides 36 EIA units of mounting space.
Optional Rear Door Heat eXchanger (FC 6858). Improved cooling from the heat exchanger enables the client to populate individual racks more densely, freeing valuable floor space without the need to purchase additional air-conditioning units.
Width: 605 mm (23.8 in.) with side panels Depth: 1001 mm (39.4 in.) with front door Height: 1344 mm (49.0 in.) Weight: 100.2 kg (221.0 lb.) The S25 rack has a maximum load limit of 22.7 kg (50 lb.) per EIA unit for a maximum loaded rack weight of 667 kg (1470 lb.). 1.5.5 S11 rack and S25 rack considerations The S11 and S25 racks do not have vertical mounting space that will accommodate FC 7188 PDUs.
Table 1-11 Models supported in S11 and S25 racks. All of the following machine type-models are supported in both the 7014-S11 and 7014-S25 racks:
7037-A50: IBM System p5 185
7031-D24/T24: EXP24 Disk Enclosure
7311-D20: I/O Expansion Drawer
9110-510: IBM System p5 510
9111-520: IBM System p5 520
9113-550: IBM System p5 550
9115-505: IBM System p5 505
9123-710: OpenPower 710
9124-720: OpenPower 720
9110-51A: IBM System p5 510 and 510Q
9131-52A: IBM System p5 520 and 520Q
The S11 and S25 racks support as many PDUs as there is available rack space. For detailed power cord requirements and power cord feature codes, see IBM System p5, IBM Eserver p5 and i5, and OpenPower Edition Planning, SA38-0508. For an online copy, select Map of pSeries books to the information center → Planning → Printable PDFs → Planning at the following Web site: http://publib.boulder.ibm.com/infocenter/eserver/v1r3s/index.
1.5.7 Rack-mounting rules The primary rules that you should follow when you mount the server into a rack are: The p5-520 or p5-520Q is designed to be placed at any location in the rack. For rack stability, we advise that you start filling a rack from the bottom. Any remaining space in the rack can be used to install other systems or peripherals, provided that the maximum permissible weight of the rack is not exceeded and the installation rules for these devices are followed.
reading and writing on Ultrium 2 tape cartridges, it is also read and write compatible with Ultrium 1 cartridges. The SLR60 Tape Drive (QIC format) provides 37.5 GB native physical capacity per tape cartridge and a native physical data transfer rate of up to 4 MB per second; using 2:1 compression, a single tape cartridge can hold up to 75 GB of data.
Occupies only 1U (1.75 in) in a 19-inch standard rack The 1x8 Console Switch can be mounted in one of the following racks: 7014-T00, 7014-T42, 7014-S11, or 7014-S25. The 1x8 Console Switch supports GXT135P (FC 1980 and FC 2849) graphics accelerators.
The switch offers four ports for server connections. Each port in the switch can connect a maximum of 16 systems: – One KCO cable (FC 4268) or USB cable (FC 4269) is required for every four systems supported on the switch. – A maximum of 16 KCO cables or USB cables per port can be used with the Netbay LCM Switch to connect up to 64 servers. Note: A server microcode update might be required on installed systems for boot-time System Management Services (SMS) menu support of the USB keyboards.
Figure 1-5 Top view of non-IBM rack specification dimensions. The vertical distance between the mounting holes must consist of sets of three holes spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.7 mm (0.5 in.) on center, making each three-hole set of vertical hole spacing 44.45 mm (1.75 in.) apart on center. Rail-mounting holes must be 7.1 ± 0.1 mm (0.28 ± 0.004 in.) in diameter.
Figure 1-7 Rack specification dimensions, bottom front view. It might be necessary to supply additional hardware, such as fasteners, for use in some manufacturers’ racks. The system rack or cabinet must be capable of supporting an average load of 15.9 kg (35 lb.) of product weight per EIA unit. The system rack or cabinet must be compatible with drawer mounting rails, including a secure and snug fit of the rail-mounting pins and screws into the rack or cabinet rail support hole.
2 Chapter 2. Architecture and technical overview. This chapter discusses the overall system architecture of the p5-520 and p5-520Q. Figure 2-1 details the base system hardware and the DCM or QCM options. (You cannot mix an installation of DCM and QCM options.) The bandwidths in this chapter are theoretical maximums that are provided for reference. We always recommend that you obtain real-world performance measurements using production workloads.
2.1 The POWER5+ processor. The IBM POWER5+ processor capitalizes on all the enhancements brought by the POWER5 processor. For a detailed description of the POWER5 processor, refer to IBM System p5 520 Technical Overview and Introduction, REDP-9111. Figure 2-2 shows a high-level view of the POWER5+ processor (two 2.1 GHz cores with the L3 interface, memory controller, and GX+ interface on the chip).
Double the SMP support. Changes have been made in the fabric, L2 and L3 controller, memory controller, GX controller, and processor RAS to provide support for the QCM that allows the SMP system sizes to be double that which is available in POWER5 DCM-based servers. However, current POWER5+ implementations only support single address loop. Several enhancements have been made in the memory controller for improved performance. The memory controller is ready to support DDR2 667 MHz DIMMs in the future.
Figure 2-4 shows the p5-520 POWER5+ 2.1 GHz SCM with the DDR2 memory socket layout: the single-core module connects to eight DIMM slots through two SMI-II chips over 2 x 8 B buses at 528 MHz, with a 36 MB L3 cache on a 2 x 16 B, 2:1 L3 bus, a 1.9 MB shared L2 cache, and a 1056 MHz memory interface of 2 x 8 B for read and 2 x 2 B for write. The storage structure for the POWER5+ processor is a distributed memory architecture that provides high-memory bandwidth.
sees a single shared memory resource. They are interfaced to eight memory slots, controlled by two SMI-II chips, which are located in close physical proximity to the processor modules. I/O connects to the p5-520 processor module using the GX+ bus. The processor module provides a single GX+ bus. The GX+ bus provides an interface to I/O devices through the RIO-2 connections. The theoretical maximum throughput of the L3 cache is 16 bytes read, 16 bytes write at a bus frequency of 1.05 GHz (based on a 2.1 GHz processor), which equates to 33.6 GBps.
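As a quick check of these figures: 16 bytes read plus 16 bytes write per cycle at 1.05 GHz gives (16 + 16) bytes x 1.05 GHz = 33.6 GBps, which matches the L2-to-L3 column for the 2.1 GHz processor in Table 2-3. The same arithmetic on the memory interface, (2 x 8 bytes read + 2 x 2 bytes write) x 1056 MHz = 21.1 GBps, reproduces the memory column.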
2.2.4 Available processor speeds. Table 2-1 lists the available processor capacities and speeds for the p5-520 and p5-520Q systems: the p5-520 is available at 1.65 GHz, 1.9 GHz, and 2.1 GHz, and the p5-520Q at 1.5 GHz and 1.65 GHz.
(Figure: DDR2 memory DIMM socket layout for the POWER5+ DCM (Dual-Core Module) and QCM (Quad-Core Module). The eight DIMM slots are organized as two quads, J0A through J0D and J2A through J2D, each quad driven through an SMI-II chip over 2 x 8 B buses at 528 MHz; the memory interface runs at 1056 MHz.)
To determine how much memory is installed in a system, use the following command: # lsattr -El sys0 | grep realmem realmem 524288 Amount of usable physical memory in Kbytes False Note: A quad must consist of a single feature (that is, be made of identical DIMMs). Mixed DIMM capacities in a quad will result in reduced RAS. 2.3.2 OEM memory OEM memory is not supported or certified by IBM for use in an IBM System p5 server.
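To see which memory features are physically installed, and to confirm that each quad is populated with identical DIMMs, you can list the vital product data with the AIX 5L lscfg command (a sketch; the exact record names and location codes vary by system):
# lscfg -vp | grep -p DIMM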
Table 2-3 Theoretical throughput rates (processor speed and cores: memory GBps, L2 to L3 GBps, GX+ GBps)
1.65 GHz POWER5+ 1-core: 21.1, 26.4, 4.4
1.65 GHz POWER5+ 2-core: 21.1, 26.4, 4.4
1.9 GHz POWER5+ 2-core: 21.1, 30.4, 5.1
2.1 GHz POWER5+ 1-core: 21.1, 33.6, 5.6
2.1 GHz POWER5+ 2-core: 21.1, 33.6, 5.6
1.5 GHz POWER5+ 4-core: 21.1, 48, 4
1.65 GHz POWER5+ 4-core: 21.1, 52.8, 4.4
Figure 2-9 shows the p5-520 or p5-520Q GX+ bus connection overview: the DCM or QCM on the processor card drives the GX+ bus to the enhanced I/O controller for the internal I/O subsystem and, through the RIO-2 card, to an external I/O subsystem of up to four 7311-D20 drawers. According to the processor speed, the I/O subsystem is capable of supporting 5.6 GBps when using the 2.1 GHz processor, or 4.4 GBps when using a 1.65 GHz processor. The bus is a dual four-byte wide bus running at a 3:1 processor-to-bus ratio.
2.6 64-bit and 32-bit adapters IBM offers 64-bit adapter options for the p5-520 and p5-520Q, as well as 32-bit adapters. Higher-speed adapters use 64-bit slots because they can transfer 64 bits of data for each data transfer phase. Generally, 32-bit adapters can function in 64-bit PCI-X slots; however, some 64-bit adapters cannot be used in 32-bit slots.
You also have the option to make the internal Ultra320 SCSI channel externally accessible on the rear side of the system by installing FC 4275. No additional SCSI adapter is required in this case. If FC 4275 is installed, a second 4-pack disk enclosure (FC 6574 or FC 6594) cannot be installed, which limits the maximum number of internal disks to four. Slot 5 cannot be used when FC 4275 is installed. For more information about the internal SCSI system, see 2.7, “Internal storage” on page 41. 2.6.
The iSCSI protocol is implemented on top of the physical and data-link layers and presents to the operating system a standard SCSI Access Method command set. It supports SCSI-3 commands and reliable delivery over IP networks. The iSCSI protocol runs on the host initiator and the receiving target device.
IBM iSCSI software Host Support Kit The iSCSI protocol can also be used over standard Gigabit Ethernet adapters. To utilize this approach, download the appropriate iSCSI Host Support Kit for your operating system from the IBM NAS support Web site at: http://www.ibm.com/storage/support/nas/ The iSCSI Host Support Kit on AIX 5L and Linux acts as a software iSCSI initiator and allows you to access iSCSI target storage devices using standard Gigabit Ethernet network adapters.
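On AIX 5L, the software initiator is configured by describing each target in the /etc/iscsi/targets file and then running cfgmgr against the iSCSI protocol device. A minimal sketch, assuming a hypothetical target at 192.168.1.10 with an example IQN (both placeholders, not values from this document):
# echo "192.168.1.10 3260 iqn.2006-09.com.example:storage.target0" >> /etc/iscsi/targets
# cfgmgr -l iscsi0
# lsdev -Cc disk
The discovered LUNs then appear as ordinary hdisk devices.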
Table 2-7 Available Fibre Channel adapters (feature code: description; slot; size; maximum)
1905: 4 Gigabit single-port Fibre Channel PCI-X 2.0 Adapter (LC); 64-bit; short; max 6
1910: 4 Gigabit dual-port Fibre Channel PCI-X 2.0 Adapter (LC); 64-bit; short; max 6
1977: 2 Gigabit Fibre Channel PCI-X Adapter (LC); 64-bit; short; max 6
2.6.6 Graphic accelerators. The p5-520 and p5-520Q support up to four enhanced POWER GXT135P (FC 1980) 2D graphic accelerators.
In many cases, the 5723 asynchronous adapter is configured to supply a backup HACMP heartbeat. In these cases, a serial cable (FC 3927 or FC 3928) must be also configured. Both of these serial cables and the 5723 adapter have 9-pin connectors. 2.6.9 PCI-X Cryptographic Coprocessor The PCI-X Cryptographic Coprocessor (FIPS 4) (FC 4764) for selected System p servers provides both cryptographic coprocessor and secure-key cryptographic accelerator functions in a single PCI-X card.
2.6.12 Ethernet ports. The two built-in Ethernet ports provide 10/100/1000 Mbps connectivity over CAT-5 cable for up to 100 meters. Table 2-9 lists the attributes of the LEDs that are visible on the side of the jack.
Table 2-9 Ethernet LED descriptions
Link LED, off: no link; could indicate a bad cable, a port that is not selected, or a configuration error. Link LED, green: connection established.
Activity LED, on: data activity. Activity LED, off: idle.
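From the operating system side, the link state and negotiated media speed can be checked with the AIX 5L entstat command (ent0 below is simply an example device name):
# entstat -d ent0 | grep -i -E "link status|media speed"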
assigns the name ses0 to the first 4-pack, and ses1 to the second, if present). The two hot-swappable 4-pack disk drive backplanes can accommodate the devices listed in Table 2-11.
Table 2-11 Available hot-swappable disk drives
1968: 73.4 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
1969: 146.8 GB ULTRA320 10 K rpm SCSI hot-swappable disk drive
1970: 36.4 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1971: 73.4 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
1972: 146.8 GB ULTRA320 15 K rpm SCSI hot-swappable disk drive
2.8 External I/O subsystem. This section describes the external I/O subsystem: the 7311 Model D20 I/O drawer, which is the only drawer supported on the p5-520 and p5-520Q systems. 2.8.1 I/O drawers. As described in Chapter 1, “General description” on page 1, the p5-520 or p5-520Q systems have six internal PCI-X slots, which is enough in many cases.
Connect the SCSI cable feature (FC 4257) to the SCSI adapter in the rightmost slot (7). If a SCSI card is also placed in slot 4, wire it to the 6-pack backplanes in the same manner. Figure 2-11 shows the 7311 Model D20 internal SCSI cabling for both cases. Note: Any 6-packs and the related SCSI adapter can be assigned to a partition. If one SCSI adapter is connected to both 6-packs, both 6-packs can be assigned only to the same partition.
Figure 2-12 shows the RIO-2 connections: the RIO-2 expansion card (FC 2888) connects the system unit to a loop of up to four I/O drawers. The RIO-2 cables used have different lengths to satisfy the different connection requirements:
Remote I/O cable, 1.2 m (FC 3146)
Remote I/O cable, 1.75 m (FC 3156)
Remote I/O cable, 2.5 m (FC 3168)
Remote I/O cable, 3.5 m (FC 3147)
Remote I/O cable, 10 m (FC 3148)
Figure 2-13 shows SPCN cabling examples: the SPCN loop runs from SPCN port 0 of the primary drawer through SPCN ports 0 and 1 of each I/O or secondary drawer and back to SPCN port 1 of the primary drawer. There are different SPCN cables to satisfy different length requirements:
SPCN cable drawer to drawer, 2 m (FC 6001)
SPCN cable drawer to drawer, 3 m (FC 6006)
SPCN cable rack to rack, 6 m (FC 6008)
36.4 GB, 73.4 GB, or 146.8 GB 15K rpm drives. Each of the four six-pack disk drive enclosures can be attached independently to an Ultra320 SCSI or Ultra320 SCSI RAID adapter. For high-availability configurations, a dual bus repeater card (FC 5742) allows each six-pack to be attached to two SCSI adapters, installed in one or multiple servers or logical partitions. Optionally, the two front or two rear six-packs can be connected together to form a single Ultra320 SCSI bus of 12 drives.
2.10 Logical partitioning Dynamic logical partitions (LPARs) and virtualization increase utilization of system resources and add a new level of configuration possibilities. This section provides details and configuration specifications about this topic. The virtualization discussion includes virtualization enabling technologies that are standard on the system, such as the POWER Hypervisor™, and optional ones, such as the Advanced POWER Virtualization feature. 2.10.
2.11.1 POWER Hypervisor Combined with features designed into the POWER5 and POWER5+ processors, the POWER Hypervisor delivers functions that enable other system technologies, including Micro-Partitioning technology, virtualized processors, IEEE VLAN, compatible virtual switch, virtual SCSI adapters, and virtual consoles. The POWER Hypervisor is a basic component of system firmware that is always active, regardless of the system configuration.
client adapter. Only the Virtual I/O Server partition can define virtual SCSI server adapters; other partitions are client partitions. The Virtual I/O Server is available with the optional Advanced POWER Virtualization feature (FC 7940). Virtual Ethernet. The POWER Hypervisor provides a virtual Ethernet switch function that allows partitions on the same server to use a fast and secure form of communication without any need for physical interconnection.
2.12 Advanced POWER Virtualization feature. The Advanced POWER Virtualization feature (FC 7940) is an optional, additional-cost feature. This feature enables the implementation of more fine-grained virtual partitions on IBM System p5 servers. The Advanced POWER Virtualization feature includes: Firmware enablement for Micro-Partitioning technology. Support for up to 10 partitions per processor using 1/100 of the processor granularity. Minimum CPU requirement per partition is 1/10 of a processor.
A partition can be defined with a processor capacity as small as 0.10 processing units. This represents one-tenth of a physical processor. Each physical processor can be shared by up to 10 shared processor partitions, and a partition’s entitlement can be incremented fractionally by as little as one-hundredth of the processor. The shared processor partitions are dispatched and time-sliced on the physical processors under control of the POWER Hypervisor.
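Within a running AIX 5L V5.3 partition, the entitlement, capped or uncapped mode, and number of virtual processors that result from these settings can be displayed with the lparstat command:
# lparstat -i
The output includes lines such as Entitled Capacity, Mode, and Online Virtual CPUs.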
Virtual processors do not introduce any additional abstraction level; they are really only a dispatch entity. When running on a physical processor, they run at the full speed of the physical processor. Each partition’s profile defines a CPU entitlement that determines how much processing power any given partition should receive. The total sum of CPU entitlement of all partitions cannot exceed the number of available physical processors in the shared processor pool.
(Figure: virtual processors (VP0 through VP4) being dispatched onto the physical processors of the shared pool.)
(Figure: POWER5 partitioning example. Dedicated-processor partitions running AIX 5L V5.3, AIX 5L V5.2, and Linux coexist with a Micro-Partitioning shared pool of 6 CPUs, in which AIX 5L V5.3 and Linux micro-partitions use virtual Ethernet adapters and virtual SCSI provided by a Virtual I/O Server backed by external storage.)
Shared Ethernet adapter A shared Ethernet adapter (SEA) is a Virtual I/O Server service that acts as a layer 2 network bridge between a physical Ethernet adapter or aggregation of physical adapters (EtherChannel) and one or more Virtual Ethernet adapters defined by the Hypervisor on the Virtual I/O Server. A SEA enables LPARs on the virtual Ethernet to share access to the physical Ethernet and communicate with stand-alone servers and LPARs on other systems.
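On the Virtual I/O Server command line, a SEA is typically created with the mkvdev command. A minimal sketch, assuming ent0 is the physical adapter and ent1 the virtual Ethernet adapter defined on the Hypervisor (both names illustrative):
$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1
This returns a new adapter (ent2, for example) that bridges the virtual Ethernet VLAN to the physical network.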
Important: We do not recommend using Mirrored Logical Volumes (LVs) at the Virtual I/O Server level as backing devices. If mirroring is required, two independent devices (possibly from two separate VIO servers) should be assigned to the client partition, and then the client partition should define mirroring on top of them. Virtual I/O Server version 1.3
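A minimal sketch of this recommendation, using hypothetical device names: each Virtual I/O Server exports a whole physical disk to the client, and the client then mirrors its volume group across the two resulting virtual disks.
On each Virtual I/O Server:
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0
On the AIX 5L client partition, after both virtual disks appear:
# extendvg rootvg hdisk1
# mirrorvg rootvg hdisk1
# bosboot -ad /dev/hdisk1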
management solution that inherits some HMC features, thus avoiding the necessity of a dedicated control workstation. This solution enables the administrator to reduce system setup time. IVM is targeted at small and medium systems. IVM supports up to the maximum 16-core configuration. The IVM provides a management model for a single system. Although it does not provide the full flexibility of an HMC, it enables the exploitation of the IBM Virtualization Engine™ technology.
IVM cannot be used by HACMP software to activate Capacity on Demand (CoD) resources on machines that support CoD. IVM provides advanced virtualization functionality without the need for an extra-cost workstation. For more information about IVM functionality and best practices, see Virtual I/O Server Integrated Virtualization Manager, REDP-4061 at this Web site: http://www.ibm.com/systems/p/hardware/meetp5/ivm.pdf Figure 2-16 shows how a system with IVM is organized.
Table 2-13 Operating system supported functions (Advanced POWER Virtualization feature: AIX 5L V5.2 / AIX 5L V5.3 / Linux SLES 9 / Linux RHEL AS 3 / Linux RHEL AS 4)
Micro-partitions (1/10th of processor): N / Y / Y / Y / Y
Virtual Storage: N / Y / Y / Y / Y
Virtual Ethernet: N / Y / Y / Y / Y
Partition Load Manager: Y / Y / N / N / N
Figure 2-17 shows the HMC to service processor and LPARs network connections: both HMCs attach to the service processor over the management LAN, while the partitions’ Ethernet interfaces connect to the open network. The default mechanism for allocation of the IP addresses for the service processor HMC ports is dynamic. The HMC can be configured as a DHCP server, providing the IP address at the time the managed server is powered on.
2.13.1 High availability using the HMC The HMC is an important hardware component. HACMP Version 5.3 High Availability cluster software can be used to activate resources automatically (where available), thus becoming an integral part of the cluster. For some environments, we recommend that you work with redundant HMCs.
unpartitioned system.
2.14 Operating system support The p5-520 and p5-520Q are capable of running the AIX 5L and Linux operating systems. The AIX 5L operating system has been developed and enhanced specifically to exploit and to support the extensive RAS features on IBM System p systems. 2.14.1 AIX 5L If you are installing AIX 5L on the server, the following minimum requirements must be met: AIX 5L for POWER V5.2 with the 5200-09 Technology Level (APAR IY82425), or later AIX 5L for POWER V5.
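To verify that an installed system meets these minimums, the technology level and a specific APAR can be checked from the AIX 5L command line:
# oslevel -r
# instfix -ik IY82425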
configuration changes are necessary to enable a system to use 64 KB pages; they are fully pageable, and the size of the pool of 64 KB page frames on a system is dynamic and fully managed by AIX 5L. The main benefit of a larger page size is improved performance for applications that allocate and repeatedly access large amounts of memory.
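Applications can often take advantage of 64 KB pages without recompilation by requesting them through the loader control environment variable when the program starts. A sketch (myapp is a placeholder program name):
# LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K myapp
Per-process page-size usage can then be observed with tools such as svmon.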
Many of the features that are described in this document are operating system dependent and might not be available on Linux. For more information, see: http://www.ibm.com/systems/p/software/whitepapers/linux_overview.pdf Note: IBM only supports the Linux systems of clients with a SupportLine contract that covers Linux. Otherwise, contact the Linux distributor for support.
2.15.1 Touch point colors. Blue (IBM blue) or terra-cotta (orange) on a component indicates a touch point (for electronic parts) where you can grip the hardware to remove it from or install it into the system, open or close a latch, and so on. IBM defines the touch point colors as follows: Blue: requires a shutdown of the system before the task can be performed, for example, installing additional processors contained in the second processor book.
Figure 2-20 Pull the server to the service position. 3. Release the rack latches (C) on both the left and right sides, as shown in Figure 2-20. 4. Review the following notes, and then slowly pull the system or expansion unit out from the rack until the rails are fully extended and locked: – If the procedure you are performing requires you to unplug cables from the back of the system or expansion unit, do so before you pull the unit out from the rack.
2.15.5 Operator control panel The service processor provides an interface to the control panel that is used to display server status and diagnostic information. See Figure 2-21 for operator control panel physical details and buttons. Figure 2-21 Operator control panel physical details and buttons Note: For servers managed by the HMC, use the HMC to perform control panel functions.
Function 30 – CEC SP IP address and location Function 30 is one of the Extended control panel functions and is only available when Manual mode is selected. You can use this function to display the central electronic complex (CEC) Service Processor IP address and location segment. Table 2-14 shows an example of how to use Function 30. Table 2-14 CEC SP IP address and location Information on operator panel Action or description 3 0 Use the increment or decrement buttons to scroll to Function 30.
The following examples are the output of the lsmcode command for AIX 5L and Linux, showing the firmware levels as they are displayed in the outputs: AIX 5L The current permanent system firmware image is SF220_005. The current temporary system firmware image is SF220_006. The system is currently booted from the temporary image. Linux system:SF220_006 (t) SF220_005 (p) SF220_006 (b) When you install a server firmware fix, it is installed on the temporary side.
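These levels can be displayed non-interactively with the lsmcode command (assuming the AIX 5L diagnostics are installed, as the note below explains):
# lsmcode -c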
Receive server firmware fixes using an HMC If you use an HMC to manage your server and you need to configure several partitions on the server periodically, you need to download and install fixes for your server and power subsystem firmware. How you get the fix depends on whether the HMC or server is connected to the Internet: The HMC or server is connected to the Internet. There are several repository locations from which you can download the fixes using the HMC.
Note: To view existing levels of server firmware using the lsmcode command, you need to have the following service tools installed on your server: AIX 5L You must have AIX 5L diagnostics installed on your server to perform this task. AIX 5L diagnostics are installed when you install AIX 5L on your server. However, it is possible to deselect the diagnostics. Therefore, you need to ensure that the online AIX 5L diagnostics are installed before proceeding with this task.
This interface is accessible using a Web browser on a client system that is connected directly to the service processor (in this case, you can use either a standard Ethernet cable or a crossed cable) or through an Ethernet network. Using the network configuration menu, the ASMI enables you to change the service processor IP addresses or to apply security policies and avoid access from undesired IP addresses or ranges.
3. Look for the power-on self-test (POST) indicators: memory, keyboard, network, SCSI, and speaker that appear across the bottom of the screen. Press the numeric 1 key after the word keyboard appears and before the word speaker appears. The SMS menus are useful to define the operating system installation method, choosing the installation boot device, or setting the boot device priority list for a fully managed server or a logical partition.
through a power-on reset. Open Firmware, which runs in addition to the Hypervisor in a partitioned environment, runs in two modes: global and partition. Each mode of Open Firmware shares the same firmware binary that is stored in the flash memory. In a partitioned environment, Open Firmware runs on top of the global Open Firmware instance. The partition Open Firmware is started when a partition is activated.
3 Chapter 3. RAS and manageability. This chapter provides information about IBM System p5 design features that help lower the total cost of ownership (TCO). IBM reliability, availability, and serviceability (RAS) technology allows you to improve your TCO architecture by reducing unplanned down time. This chapter includes several features based on the benefits that are available when you use AIX 5L. Support of these features using Linux can vary.
3.1 Reliability, availability, and serviceability. Excellent quality and reliability are inherent in all aspects of the IBM System p5 processor design and manufacturing. The fundamental objective of the design approach is to minimize outages. The RAS features help to ensure that the system operates when required, performs reliably, and efficiently handles any failures that might occur. This is achieved using capabilities that both the hardware and the AIX 5L operating system provide.
Figure 3-1 shows a schematic of the Fault Isolation Register implementation: error checkers throughout the CPU, the L1 and L2/L3 caches, and memory feed Fault Isolation Registers (a unique fingerprint of each error captured), which the service processor logs to non-volatile RAM and disk. The FIRs are important because they enable an error to be uniquely identified, thus enabling the appropriate action to be taken. Appropriate actions might include such things as a bus retry, ECC correction, or system firmware recovery routines.
The operating system cannot program or access the temperature threshold using the SP. EPOW events can, for example, trigger the following actions: Temperature monitoring, which increases the fan’s rotation speed when ambient temperature is above a preset operating range. Temperature monitoring warns the system administrator of potential environment-related problems. It also performs an orderly system shutdown when the operating temperature exceeds a critical level.
3.1.5 N+1 redundancy The use of redundant parts allows the p5-520 and p5-520Q to remain operational with full resources: Redundant spare memory bits in L1, L2, L3, and main memory Redundant fans Redundant power supplies (optional) Note: With this optional feature, every deskside or rack-mounted p5-520 or p5-520Q requires two power cords, which are not included in the base order.
You can check whether CPU Guard is enabled with:
# lsattr -El sys0 -a cpuguard
If the output shows CPU Guard as disabled, enter the following command to enable it:
# chdev -l sys0 -a cpuguard='enable'
Cache or cache-line deallocation is aimed at performing dynamic reconfiguration to bypass potentially failing components. This capability is provided for both L2 and L3 caches. Dynamic run-time deconfiguration is provided if a threshold of L1 or L2 recovered errors is exceeded.
internal LED diagnostics that identify the parts that require service. Identification of the error is provided through a series of light attention signals, starting on the exterior of the system with the System Attention LED, which is located on the front of the system, and ending with an LED near the failing Field Replaceable Unit. For more information about Client Replaceable Units, including videos, see: http://publib.boulder.ibm.
manage the machine-wide settings using the ASMI and allows complete system and partition management from the HMC. Also, the surveillance function of the SP monitors the operating system to check that it is still running and has not stalled. Note: The IBM System p5 service processor enables the analysis of a system that does not boot. The analysis can be performed either from the ASMI, an HMC, or an ASCII console (depending on the presence of an HMC). ASMI is provided in any case.
– Concurrent mode enables the normal system functions to continue while you are checking selected resources. Because the system is running in normal operation, some devices might require additional actions by the user or diagnostic application before testing can be done. – Maintenance mode enables checking of most system resources. Maintenance mode provides the exact same test coverage as Service Mode. The difference between the two modes is the way you invoke them.
Service Agent sends outbound transmissions only and does not allow any inbound connection attempts. Only hardware machine configuration, machine status, or error information is transmitted. Service Agent does not access or transmit any other data on the monitored systems.
You can download the latest version of Service Agent at: ftp://ftp.software.ibm.com/aix/service_agent_code Service Focal Point Traditional service strategies become more complicated in a partitioned environment. Each logical partition reports errors it detects, without determining if other logical partitions also detect and report the errors.
cannot be concurrently activated, are deferred. Figure 3-4 shows the system firmware file naming convention: 01SFXXX_YYY_ZZZ, where XXX is the firmware release level, YYY is the firmware service pack level, and ZZZ is the last disruptive firmware service pack level. An installation is disruptive if:
The release levels (XXX) of the currently installed and new firmware are different.
The service pack level (YYY) and the last disruptive service pack level (ZZZ) are equal in the new firmware.
cluster. If a server that is configured in partition mode (with physical or virtual resources) is part of the cluster, all partitions must be part of the cluster. Monitoring is much easier to use, and the system administrator can monitor all of the network interfaces, not just the switch and administrative interfaces. The management server pushes information out to the nodes, which releases the management server from having to trust the node.
Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this Redpaper. IBM Redbooks For information about ordering these publications, see “How to get IBM Redbooks” on page 93. Note that some of the documents that are referenced here might be available in softcopy only.
IBM Sserver Hardware Management Console for pSeries Installation and Operations Guide, SA38-0590, provides information to operators and system administrators about how to use a IBM Hardware Management Console for pSeries (HMC) to manage a system. It also discusses the issues associated with logical partitioning planning and implementation.
How to get IBM Redbooks You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site: ibm.com/redbooks Help from IBM IBM Support and downloads ibm.com/support IBM Global Services ibm.
Back cover ® IBM System p5 520 and 520Q Technical Overview and Introduction. Finer system granularity using Micro-Partitioning technology to help lower TCO. Support for versions of AIX 5L and Linux operating systems. From Web servers to integrated cluster solutions. This IBM Redpaper is a comprehensive guide that covers the IBM System p5 520 and 520Q UNIX servers. It introduces major hardware offerings and discusses their prominent functions.