Front cover IBM Power 770 and 780 Technical Overview and Introduction Features the 9117-MMC and 9179-MHC based on the latest POWER7 processor technology Describes MaxCore and TurboCore for redefining performance Discusses Active Memory Mirroring for Hypervisor Alexandre Bicas Caldeira Carlo Costantini Steve Harnett Volker Haug Craig Watson Fabien Willmann ibm.
International Technical Support Organization IBM Power 770 and 780 Technical Overview and Introduction December 2011 REDP-4798-00
Note: Before using this information and the product it supports, read the information in “Notices” on page vii. First Edition (December 2011) This edition applies to the IBM Power 770 (9117-MMC) and Power 780 (9179-MHC) Power Systems servers. © Copyright International Business Machines Corporation 2011. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices  vii
Trademarks  viii
Preface  ix
The team who wrote this paper
2.1.2 POWER7 processor core
2.1.3 Simultaneous multithreading
2.1.4 Memory access
2.1.5 Flexible POWER7 processor packaging and offerings
2.1.6 On-chip L3 cache innovation and Intelligent Cache
2.11 External disk subsystems  92
2.11.1 EXP 12S Expansion Drawer  93
2.11.2 EXP24S SFF Gen2-bay Drawer  95
2.11.3 TotalStorage EXP24 disk drawer and tower  98
2.11.4 IBM TotalStorage EXP24
4.3 Serviceability
4.3.1 Detecting
4.3.2 Diagnosing
4.3.3 Reporting
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Trademarks IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries.
Preface This IBM® Redpaper™ publication is a comprehensive guide covering the IBM Power® 770 (9117-MMC) and Power 780 (9179-MHC) servers supporting IBM AIX®, IBM i, and Linux operating systems. The goal of this paper is to introduce the major innovative Power 770 and 780 offerings and their prominent functions, including the following: The IBM POWER7® processor, available at frequencies of 3.3 GHz, 3.44 GHz, 3.72 GHz, and 3.92 GHz, and at 4.14 GHz in TurboCore mode
Business Partners on Power Systems hardware, AIX, and PowerVM virtualization products. He is also skilled on IBM System Storage®, IBM Tivoli® Storage Manager, IBM System x®, and VMware. Carlo Costantini is a Certified IT Specialist for IBM and has over 33 years of experience with IBM and IBM Business Partners. He currently works in Italy as Presales Field Technical Sales Support on Power Systems platforms for IBM Sales Representatives and IBM Business Partners.
Thanks to the following people for their contributions to this project: Larry Amy, Gary Anderson, Sue Beck, Terry Brennan, Pat Buckland, Paul D. Carey, Pete Heyrman, John Hilburn, Dan Hurlimann, Kevin Kehne, James Keniston, Jay Kruemcke, Robert Lowden, Hilary Melville, Thoi Nguyen, Denis C. Nizinski, Pat O’Rourke, Jan Palmer, Ed Prosser, Robb Romans, Audrey Romonosky, Todd Rosedahl, Melanie Steckham, Ken Trusits, Al Yanes IBM U.S.A.
Stay connected to IBM Redbooks Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.
1 Chapter 1. General description The IBM Power 770 (9117-MMC) and IBM Power 780 (9179-MHC) servers utilize the latest POWER7 processor technology designed to deliver unprecedented performance, scalability, reliability, and manageability for demanding commercial workloads. The innovative IBM Power 770 and Power 780 servers with POWER7 processors are symmetric multiprocessing (SMP), rack-mounted servers. These modular systems are built using one to four enclosures.
1.1 Systems overview You can find detailed information about the Power 770 and Power 780 systems within the following sections. 1.1.1 IBM Power 770 server Each Power 770 processor card features 64-bit architecture designed with two single-chip module (SCM) POWER7 processors. Each POWER7 SCM enables up to either six or eight active processor cores with 2 MB of L2 cache (256 KB per core), 24 MB of L3 cache (4 MB per core) for the 6-core SCM, and 32 MB of L3 cache (4 MB per core) for the 8-core SCM.
Figure 1-1 shows a Power 770 with the maximum four enclosures, and the front and rear views of a single-enclosure Power 770.
Figure 1-1 Four-enclosure Power 770, and front and rear views of a single-enclosure Power 770
1.1.2 IBM Power 780 server
Each Power 780 processor card comprises either two or four single-chip module (SCM) POWER7 processors, each designed with 64-bit architecture.
The Power 780 has two new integrated POWER7 I/O controllers that enhance I/O performance while supporting a maximum of six internal PCIe adapters and six internal small form-factor SAS DASD bays. The Power 780 includes Active Memory Mirroring (AMM) for Hypervisor as a standard feature. AMM guards against system-wide outages due to any uncorrectable error associated with firmware. Also available as an option is Active Memory Expansion, which enhances memory capacity.
Power 770 and Power 780 operating environment: noise levels
Noise level for one enclosure:
– Power 770 (one enclosure with 16 active cores): 7.1 bels (operating/idle); 6.6 bels (operating/idle) with acoustic rack doors
– Power 780 (one enclosure with 24 active cores): 7.1 bels (operating/idle); 6.6 bels (operating/idle) with acoustic rack doors
Noise level for four enclosures:
– Power 770 (four enclosures with 64 active cores): 7.6 bels (operating/idle)
Figure 1-2 shows the front and rear views of the Power 770 and Power 780. The rear of each enclosure provides the HMC ports, FSP connectors, GX++ bus slots, six PCIe slots, SPCN ports, power supplies, integrated ports, USB ports, and a serial port.
Figure 1-2 Front and rear views of the Power 770 and Power 780
1.4 System features
The Power 770 processor card features 64-bit architecture designed with two single-chip module (SCM) POWER7 processors.
One hot-plug, slim-line, SATA media bay per enclosure (optional)
Redundant hot-swap AC power supplies in each enclosure
Choice of Integrated Multifunction Card options; maximum one per enclosure:
– Dual 10 Gb Copper and Dual 1 Gb Ethernet (#1768)
– Dual 10 Gb Optical and Dual 1 Gb Ethernet (#1769)
One serial port included on each Integrated Multifunction Card
Two USB ports included on each Integrated Multifunction Card, plus another USB port on each enclosure (maximum nine usable per system)
Disk-only I/O drawers:
– Up to 56 EXP24S SFF SAS I/O drawers on external SAS controllers (#5887)
– Up to 110 EXP12S SAS DASD/SSD I/O drawers on SAS PCI controllers (#5886)
– Up to 60 EXP24 SCSI DASD Expansion drawers on SCSI PCI controllers (7031-D24)
IBM Systems Director Active Energy Manager™
The Power 770 operator interface controls, located on the front panel of the primary I/O drawer, consist of a power ON/OFF button with a POWER® indicator, an LCD display for diagnostic feedback, a RESET button, and a
Note the following additional considerations:
The Ethernet ports of the Integrated Multifunction Card cannot be used for an IBM i console. If desired, separate Ethernet adapters that can be directly controlled by IBM i without the Virtual I/O Server should be used for IBM i LAN consoles. Alternatively, an HMC can also be used for an IBM i console.
The first and second CEC enclosures must each contain one Integrated Multifunction Card (#1768 or #1769).
1.4.3 Minimum features
Each system has a minimum feature set in order to be valid. Table 1-3 shows the minimum system configuration for a Power 770.
Table 1-3 Minimum features for Power 770 system
1x CEC enclosure (4U)
1x primary operating system (one of these): AIX (#2146), Linux (#2147), or IBM i (#2145)
1x processor card (one of these): 0/12-core, 3.72 GHz processor card (#4983) or 0/16-core, 3.3 GHz processor card (#4984)
Power 770 minimum features Additional notes Note: Consider the following: A minimum number of four processor activations must be ordered per system. The minimum activations ordered with all initial orders of memory features #5600, #5601, and #5602 must be 50% of their installed capacity. The minimum activations ordered with MES orders of memory features #5600, #5601, and #5602 will depend on the total installed capacity of features #5600, #5601, and #5602.
Power 780 minimum features Additional notes 1x Removable Media Device (#5762) Optionally orderable, a standalone system (not network attached) requires this feature. 1x HMC Required for every Power 780 (9179-MHC) Note the following considerations: A minimum number of four processor activations must be ordered per system. The minimum activations ordered with all initial orders of memory features #5600, #5601, and #5602 must be 50% of their installed capacity.
The processor card houses the two or four POWER7 SCMs and the system memory. The Power 780 processor card offers the following features: Feature #5003 offers two 8-core POWER7 SCMs with 32 MB of L3 cache at 3.92 GHz (16 cores per processor card are activated in MaxCore mode, and each core has 4 MB of L3 cache). Feature #5003 also offers two 8-core POWER7 SCMs with 32 MB of L3 cache at 4.14 GHz (eight cores per processor card are activated in TurboCore mode, and each core is able to use 8 MB of L3 cache).
Figure 1-4 shows the top view of the Power 780 system having four SCMs installed. The four POWER7 SCMs and the system memory reside on a single processor card feature.
Figure 1-4 Top view of a Power 780 system with four SCMs
In standard or MaxCore mode, the Power 780 system uses all processor cores running at 3.
1.4.6 Summary of processor features Table 1-5 summarizes the processor feature codes for the Power 770. Table 1-5 Summary of processor features for the Power 770 Feature code Description OS support #4983 0/12-core 3.72 GHz POWER7 processor card: 12-core 3.72 GHz POWER7 CUoD processor planar containing two six-core processors. Each processor has 2 MB of L2 cache (256 KB per core) and 24 MB of L3 cache (4 MB per core).
Feature code Description OS support #4984 0/16-core 3.3 GHz POWER7 processor card: 16-core 3.3 GHz POWER7 CUoD processor planar containing two eight-core processors. Each processor has 2 MB of L2 cache (256 KB per core) and 32 MB of L3 cache (4 MB per core). There are 16 DDR3 DIMM slots on the processor planar (8 DIMM slots per processor), which can be used as Capacity on Demand (CoD) memory without activating the processors. The voltage regulators are included in this feature code.
Table 1-6 summarizes the processor feature codes for the Power 780. Table 1-6 Summary of processor features for the Power 780 Feature code Description OS support #5003 0/16 core 3.92 GHz / 4.14 GHz POWER7 Turbocore processor card: This feature has two modes. Standard mode utilizes all 16 cores at 3.92 GHz and TurboCore mode utilizes eight cores at 4.14 GHz. This feature is a POWER7 CUoD processor planar containing two 8-core processors.
Feature code Description OS support #EP24 0/24 core 3.44 GHz POWER7 processor card: 24-core 3.44 GHz POWER7 CUoD processor planar containing four 6-core processors. Each processor has 2 MB of L2 cache (256 KB per core) and 24 MB of L3 cache (4 MB per core). There are 16 DDR3 DIMM slots on the processor planar (four DIMM slots per processor), which can be used as CoD memory without activating the processors. The voltage regulators are included in this feature code.
1.4.7 Memory features In POWER7 systems, DDR3 memory is used throughout. The POWER7 DDR3 memory uses a memory architecture to provide greater bandwidth and capacity. This enables operating at a higher data rate for large memory configurations. All processor cards have 16 memory DIMM slots (eight per processor) running at speeds up to 1066 MHz and must be populated with POWER7 DDR3 Memory DIMMs. Figure 1-5 outlines the general connectivity of an 8-core POWER7 processor and DDR3 memory DIMMS.
Note: DDR2 DIMMs (used in POWER6®-based systems) are not supported in POWER7-based systems. The Power 770 and Power 780 have memory features in 32 GB, 64 GB, 128 GB, and 256 GB capacities. Table 1-7 summarizes the capacities of the memory features and highlights other characteristics.
Feature code Activation capacity Additional information OS support #7377 N/A On/Off, 1 GB-1Day, Memory Billing POWER7: After the ON/OFF Memory function is enabled in a system you must report the client’s on/off usage to IBM on a monthly basis. This information is used to compute IBM billing data. One #7377 feature must be ordered for each billable day for each 1 GB increment of POWER7 memory that was used. AIX IBM i Linux Note that inactive memory must be available in the system for temporary use.
Feature code / Description / OS support
#1790 / 600 GB 10 K RPM SAS SFF Disk Drive / AIX, Linux
#1964 / 600 GB 10 K RPM SAS SFF-2 Disk Drive / AIX, Linux
#1947 / 139 GB 15 K RPM SAS SFF-2 Disk Drive / IBM i
#1888 / 139 GB 15 K RPM SFF SAS Disk Drive / IBM i
#1996 / 177 GB SSD Module with eMLC / IBM i
#1787 / 177 GB SFF-1 SSD with eMLC / IBM i
#1794 / 177 GB SFF-2 SSD with eMLC / IBM i
#1956 / 283 GB 10 K RPM SAS SFF-2 Disk Drive / IBM i
#1911 / 283 GB 10 K RPM SFF SAS Disk Drive / IBM i
#1879 / 283 GB 15 K RPM SAS SFF
The Power 770 and Power 780 support both 2.5-inch and 3.5-inch SAS SFF hard disks. The 3.5-inch DASD hard disk can be attached to the Power 770 and Power 780 but must be located in a feature #5886 EXP12S I/O drawer, whereas 2.5-inch DASD hard files can be either mounted internally or in the EXP24S SFF Gen2-bay Drawer (#5887). If you need more disks than available with the internal disk bays, you can attach additional external disk subsystems.
The I/O drawer has the following attributes:
A 4U (EIA units) rack-mount enclosure (#7314) holding one or two #5796 drawers
Six PCI-X DDR slots: 64-bit, 3.3 V, 266 MHz (blind-swap)
Redundant hot-swappable power and cooling units
1.6.2 12X I/O Drawer PCIe (#5802 and #5877)
The #5802 and #5877 expansion units are 19-inch, rack-mountable, I/O expansion drawers that are designed to be attached to the system using 12X double data rate (DDR) cables.
Table 1-11 summarizes the maximum number of I/O drawers supported and the total number of PCI slots available when expansion consists of a single drawer type.
Table 1-13 summarizes the processor core options and frequencies and matches them to the L3 cache sizes for the Power 770 and Power 780.
Table 1-13 Summary of processor core counts, core frequencies, and L3 cache sizes
System / Cores per POWER7 SCM / Frequency (GHz) / L3 cache / Enclosure summation / System maximum (cores)
Power 770 / 6 / 3.72 / 24 MB / 12 cores and 48 MB L3 cache / 48
Power 770 / 8 / 3.30 / 32 MB / 16 cores and 64 MB L3 cache / 64
Power 780 / 6 / 3.44 / 24 MB / 24 cores and 96 MB L3 cache / 96
used to obtain a new part must be returned to IBM also. Clients can keep and reuse any features from the CEC enclosures that were not involved in a feature conversion transaction.
Upgrade considerations
Feature conversions have been set up for:
POWER6 and POWER6+ processors to POWER7 processors
DDR2 memory DIMMs to DDR3 memory DIMMs
Trim kits (a new trim kit is needed when upgrading to a 2-door, 3-door, or 4-door system)
the new configuration report in a quantity that equals feature #8018. Additional #7942 features can be ordered during the upgrade. 1.11 Hardware Management Console models The Hardware Management Console (HMC) is required for managing the IBM Power 770 and Power 780.
1.12 System racks The Power 770 and its I/O drawers are designed to be mounted in the 7014-T00, 7014-T42, 7014-B42, 7014-S25, #0551, #0553, or #0555 rack. The Power 780 and I/O drawers can be ordered only with the 7014-T00 and 7014-T42 racks. These are built to the 19-inch EIA standard. An existing 7014-T00, 7014-B42, 7014-S25, 7014-T42, #0551, #0553, or #0555 rack can be used for the Power 770 and Power 780 if sufficient space and power are available. The 36U (1.8-meter) rack (#0551) and the 42U (2.
1.12.2 IBM 7014 model T42 rack The 2.0-meter (79.3-inch) Model T42 addresses the client requirement for a tall enclosure to house the maximum amount of equipment in the smallest possible floor space. The features that differ in the model T42 rack from the model T00 include: It has 42U (EIA units) of usable space (6U of additional space). The model T42 supports AC only.
1.12.7 The AC power distribution unit and rack content For rack models T00 and T42, 12-outlet PDUs are available. These include PDUs Universal UTG0247 Connector (#9188 and #7188) and Intelligent PDU+ Universal UTG0247 Connector (#7109). Four PDUs can be mounted vertically in the back of the T00 and T42 racks. Figure 1-6 shows the placement of the four vertically mounted PDUs. In the rear of the rack, two additional PDUs can be installed horizontally in the T00 rack and three in the T42 rack.
Note: Ensure that the appropriate power cord feature is configured to support the power being supplied. The Base/Side Mount Universal PDU (#9188) and the optional, additional, Universal PDU (#7188) and the Intelligent PDU+ options (#7109) support a wide range of country requirements and electrical power specifications. The PDU receives power through a UTG0247 power line connector. Each PDU requires one PDU-to-wall power cord.
The design of the Power 770 and Power 780 is optimized for use in a 7014-T00, -T42, -B42, -S25, #0551, or #0553 rack. Both the front cover and the processor flex cables occupy space on the front left side of an IBM 7014, #0551, and #0553 rack that might not be available in typical non-IBM racks. Acoustic door features are available with the 7014-T00, 7014-B42, 7014-T42, #0551, and #0553 racks to meet the lower acoustic levels identified in the specification section of this document.
The following optional drive technologies are available for the 7216-1U2:
DAT160 80 GB SAS Tape Drive (#5619)
DAT320 160 GB SAS Tape Drive (#1402)
DAT320 160 GB USB Tape Drive (#5673)
Half-high LTO Ultrium 5 1.5 TB SAS Tape Drive (#8247)
DVD-RAM - 9.4 GB SAS Slim Optical Drive (#1420 and #1422)
RDX Removable Disk Drive Docking Station (#1103)
Note: The DAT320 160 GB SAS Tape Drive (#1402) and the DAT320 160 GB USB Tape Drive (#5673) are no longer available as of July 15, 2011.
Figure 1-7 shows the 7216 Multi-Media Enclosure. Figure 1-7 7216 Multi-Media Enclosure In general, the 7216-1U2 is supported by the AIX, IBM i, and Linux operating system. However, the RDX Removable Disk Drive Docking Station and the DAT320 USB Tape Drive are not supported with IBM i. Flat panel display options The IBM 7316 Model TF3 is a rack-mountable flat panel console kit consisting of a 17-inch 337.9 mm x 270.
2 Chapter 2. Architecture and technical overview The IBM Power 780 offers two versions of CEC enclosure. The first is a 2-socket CEC enclosure, populated with 8-core POWER7 processor cards. This architecture (Figure 2-1 on page 38) enables a maximum system configuration of 64 processor cores. The Power 780 also offers a 4-socket CEC enclosure, populated with 6-core POWER7 processor cards (Figure 2-2 on page 39), enabling a maximum system configuration of 96 cores.
Figure 2-1 shows the logical system diagram of the 2-socket Power 770 and Power 780. The figure shows each POWER7 chip with its eight buffered DDR3 DIMMs providing 136.448 GBps of memory bandwidth per socket, the P7-IOC I/O controllers, six PCIe Gen2 x8 (FH/HL) slots, the TPMD card, and the SMP connectors operating at 2.46 GHz (2 x 4 bytes), or 19.712 GBps.
Figure 2-2 shows the logical system diagram of the 4-socket Power 780. The figure shows the four POWER7 chips, each with its buffered DDR3 DIMMs (136.448 GBps per socket), the P7-IOC I/O controllers, the PCIe Gen2 x8 (FH/HL) slots, the TPMD card, the integrated 2 x 10 Gbps plus 2 x 1 Gbps Ethernet ports, and the SMP connectors operating at 2.46 GHz (2 x 4 bytes).
2.1 The IBM POWER7 processor The IBM POWER7 processor represents a leap forward in technology achievement and associated computing capability. The multi-core architecture of the POWER7 processor has been matched with innovation across a wide range of related technologies to deliver leading throughput, efficiency, scalability, and RAS. Although the processor is an important component in delivering outstanding servers, many elements and facilities have to be balanced on a server to deliver maximum throughput.
Figure 2-3 shows the POWER7 processor die layout with the major areas identified: processor cores, L2 cache, L3 cache and chip interconnection, symmetric multiprocessing (SMP) links, and memory controllers.
Figure 2-3 POWER7 processor die with key areas indicated
2.1.1 POWER7 processor overview
The POWER7 processor chip is fabricated using the IBM 45 nm Silicon-On-Insulator (SOI) technology with copper interconnect, and implements an on-chip L3 cache using eDRAM.
Table 2-1 summarizes the technology characteristics of the POWER7 processor.
Table 2-1 Summary of POWER7 processor technology
Die size: 567 mm2
Fabrication technology: 45 nm lithography, copper interconnect, Silicon-On-Insulator, eDRAM
Components: 1.2 billion components/transistors offering the equivalent function of 2.7 billion (for further details see 2.1.6, "On-chip L3 cache innovation and Intelligent Cache" on page 46)
2.1.3 Simultaneous multithreading An enhancement in the POWER7 processor is the addition of the SMT4 mode to enable four instruction threads to execute simultaneously in each POWER7 processor core.
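On AIX, the SMT mode of a running partition can be inspected and changed with the smtctl command. The following is a minimal sketch; the values shown are examples to adapt to your environment:

# Show the current SMT state and the number of threads per core
$ smtctl

# Switch the partition to SMT4 immediately (omit -w to also persist across reboots)
$ smtctl -t 4 -w now

# Turn SMT off for the current boot only
$ smtctl -m off -w now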
2.1.4 Memory access Each POWER7 processor chip has two DDR3 memory controllers, each with four memory channels (enabling eight memory channels per POWER7 processor). Each channel operates at 6.4 GHz and can address up to 32 GB of memory. Thus, each POWER7 processor chip is capable of addressing up to 256 GB of memory. Note: In certain POWER7 processor-based systems, one memory controller is active with four memory channels being used.
Note: TurboCore is available on the Power 780 and Power 795. MaxCore mode MaxCore mode is for workloads that benefit from a higher number of cores and threads handling multiple tasks simultaneously that take advantage of increased parallelism. MaxCore mode provides up to eight cores and up to 32 threads per POWER7 processor. POWER7 processor 4-core and 6-core offerings The base design for the POWER7 processor is an 8-core processor with 32 MB of on-chip L3 cache (4 MB per core).
2.1.6 On-chip L3 cache innovation and Intelligent Cache A breakthrough in material engineering and microprocessor fabrication has enabled IBM to implement the L3 cache in eDRAM and place it on the POWER7 processor die. L3 cache is critical to a balanced design, as is the ability to provide good signaling between the L3 cache and other elements of the hierarchy, such as the L2 cache or SMP interconnect. The on-chip L3 cache is organized into separate areas with differing latency characteristics.
No off-chip driver or receivers Removing drivers or receivers from the L3 access path lowers interface requirements, conserves energy, and lowers latency. Small physical footprint The performance of eDRAM when implemented on-chip is similar to conventional SRAM but requires far less physical space. IBM on-chip eDRAM uses only a third of the components used in conventional SRAM, which has a minimum of six transistors to implement a 1-bit memory cell.
b. For more information about sleep and nap modes, see 2.15.1, “IBM EnergyScale technology” on page 114. 2.2 POWER7 processor cards IBM Power 770 and Power 780 servers are modular systems built using one to four CEC enclosures. The processor and memory subsystem in each CEC enclosure is contained on a single processor card. The processor card contains either two or four processor sockets and 16 fully buffered DDR3 memory DIMMs.
Power 770 systems
IBM Power 770 systems support two POWER7 processor options of varying clock speed and core counts. Table 2-3 summarizes these options.
Table 2-3 Summary of POWER7 processor options for the Power 770 server
Cores per POWER7 processor / Frequency / L3 cache size available per POWER7 processor
6 / 3.72 GHz / 24 MB
8 / 3.3 GHz / 32 MB
With two POWER7 processors in each enclosure, systems can be equipped as follows:
MaxCore mode: 16 cores, 32 cores, 48 cores, or 64 cores
TurboCore mode: 8 cores, 16 cores, 24 cores, or 32 cores
2.2.2 Four-socket processor card
A 4-socket processor card is supported on the Power 780 (Figure 2-9), enabling a maximum system configuration of 96 cores (6-core processors). Each POWER7 processor is connected to four memory DIMMs through a single memory controller.
2.2.3 Processor comparison
The 2-socket and 4-socket processor cards available for the Power 780 utilize slightly different POWER7 processors. Table 2-6 shows a comparison.
Table 2-6 Comparison of processors used with 2-socket and 4-socket processor cards
Area / POWER7 processor used on 2-socket CPU card / POWER7 processor used on 4-socket CPU card
Technology: 45 nm / 45 nm
Die size: 567 mm2 / 567 mm2
Power: 250 W / 150 W
Cores: 8 / 6
Max frequency: 3.92 GHz (4.14 GHz with TurboCore) / 3.44 GHz
The POWER7 processor used on the 4-socket processor card also has two memory controllers, but only one is used. This results in four DIMMs per memory controller, the same as the processor used on the 2-socket processor card. Similarly, the processor used on the 4-socket CPU card has two GX++ buses, but only one is used (Figure 2-11). The figure shows the four DIMMs attached through the single active memory controller, the 8-byte SMP buses running at 2.464 Gb/s, and the single active GX bus (4 bytes at 2.46 Gb/s).
All the memory DIMMs for the Power 770 and Power 780 are Capacity Upgrade on Demand capable and must have a minimum of 50% of their physical capacity activated. For example, the minimum installed memory for both servers is 64 GB RAM, of which a minimum of 32 GB RAM must be active. Note: DDR2 memory (used in POWER6 processor-based systems) is not supported in POWER7 processor-based systems.
Figure 2-13 shows the physical memory DIMM topology for the Power 780 with four single-chip-modules (SCMs).
– Quad 2: J3A, J4A, J7A, J8A (mandatory minimum for each enclosure)
– Quad 3: J1B, J2B, J5B, J6B
– Quad 4: J3B, J4B, J7B, J8B
Table 2-7 shows the optimal placement of each DIMM-quad within a single enclosure system. Each enclosure must have at least the DIMM-quads installed in slots J1A, J2A, J5A, J6A and in slots J3A, J4A, J7A, J8A, as shown with the highlighted color.
Table 2-9 shows the optimal placement of each DIMM-quad within a three-enclosure system. Each enclosure must have at least two DIMM-quads installed.
Table 2-10 shows the optimal placement of each DIMM-quad within a four-enclosure system. Each enclosure must have at least two DIMM-quads installed.
2.3.3 Memory throughput
POWER7 has exceptional cache, memory, and interconnect bandwidths. Table 2-11 shows the bandwidth estimate for the Power 770 system running at 3.3 GHz.
Table 2-11 Power 770 memory bandwidth estimates for POWER7 cores running at 3.3 GHz
L1 (data) cache: 158.4 GBps
L2 cache: 158.4 GBps
L3 cache: 105.6 GBps
System memory (four enclosures, 136.44 GBps per socket): 1091.58 GBps
Inter-node buses (four enclosures): 158.02 GBps
Intra-node buses (four enclosures): 415.
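The cache figures scale directly with the 3.3 GHz core clock. The following worked arithmetic reproduces the table values; the per-cycle access widths (48 bytes for L1/L2, 32 bytes for L3) are inferred from the figures rather than stated in this section:

L1 (data) and L2 cache:  48 bytes per cycle x 3.3 GHz = 158.4 GBps per core
L3 cache:                32 bytes per cycle x 3.3 GHz = 105.6 GBps per core
System memory:           136.448 GBps per socket x 8 sockets (4 enclosures x 2 sockets) = 1091.58 GBps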
2.3.4 Active Memory Mirroring Power 770 and Power 780 servers have the ability to provide mirroring of the hypervisor code among different memory DIMMs. This feature will enhance the availability of a server and keep it operable in case a DIMM failure occurs in one of the DIMMs that hold the hypervisor code. The hypervisor code, which resides on the initial DIMMs (J1A to J8A), will be mirrored on the same group of DIMMs to allow for more usable memory.
It is possible to check whether the Memory Mirroring option is enabled and change its current status via HMC, under the Advanced Tab on the CEC Properties Panel (Figure 2-15). Figure 2-15 CEC Properties window on an HMC After a failure on one of the DIMMs containing hypervisor data occurs, all the server operations remain active and the service processor will isolate the failing DIMMs.
On the post-pay options, charges are based on usage reporting collected monthly. Processors and memory can be activated and turned off an unlimited number of times when additional processing resources are needed. This offering provides a system administrator an interface at the HMC to manage the activation and deactivation of resources. A monitor that resides on the server records the usage activity. This usage data must be sent to IBM on a monthly basis.
For more information regarding registration, enablement, and usage of On/Off CoD, visit: http://www.ibm.com/systems/power/hardware/cod 2.4.3 Utility Capacity on Demand (Utility CoD) Utility CoD automatically provides additional processor performance on a temporary basis within the Shared Processor Pool. Utility CoD enables you to place a quantity of inactive processors into the server's Shared Processor Pool, which then becomes available to the pool's resource manager.
2.4.5 Software licensing and CoD For software licensing considerations with the various CoD offerings, see the most recent revision of the Capacity on Demand User’s Guide at: http://www.ibm.com/systems/power/hardware/cod 2.5 CEC Enclosure interconnection cables IBM Power 770 or 780 systems can be configured with more than one system enclosure. The connection between the processor cards in the separate system enclosures requires a set of processor drawer interconnect cables.
The cables are also designed to allow the concurrent maintenance of the Power 770 or Power 780 in case the IBM service representative needs to extract a system enclosure from the rack. The design of the flexible cables allows each system enclosure to be disconnected without any impact on the other drawers. To allow such concurrent maintenance operation, plugging in the SMP Flex cables in the order of their numbering is extremely important. Each cable is numbered (Figure 2-16).
Processor type SMP cables Cable number From connector To connector #EP24 (0-24 core) #3715 1 U1-P3-T1 U2-P3-T1 #3716 4 U2-P3-T2 U3-P3-T2 2 U1-P3-T4 U2-P3-T4 7 U2-P3-T3 U3-P3-T3 3 U1-P3-T2 U3-P3-T1 6 U1-P3-T3 U3-P3-T4 #3717 Table 2-16 reports the SMP cable usage for the four-enclosure scenario.
Similarly, the Flexible Service Processor (FSP) flex cables must be installed in the correct order (Figure 2-17), as follows: 1. Install a second node flex cable from node 1 to node 2. 2. Add a third node flex cable from node 1 and node 2 to node 3. 3. Add a fourth node flex cable from node 1 and node 2 to node 4. Figure 2-17 FSP flex cables The design of the Power 770 and Power 780 is optimized for use in an IBM 7014-T00 or 7014-T42 rack.
The total width of the server, with cables installed, is 21 inches (Figure 2-18). The figure shows the SMP cables routed between drawers (for example, drawer 1 A-left to drawer 4 A-left, drawer 1 B-right to drawer 2 B-right, drawer 2 B-left to drawer 3 B-left, drawer 2 A-right to drawer 4 B-right, drawer 1 B-left to drawer 3 A-left, and drawer 3 A-right to drawer 4 A-right).
Figure 2-18 Front view of the rack with SMP cables overlapping the rack rails
In the rear of the rack, the FSP cables require only a small amount of room on the left side of the rack (Figure 2-19). The figure shows the two-drawer, three-drawer, and four-drawer cables.
Figure 2-19 Rear view of rack with detail of FSP flex cables
2.6 System bus
This section provides additional information related to the internal buses.
Table 2-17 shows the I/O bandwidth for the available processor cards.
Table 2-17 I/O bandwidth
Processor card / Slot description / Frequency / Device / Bandwidth (maximum theoretical)
#4983, #4984, or #5003 / CPU Socket 0 (CP0) GX bus 1 / 2.464 GHz / P7IOC-B / 19.712 GBps
#4983, #4984, or #5003 / CPU Socket 0 (CP0) GX bus 0 / 2.464 GHz / P7IOC-A / 19.712 GBps
#4983, #4984, or #5003 / CPU Socket 1 (CP1) GX bus 1 / 2.464 GHz / GX slot 2 / 19.712 GBps
#4983, #4984, or #5003 / CPU Socket 1 (CP1) GX bus 0 / 2.464 GHz / GX slot 1 / 19.712 GBps
Single enclosure / 78.
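The GX++ bus figure follows from the bus width and clock shown in the logical system diagrams (2 x 4 bytes at 2.46 GHz), and the per-enclosure figure is the sum over the four GX buses listed above:

GX++ bus bandwidth:   2.464 GHz x (2 x 4 bytes) = 19.712 GBps
Single enclosure:     4 x 19.712 GBps = 78.848 GBps (maximum theoretical)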
Table 2-18 lists the slot configuration of the Power 770 and Power 780. Table 2-18 Slot configuration of the Power 770 and 780 Slot number Description Location code PCI host bridge (PHB) Max.
2.8 PCI adapters This section covers the different types and functionalities of the PCI cards supported by IBM Power 770 and Power 780 systems. 2.8.1 PCIe Gen1 and Gen2 Peripheral Component Interconnect Express (PCIe) uses a serial interface and allows for point-to-point interconnections between devices (using a directly wired interface between these connection points).
If you are installing a new feature, ensure that you have the software required to support the new feature, and determine whether there are any existing PTF prerequisites to install. See the IBM Prerequisite website for information: https://www-912.ibm.com/e_dir/eServerPreReq.nsf 2.8.4 PCIe adapter form factors IBM POWER7 processor-based servers are able to support two form factors of PCIe adapters: PCIe low profile (LP) cards, which are used with the Power 710 and Power 730 PCIe slots.
Table 2-19 is a list of low-profile adapter cards and their equivalent in full height.
Other LAN adapters are supported in the CEC enclosure PCIe slots or in I/O enclosures that are attached to the system using a 12X technology loop. Table 2-20 lists the additional LAN adapters that are available.
2.8.6 Graphics accelerator adapters The IBM Power 770 and Power 780 support up to eight graphics adapters (Table 2-21). They can be configured to operate in either 8-bit or 24-bit color modes. These adapters support both analog and digital monitors, and do not support hot-plug. The total number of graphics accelerator adapters in any one partition cannot exceed four.
Table 2-23 compares Parallel SCSI to SAS attributes.
2.8.9 Fibre Channel adapter The IBM Power 770 and Power 780 servers support direct or SAN connection to devices that use Fibre Channel adapters. Table 2-25 summarizes the available Fibre Channel adapters. All of these adapters except #5735 have LC connectors. If you attach a device or switch with an SC type fibre connector, an LC-SC 50 Micron Fiber Converter Cable (#2456) or an LC-SC 62.5 Micron Fiber Converter Cable (#2459) is required.
For more information about FCoE, read An Introduction to Fibre Channel over Ethernet, and Fibre Channel over Convergence Enhanced Ethernet, REDP-4493. IBM offers a 10 Gb FCoE PCIe Dual Port adapter (#5708). This is a high-performance 10 Gb dual port PCIe Converged Network Adapter (CNA) utilizing SR optics. Each port can provide Network Interface Card (NIC) traffic and Fibre Channel functions simultaneously. It is supported on AIX and Linux for FC and Ethernet. 2.8.
Table 2-27 lists the available asynchronous adapters.
Table 2-27 Available asynchronous adapters
Feature code / CCIN / Adapter description / Slot / Size / OS support
#2728 / 57D1 / 4-port USB PCIe adapter / PCIe / Short / A, L
#5785 / – / 4-Port Asynchronous EIA-232 PCIe adapter / PCIe / Short / A, L
#5289 / 2B42 / 2-Port Async EIA-232 PCIe adapter / PCIe / Short / A, L
2.9 Internal storage
Serial Attached SCSI (SAS) drives the Power 770 and Power 780 internal disk subsystem.
Note: These solid-state drives (SSD) or hard disk drive (HDD) configuration rules apply: You can mix SSD and HDD drives when configured as one set of six bays. If you want to have both SSDs and HDDs within a dual split configuration, you must use the same type of drive within each set of three. You cannot mix SSDs and HDDs within a subset of three bays. If you want to have both SSDs and HDDs within a triple split configuration, you must use the same type of drive within each set of two.
(2/2/2) without feature 5662. With feature #5662, they support dual controllers running one set of six bays. Figure 2-22 shows the internal SAS topology overview, including the integrated SAS adapters, the P7IOC controllers, the redriver and DVD, the optional batteries, the SAS port expanders, and the DASD bays.
SAS subsystem configuration / #5662 / External SAS components / SAS port cables / SAS cables / Notes
Dual storage IOA with internal disk / Yes / None / None / N/A / Internal SAS port cable (#1815) cannot be used with this or the HA RAID configuration.
Dual storage IOA with internal disk and external disk enclosure / Yes / Requires an external disk enclosure (#5886) / Internal SAS port (#1819) / SAS cable assembly for connecting to an external SAS drive enclosure (#3686 or #3687) / #3686 is a 1-meter cable.
2.9.2 Triple split backplane The triple split backplane mode offers three sets of two disk drives each. This mode requires the #1815 internal SAS cable, a SAS cable #3679, and a SAS controller, such as #5901. Figure 2-24 shows how the six disk bays are shared with the triple split backplane mode. The PCI adapter that drives two of the six disks can be located in the same Power 770 (or Power 780) CEC enclosure as the disk drives, in a different system enclosure, or even in an external I/O drawer.
The disk drives are required to be in RAID arrays. There are no separate SAS cables required to connect the two embedded SAS RAID adapters to each other. The connection is contained within the backplane. RAID 0, 10, 5, and 6 support up to six drives. Solid-state drives (SSD) and HDDs can be used, but can never be mixed in the same disk enclosure. To connect to the external storage, you need to connect to the #5886 disk drive enclosure. Figure 2-25 shows the topology of the RAID mode.
2.10 External I/O subsystems This section describes the external 12X I/O subsystems that can be attached to the Power 770 and Power 780, listed as follows: PCI-DDR 12X Expansion Drawer (#5796) 12X I/O Drawer PCIe, small form factor (SFF) disk (#5802) 12X I/O Drawer PCIe, No Disk (#5877) Table 2-29 provides an overview of all the supported I/O drawers.
Figure 2-26 shows the back view of the expansion unit, with the 12X ports 0 and 1 (P1-C7-T1 and P1-C7-T2), the SPCN ports 0 and 1 (P1-C8-T1 and P1-C8-T2), and the PCI-X slots P1-C1 through P1-C6.
Figure 2-26 PCI-X DDR 12X Expansion Drawer rear side
2.10.2 12X I/O Drawer PCIe
The 12X I/O Drawer PCIe is a 19-inch I/O and storage drawer.
Figure 2-27 shows the front view of the 12X I/O Drawer PCIe (#5802), with the service card, port cards, disk drives, and power cables.
Figure 2-27 Front view of the 12X I/O Drawer PCIe
Figure 2-28 shows the rear view of the 12X I/O Drawer PCIe (#5802), with the 10 PCIe slots, the X2 SAS connectors, the mode switch, the 12X connectors, and the SPCN connectors.
Figure 2-28 Rear view of the 12X I/O Drawer PCIe
2.10.3 Dividing SFF drive bays in 12X I/O drawer PCIe
Disk drive bays in the 12X I/O drawer PCIe can be configured as one, two, or four sets.
Figure 2-29 indicates the mode switch in the rear view of the #5802 I/O Drawer. For AIX and Linux, the SFF drive bays can be set to one set of 18 bays, two sets of 9 + 9 bays, or four sets of 5 + 4 + 5 + 4 bays; for IBM i, two sets of 9 + 9 bays are supported. The mode switch positions are 1, 2, and 4.
Figure 2-29 Disk bay partitioning in #5802 PCIe 12X I/O drawer
Each disk bay set can be attached to its own controller or adapter. The #5802 PCIe 12X I/O Drawer has four SAS connections to drive bays.
The location codes for the front and rear views of the #5802 I/O drawer are provided in Figure 2-30 and Figure 2-31.
Each disk bay set can be attached to its own controller or adapter. The feature #5802 PCIe 12X I/O Drawer has four SAS connections to drive bays. It connects to PCIe SAS adapters or controllers on the host systems. For detailed information about how to configure, see the IBM Power Systems Hardware Information Center: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp 2.10.
General rule for the 12X IO Drawer configuration To optimize performance and distribute workload, use as many multiple GX++ buses as possible. Figure 2-33 shows several examples of a 12X IO Drawer configuration.
3. From J16 (T2) of the final expansion unit, connect to the second CEC enclosure, SPCN 1 (T2). 4. To complete the cabling loop, connect SPCN 1 (T2) of the topmost (first) CEC enclosure to the SPCN 0 (T1) of the next (second) CEC. 5. Ensure that a complete loop exists from the topmost CEC enclosure, through all attached expansions and back to the next lower (second) CEC enclosure.
IBM 7031 TotalStorage EXP24 Ultra320 SCSI Expandable Storage Disk Enclosure (no longer orderable)
IBM System Storage
Table 2-29 on page 85 provides an overview of SAS external disk subsystems.
Table 2-34 I/O drawer capabilities
Drawer feature code / DASD / PCI slots / Requirements for a Power 770 and Power 780
#5886 / 12 x SAS disk drive bays / – / Any supported SAS adapter
#5887 / 24 x SAS disk drive bays / – / Any supported SAS adapter
2.11.1 EXP 12S Expansion Drawer
The EXP 12S (#5886) is an expansion drawer with twelve 3.5-inch SAS disk drive bays.
The following SAS X cables are available for usage with a PCIe2 1.8 GB Cache RAID SAS adapter (#5913): 3 meters (#3454) 6 meters (#3455) 10 meters (#3456) In all of these configurations, all 12 SAS bays are controlled by a single controller or a single pair of controllers. A second EXP 12S Expansion Drawer can be attached to another drawer by using two SAS EE cables, providing 24 SAS bays instead of 12 bays for the same SAS controller port. This configuration is called cascading.
2.11.2 EXP24S SFF Gen2-bay Drawer The EXP24S SFF Gen2-bay Drawer (#5887) is an expansion drawer supporting up to 24 hot-swap 2.5-inch SFF SAS HDDs on POWER6 or POWER7 servers in 2U of 19-inch rack space. The SFF bays of the EXP24S are different from the SFF bays of the POWER7 system units or 12X PCIe I/O Drawers (#5802, #5803). The EXP 24S uses Gen-2 or SFF-2 SAS drives that physically do not fit in the Gen-1 or SFF-1 bays of the POWER7 system unit or 12X PCIe I/O Drawers, or vice versa.
controllers is now running 30 SAS bays (six SFF bays in the system unit and twenty-four 2.5-inch bays in the drawer). The disk drawer is attached to the SAS port with a SAS YI cable. In this 30-bay configuration, all drives must be HDD. A second unit cannot be cascaded to a EXP24S SFF Gen2-bay Drawer attached in this way. The EXP24S SFF Gen2-bay Drawer can be ordered in one of three possible manufacturing-configured MODE settings (not customer set-up) of 1, 2 or 4 sets of disk bays.
Include the EXP24S SFF Gen2-bay Drawer no-charge specify codes with EXP24S orders to indicate to IBM Manufacturing the mode to which the drawer should be set and the adapter/controller/cable configuration that will be used. Table 2-36 lists the no-charge specify codes and the physical adapters/controllers/cables with their own chargeable feature numbers.
Note: IBM plans to offer a 15-meter, 3 Gb bandwidth SAS cable for the #5913 PCIe2 1.8 GB Cache RAID SAS Adapter when attaching the EXP24S Drawer (#5887) for large configurations where the 10 meter cable is a distance limitation. The EXP24S Drawer rails are fixed length and designed to fit Power Systems provided racks of 28 inches (711 mm) deep. EXP24S uses 2 EIA of space in a 19-inch wide rack. Other racks might have different depths, and these rails will not adjust.
Note: A new IBM 7031 TotalStorage EXP24 Ultra320 SCSI Expandable Storage Disk Enclosure cannot be ordered for the Power 770 and Power 780, and thus only existing 7031-D24 drawers or 7031-T24 towers can be moved to the Power 770 and 780 servers. AIX and Linux partitions are supported along with the usage of an IBM 7031 TotalStorage EXP24 Ultra320 SCSI Expandable Storage Disk Enclosure.
supporting a greater potential return on investment (ROI). For more information about Storwize V7000, see: http://www.ibm.com/systems/storage/disk/storwize_v7000/index.html IBM XIV Storage System IBM offers a mid-sized configuration of its self-optimizing, self-healing, resilient disk solution, the IBM XIV® Storage System, storage reinvented for a new era.
Type-model / Availability / Description
7042-CR4 / Withdrawn / IBM 7042 Model CR4 Rack-mounted Hardware Management Console
7042-CR5 / Withdrawn / IBM 7042 Model CR5 Rack-mounted Hardware Management Console
7042-CR6 / Available / IBM 7042 Model CR6 Rack-mounted Hardware Management Console
At the time of writing, the HMC must be running V7R7.4.0. It can also support up to 48 POWER7 systems.
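The running HMC level can be confirmed from the HMC command line before attaching a new server; lshmc is a standard HMC command, shown here only as a quick sketch:

# Display the HMC version, release, and build level
$ lshmc -V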
HMC Console management
The last group relates to the management of the HMC itself, its maintenance, security, and configuration, for example:
Guided set-up wizard
Electronic Service Agent set-up wizard
User Management
– User IDs
– Authorization levels
– Customizable authorization
Disconnect and reconnect
Network Security
– Remote operation enable and disable
– User-definable SSL certificates
Console logging
HMC Redundancy
Scheduled Operations
Back-up and Restore
Updates, Upgrades
For the HMC to communicate properly with the managed server, eth0 of the HMC must be connected to either the HMC1 or HMC2 ports of the managed server, although other network configurations are possible. You can attach a second HMC to HMC Port 2 of the server for redundancy (or vice versa). These must be addressed by two separate subnets. Figure 2-36 shows a simple network configuration to enable the connection from HMC to server and to enable Dynamic LPAR operations.
Note: The service processor is used to monitor and manage the system hardware resources and devices. The service processor offers two Ethernet 10/100 Mbps ports as connections. Note the following information: Both Ethernet ports are visible only to the service processor and can be used to attach the server to an HMC or to access the ASMI options from a client web browser using the HTTP server integrated into the service processor internal operating system.
Figure 2-37 shows one possible highly available HMC configuration managing two servers. These servers have only one CEC and therefore only one FSP. Each HMC is connected to one FSP port of all managed servers.
Figure 2-38 shows a redundant HMC and redundant service processor connectivity configuration.
Figure 2-39 describes the four possible Ethernet connectivity options between the HMC and service processors. Configuration #1 shows a single drawer and one HMC, where the HMC connects to the FSP card either through a hub or directly. Configuration #2 shows a single drawer and two HMCs, each connecting to the FSP card (the hub is again optional).
Tips: Note the following tips: When upgrading the code of a dual HMC configuration, a good practice is to disconnect one HMC to avoid having both HMCs connected to the same server but running different levels of code. If no profiles or partition changes take place during the upgrade, both HMCs can stay connected. If the HMCs are at different levels and a profile change is made from the HMC at level V7R7.4.
or later), and KVM (Red Hat Enterprise Linux (RHEL) 5.5). The virtual appliance is only supported for managing small-tier Power servers and Power Systems blades. Note: At the time of writing, the SDMC is not supported for the Power 770 (9117-MMC) and Power 780 (9179-MHC) models. IBM intends to enhance the IBM Systems Director Management Console (SDMC) to support the Power 770 (9117-MMC) and Power 780 (9179-MHC).
The IBM SDMC Virtual Appliance requires an IBM Systems Director Management Console V6.7.3 (5765-MCV).
Note: If you want to use the software appliance, you have to provide the hardware and virtualization environment. At a minimum, the following resources must be available to the virtual machine:
2.53 GHz Intel Xeon E5630, Quad Core processor
500 GB storage
8 GB memory
The following hypervisors are supported:
VMware (ESXi 4.0 or later)
KVM (RHEL 5.5)
IBM periodically releases maintenance packages (service packs or technology levels) for the AIX operating system. Information about these packages, downloading, and obtaining the CD-ROM is on the Fix Central website: http://www-933.ibm.com/support/fixcentral/ The Fix Central website also provides information about how to obtain the fixes shipping on CD-ROM.
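Before applying a maintenance package, the currently installed AIX technology level and service pack can be checked with standard AIX commands; the output shown is only an example:

# Show the highest complete technology level, service pack, and build date
$ oslevel -s
7100-01-02-1150

# List any interim fixes (efixes) installed on the system
$ emgr -l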
2.14.4 Linux operating system Linux is an open source operating system that runs on numerous platforms from embedded systems to mainframe computers. It provides a UNIX-like implementation across many computer architectures.
through to the POWER4+, POWER5, POWER5+, and POWER6 processors, and now including the new POWER7 processors. With the support of the latest POWER7 processor chip, IBM advances a more than 20-year investment in the XL compilers for POWER series and PowerPC® series architectures.
2.15 Energy management The Power 770 and 780 servers are designed with features to help clients become more energy efficient. The IBM Systems Director Active Energy Manager exploits EnergyScale technology, enabling advanced energy management features to dramatically and dynamically conserve power and further improve energy efficiency.
When a system is idle, the system firmware will lower the frequency and voltage to power energy saver mode values. When fully utilized, the maximum frequency will vary depending on whether the user favors power savings or system performance. If an administrator prefers energy savings and a system is fully utilized, the system is designed to reduce the maximum frequency to 95% of nominal values.
temperature and assumes a high-altitude environment. When a power savings setting is enforced (either Power Energy Saver Mode or Dynamic Power Saver Mode), fan speed will vary based on power consumption, ambient temperature, and altitude available. System altitude can be set in IBM Director Active Energy Manager. If no altitude is set, the system will assume a default value of 350 meters above sea level.
A new power savings mode, called inherit host setting, is available and is only applicable to partitions. When configured to use this setting, a partition will adopt the power savings mode of its hosting server. By default, all partitions with dedicated processing units, and the system processor pool, are set to the inherit host setting. On POWER7 processor-based systems, several EnergyScale technologies are embedded in the hardware and do not require an operating system or external management component.
3 Chapter 3. Virtualization As you look for ways to maximize the return on your IT infrastructure investments, consolidating workloads becomes an attractive proposition. IBM Power Systems combined with PowerVM technology are designed to help you consolidate and simplify your IT environment with the following key capabilities: Improve server utilization and share I/O resources to reduce total cost of ownership and make better use of IT assets.
3.1 POWER Hypervisor Combined with features designed into the POWER7 processors, the POWER Hypervisor delivers functions that enable other system technologies, including logical partitioning technology, virtualized processors, IEEE VLAN compatible virtual switch, virtual SCSI adapters, virtual Fibre Channel adapters, and virtual consoles.
Virtual SCSI The POWER Hypervisor provides a virtual SCSI mechanism for virtualization of storage devices. The storage virtualization is accomplished using two, paired adapters: A virtual SCSI server adapter A virtual SCSI client adapter A Virtual I/O Server partition or an IBM i partition can define virtual SCSI server adapters. Other partitions are client partitions. The Virtual I/O Server partition is a special logical partition, as described in 3.4.4, “Virtual I/O Server” on page 137.
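On the Virtual I/O Server itself, the mapping between a backing device and a client's virtual SCSI server adapter is typically created with the mkvdev command from the padmin restricted shell. The following sketch assumes a backing disk hdisk4 and a server adapter vhost0; the device names are examples only:

# Map hdisk4 to the vhost0 virtual SCSI server adapter
$ mkvdev -vdev hdisk4 -vadapter vhost0 -dev vtscsi0

# Verify the mapping seen by the client partition
$ lsmap -vadapter vhost0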
Virtual Fibre Channel A virtual Fibre Channel adapter is a virtual adapter that provides client logical partitions with a Fibre Channel connection to a storage area network through the Virtual I/O Server logical partition. The Virtual I/O Server logical partition provides the connection between the virtual Fibre Channel adapters on the Virtual I/O Server logical partition and the physical Fibre Channel adapters on the managed system.
On Power System servers, partitions can be configured to run in several modes, including: POWER6 compatibility mode This execution mode is compatible with Version 2.05 of the Power Instruction Set Architecture (ISA). For more information, visit the following address: http://www.power.org/resources/reading/PowerISA_V2.05.pdf POWER6+ compatibility mode This mode is similar to POWER6, with eight additional Storage Protection Keys.
Table 3-2 lists the differences between these modes.
sample work loads, showed excellent results for many workloads in terms of memory expansion per additional CPU utilized. Other test workloads had more modest results. Clients have much control over Active Memory Expansion usage. Each individual AIX partition can turn on or turn off Active Memory Expansion. Control parameters set the amount of expansion desired in each partition to help control the amount of CPU used by the Active Memory Expansion function.
To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4, allowing you to sample actual workloads and estimate how expandable the partition's memory is and how much CPU resource is needed. Any model Power System can run the planning tool. Figure 3-4 shows an example of the output returned by this planning tool. The tool outputs various real memory and CPU resource combinations to achieve the desired effective memory. It also recommends one particular combination.
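The planning tool is invoked from the AIX command line as amepat (the Active Memory Expansion Planning and Advisory Tool); a minimal sketch follows, where the duration argument is in minutes and the reporting options vary by AIX level:

# Monitor the running workload for 60 minutes and report the modeled
# expansion factors, memory gain, and estimated CPU cost
$ amepat 60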
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 8.00 GB
Expansion Factor   True Memory Modeled Size   Modeled Memory Gain
1.21               6.75 GB                    1.25 GB [ 19%]
1.31               6.25 GB                    1.75 GB [ 28%]
1.41               5.75 GB                    2.25 GB [ 39%]
1.51               5.50 GB                    2.50 GB [ 45%]
1.61               5.00 GB                    3.00 GB [ 60%]
After you select the value of the memory expansion factor that you want to achieve, you can use this value to configure the partition from the managed console (Figure 3-5).
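On an HMC-managed system, the expansion factor is normally set in the partition profile. The following command-line sketch is illustrative only; the managed system, partition, and profile names are placeholders, and the mem_expansion attribute name is an assumption to verify against your HMC level:

# Set an Active Memory Expansion factor of 1.41 in an existing profile
$ chsyscfg -r prof -m p780 -i "name=default,lpar_name=lpar1,mem_expansion=1.41"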
From the HMC, you can view whether the Active Memory Expansion feature has been activated (Figure 3-6). Figure 3-6 Server capabilities listed from the HMC Note: If you want to move an LPAR using Active Memory Expansion to a different system using Live Partition Mobility, the target system must support AME (the target system must have AME activated with the software key).
3.4.1 PowerVM editions

This section provides information about the virtualization capabilities of PowerVM. The three editions of PowerVM are suited for various purposes, as follows:

PowerVM Express Edition
PowerVM Express Edition is designed for customers looking for an introduction to more advanced virtualization features at a highly affordable price, generally in single-server projects.
Micro-Partitioning

Micro-Partitioning technology allows you to allocate fractions of processors to a logical partition. This technology was introduced with POWER5 processor-based systems. A logical partition using fractions of processors is also known as a Shared Processor Partition or micro-partition.
The Power 780 allows up to 96 cores in a single system, supporting the following maximums:
Up to 96 dedicated partitions
Up to 960 micro-partitions (10 micro-partitions per physical active core)

An important point is that the maximums stated are supported by the hardware, but the practical limits depend on application workload demands.
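As an illustration of fractional processor allocation, the following hedged sketch uses the HMC chhwres command to dynamically add half a processing unit to a running micro-partition; the managed system and partition names are placeholders, and option support can vary by HMC level:

   # Dynamically add 0.5 processing units to the micro-partition prod_lpar1
   chhwres -r proc -m Server-9179-MHC-SN7654321 -p prod_lpar1 -o a --procunits 0.5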
Dedicated mode

In dedicated mode, physical processors are assigned as a whole to partitions. The simultaneous multithreading feature in the POWER7 processor core allows the core to execute instructions from two or four independent software threads simultaneously. To support this feature, the concept of logical processors is used: the operating system (AIX, IBM i, or Linux) sees one physical processor as two or four logical processors if the simultaneous multithreading feature is on.
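On AIX, the smtctl command reports and controls the simultaneous multithreading mode, and therefore how many logical processors the operating system sees per physical processor. A minimal sketch (the output details depend on the partition configuration):

   # Display the current SMT mode and the logical processors it provides
   smtctl

   # Request SMT4; with -w boot the change takes effect at the next boot
   smtctl -t 4 -w boot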
To implement MSPPs, there is a set of underlying techniques and technologies. Figure 3-8 shows an overview of the architecture of Multiple Shared Processor Pools: each Shared Processor Pool (SPP0, SPP1, and so on) contains its own set of micro-partitions, and unused capacity in a given pool is redistributed to the uncapped micro-partitions within that same pool.

Figure 3-8 Overview of the architecture of Multiple Shared Processor Pools
Default Shared Processor Pool (SPP0)

On any Power Systems server supporting Multiple Shared Processor Pools, a default Shared Processor Pool is always automatically defined. The default Shared Processor Pool has a pool identifier of zero (SPP-ID = 0) and can also be referred to as SPP0. The default Shared Processor Pool has the same attributes as a user-defined Shared Processor Pool except that these attributes are not directly under the control of the system administrator.
Figure 3-9 shows the levels of unused capacity redistribution implemented by the POWER Hypervisor.
Important: Level1 capacity resolution: When allocating additional processor capacity in excess of the entitled pool capacity of the Shared Processor Pool, the POWER Hypervisor takes the uncapped weights of all micro-partitions in the system into account, regardless of the Multiple Shared Processor Pool structure. Where there is unused processor capacity in under-utilized Shared Processor Pools, the micro-partitions within the Shared Processor Pools cede the capacity to the POWER Hypervisor.
Live Partition Mobility and Multiple Shared Processor Pools A micro-partition can leave a Shared Processor Pool because of PowerVM Live Partition Mobility. Similarly, a micro-partition can join a Shared Processor Pool in the same way. When performing PowerVM Live Partition Mobility, you are given the opportunity to designate a destination Shared Processor Pool on the target server to receive and host the migrating micro-partition.
The Virtual Fibre Channel adapter is used with the NPIV feature, described in 3.4.8, “N_Port ID virtualization” on page 147.

Shared Ethernet Adapter

A Shared Ethernet Adapter (SEA) can be used to connect a physical Ethernet network to a virtual Ethernet network. The Shared Ethernet Adapter provides this access by connecting the internal hypervisor VLANs with the VLANs on the external switches.
A single SEA setup can have up to 16 virtual Ethernet trunk adapters, and each virtual Ethernet trunk adapter can support up to 20 VLAN networks. Therefore, a single physical Ethernet adapter can be shared among up to 320 internal VLAN networks. The number of Shared Ethernet Adapters that can be set up in a Virtual I/O Server partition is limited only by resource availability, because there are no configuration limits.
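On the Virtual I/O Server, a Shared Ethernet Adapter is typically created with the mkvdev command, bridging a physical adapter to a virtual Ethernet trunk adapter. A minimal sketch, assuming ent0 is the physical adapter and ent4 is a virtual Ethernet trunk adapter whose default PVID is 1 (device names are placeholders):

   # Bridge the physical adapter ent0 to the virtual trunk adapter ent4
   mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1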
Figure 3-12 shows an example where one physical disk is divided into two logical volumes by the Virtual I/O Server. Each client partition is assigned one logical volume, which is then accessed through a virtual I/O adapter (VSCSI Client Adapter). Inside the partition, the disk is seen as a normal hdisk.
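In a configuration such as the one in Figure 3-12, each backing device is mapped on the Virtual I/O Server to a virtual SCSI server adapter with mkvdev. A minimal sketch, assuming a logical volume named lv_client1 and a virtual SCSI server adapter vhost0 (names are placeholders):

   # Export the logical volume lv_client1 through the server adapter vhost0
   mkvdev -vdev lv_client1 -vadapter vhost0 -dev vtscsi_client1

   # Verify the virtual SCSI mappings
   lsmap -vadapter vhost0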
Includes IBM Systems Director agent and a number of pre-installed Tivoli agents, such as:
– Tivoli Identity Manager (TIM), to allow easy integration into an existing Tivoli Systems Management infrastructure
– Tivoli Application Dependency Discovery Manager (ADDM), which creates and automatically maintains application infrastructure maps including dependencies, change histories, and deep configuration values
vSCSI eRAS
Additional CLI statistics in svmon, vmstat, fcstat, and topas
An N_Port ID virtualization (NPIV) device is considered virtual and is compatible with partition migration. The hypervisor must support the Partition Mobility functionality (also called the migration process) that is available on POWER6 and POWER7 processor-based hypervisors; firmware must be at level eFW3.2 or later. All POWER7 processor-based hypervisors support Live Partition Mobility. Source and destination systems can be at different firmware levels, but the levels must be compatible with each other.
migration to fail. During the migration, the managed console controls all phases of the process.

Improved Live Partition Mobility benefits

The ability to move partitions between POWER6 and POWER7 processor-based servers greatly facilitates the deployment of POWER7 processor-based servers, as follows: Installation of the new server can be performed while the application is executing on a POWER6 server.
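From the HMC command line, a migration is typically validated and then performed with the migrlpar command. A hedged sketch (the managed system and partition names are placeholders):

   # Validate that the partition can be moved to the destination server
   migrlpar -o v -m Server-9117-MMC-src -t Server-9179-MHC-dst -p prod_lpar1

   # Perform the active migration
   migrlpar -o m -m Server-9117-MMC-src -t Server-9179-MHC-dst -p prod_lpar1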
3.4.7 Active Memory Deduplication

In a virtualized environment, systems might have a considerable amount of duplicated information stored in RAM, because each partition has its own operating system and some partitions might even run the same kinds of applications. On heavily loaded systems this might lead to a shortage of available memory resources, forcing paging by the AMS partition operating systems, the AMS pool, or both, which might decrease overall system performance.
Figure 3-14 shows the behavior of a system with Active Memory Deduplication enabled on its AMS shared memory pool. Duplicated pages from different LPARs are stored just once, providing the AMS pool with more free memory.
Figure 3-15 shows two pages being written in the AMS memory pool and having their signatures matched on the deduplication table.
Figure 3-16 shows Active Memory Deduplication being enabled for a shared memory pool.

Figure 3-16 Enabling Active Memory Deduplication for a shared memory pool

The Active Memory Deduplication feature requires the following minimum components:
PowerVM Enterprise Edition
System firmware level 740
AIX Version 6: AIX 6.1 TL7 or later
AIX Version 7: AIX 7.1 TL1 SP1 or later
IBM i: 7.14 or 7.2 or later
SLES 11 SP2 or later
RHEL 6.2 or later
3.4.9 Operating system support for PowerVM

Table 3-5 summarizes the PowerVM features supported by the operating systems compatible with the POWER7 processor-based servers.

Table 3-5 PowerVM features supported by AIX, IBM i, and Linux
Feature                              AIX V5.3  AIX V6.1  AIX V7.1  IBM i 6.1.1  IBM i 7.1  RHEL V5.7  RHEL V6.1  SLES V10 SP4  SLES V11 SP1
Simultaneous Multi-Threading (SMT)   Yes(c)    Yes(d)    Yes       Yes(e)       Yes        Yes(c)     Yes(c)     Yes(c)        Yes
Active Memory Expansion              No        Yes(f)    Yes       No           No         No         No         No            No

a. Requires IBM i 7.1 TR1.
b. Will become a fully provisioned device when used by IBM i.
c. Only supports two threads.
d. AIX 6.1 up to TL4 SP2 only supports two threads, and supports four threads as of TL4 SP3.
e. IBM i 6.1.1 and up support SMT4.
Features        SLES 10 SP4  SLES 11  RHEL 5.7  RHEL 6.1  Comments
Tickless idle   No           Yes      No        Yes       Improved energy utilization and virtualization of partially to fully idle partitions

For information regarding Advance Toolchain, see the following website:
http://www.ibm.com/developerworks/wikis/display/hpccentral/How+to+use+Advance+Toolchain+for+Linux+on+POWER
Also see the University of Illinois Linux on Power Open Source Repository:
http://ppclinux.ncsa.illinois.edu
ftp://linuxpatch.ncsa.
You can use the SPT before you order a system to determine what you must order to support your workload. You can also use the SPT to determine how you can partition a system that you already have. Using the System Planning Tool is an effective way of documenting and backing up key system settings and partition definitions. It allows the user to create records of systems and export them to their personal workstation or backup system of choice.
Chapter 4. Continuous availability and manageability

This chapter provides information about IBM reliability, availability, and serviceability (RAS) design and features. This set of technologies, implemented on IBM Power Systems servers, helps improve your architecture’s total cost of ownership (TCO) by reducing unplanned downtime.
IBM is the only vendor that designs, manufactures, and integrates its most critical server components, including:
POWER processors
Caches
Memory buffers
Hub-controllers
Clock cards
Service processors

Design and manufacturing verification and integration, as well as field support information, is used as feedback for continued improvement on the final products. This chapter also includes a manageability section describing the means to successfully manage your systems.
4.1 Reliability

Highly reliable systems are built with highly reliable components. On IBM POWER processor-based systems, this basic principle is expanded upon with a clear design for reliability architecture and methodology. A concentrated, systematic, architecture-based approach is designed to improve overall system reliability with each successive generation of system offerings.

4.1.1 Designed for reliability

Systems designed with fewer components and interconnects have fewer opportunities to fail.
4.1.2 Placement of components

Packaging is designed to deliver both high performance and high reliability. For example, the reliability of electronic components is directly related to their thermal environment: large decreases in component reliability are directly correlated with relatively small increases in temperature. POWER processor-based systems are carefully packaged to ensure adequate cooling.
The POWER7 family of systems continues to introduce significant enhancements that are designed to increase system availability and ultimately a high availability objective with hardware components that are able to perform the following functions: Self-diagnose and self-correct during run time. Automatically reconfigure to mitigate potential problems from suspect hardware. Self-heal or automatically substitute good components for failing components.
Persistent deallocation To enhance system availability, a component that is identified for deallocation or deconfiguration on a POWER processor-based system is flagged for persistent deallocation. Component removal can occur either dynamically (while the system is running) or at boot time (IPL), depending both on the type of fault and when the fault is detected. In addition, runtime unrecoverable hardware faults can be deconfigured from the system after the first occurrence.
If there are no CoD processor cores available system-wide, total processor capacity is lowered below the licensed number of cores. Single processor checkstop As in POWER6, POWER7 provides single-processor check-stopping for certain processor logic, command, or control errors that cannot be handled by the availability enhancements in the preceding section.
CRC

The bus that transfers data between the processor and the memory uses CRC error detection with a failed-operation retry mechanism and the ability to dynamically retune bus parameters when a fault occurs. In addition, the memory bus has spare capacity to substitute a spare data bit-line for one that is determined to be faulty.

Chipkill

Chipkill is an enhancement that enables a system to sustain the failure of an entire DRAM chip.
Figure 4-3 shows a POWER7 chip, with its memory interface, consisting of two controllers and four DIMMs per controller. Advanced memory buffer chips are exclusive to IBM and help to increase performance, acting as read/write buffers. On the Power 770 and Power 780, the advanced memory buffer chips are integrated into the DIMM that they support.
Finally, if an uncorrectable error in memory is discovered, the logical memory block associated with the address with the uncorrectable error is marked for deallocation by the POWER Hypervisor. This deallocation takes effect on a partition reboot if the logical memory block is assigned to an active partition at the time of the fault. In addition, the system deallocates the entire memory group that is associated with the error on all subsequent system reboots until the memory is repaired.
Hypervisor data that is not mirrored:
– Advanced Memory Sharing (AMS) pool
– Memory used to hold the contents of a platform dump while waiting for offload to the management console
Partition data that is not mirrored:
– Desired memory configured for individual partitions is not mirrored.

To enable mirroring, the requirement is to have eight equally sized functional memory DIMMs behind at least one POWER7 chip in each CEC enclosure. The DIMMs are managed by the same memory controller.
Active Memory Mirroring can be enabled or disabled on the management console using the Advanced tab of the server properties (Figure 4-5).

Figure 4-5 Enabling or disabling active memory sharing

The system must be entirely powered off and then powered on to change from mirroring mode to non-mirrored mode.
Mirroring optimization

Hypervisor mirroring requires specific memory locations. Those locations might be assigned to other purposes (for LPAR memory, for example) because memory is managed on a logical memory block basis. To reclaim those memory locations, an optimization tool is available on the Advanced tab of the system properties (Figure 4-6).
Advanced memory mirroring features

On the Power 770 server, the Active Memory Mirroring for Hypervisor function is an optional chargeable feature, and it must be selected in econfig. On this server, Active Memory Mirroring is activated by entering an activation code (also called a Virtualization Technology Code, or VET) in the management console.
Sometimes an uncorrectable error is temporary in nature and occurs in data that can be recovered from another repository. For example: Data in the instruction L1 cache is never modified within the cache itself. Therefore, an uncorrectable error discovered in the cache is treated like an ordinary cache-miss, and correct data is loaded from the L2 cache. The L2 and L3 cache of the POWER7 processor-based systems can hold an unmodified copy of data in a portion of main memory.
The traditional means of handling these problems is through adapter internal-error reporting and recovery techniques, in combination with operating system device-driver management and diagnostics. In certain cases, an error in the adapter can cause transmission of bad data on the PCI bus itself, resulting in a hardware-detected parity error and causing a global machine check interrupt, eventually requiring a system reboot to continue.
4.3 Serviceability

IBM Power Systems design considers both IBM and client needs. The IBM Serviceability Team has enhanced the base service capabilities and continues to implement a strategy that incorporates best-of-breed service characteristics from diverse IBM systems offerings. Serviceability includes system installation, system upgrades and downgrades (MES), and system maintenance and repair.
problems so that the system administrator can take appropriate corrective actions before a critical failure threshold is reached. The service processor can also post a warning and initiate an orderly system shutdown in the following circumstances: – The operating temperature exceeds the critical level (for example, failure of air conditioning or air circulation around the system). – The system fan speed is out of operational specification (for example, because of multiple fan failures).
Note: The auto-restart (reboot) option must be enabled from the Advanced System Management Interface or from the Control (Operator) Panel. Figure 4-8 shows this option using the ASMI.
Concurrent access to the service processor menus of the Advanced System Management Interface (ASMI). This access provides the nondisruptive ability to change system default parameters, interrogate service processor progress and error logs, and set and reset server indicators (Guiding Light for midrange and high-end servers, Light Path for low-end servers), accessing all service processor functions without having to power down the system to the standby state.
Figure 4-9 shows a schematic of a fault isolation register implementation.
Boot time When an IBM Power Systems server powers up, the service processor initializes the system hardware. Boot-time diagnostic testing uses a multi-tier approach for system validation, starting with managed low-level diagnostics that are supplemented with system firmware initialization and configuration of I/O hardware, followed by OS-initiated software test routines.
result is stored in system NVRAM. Error log analysis (ELA) can be used to display the failure cause and the physical location of the failing hardware. With the integrated service processor, the system has the ability to automatically send out an alert through a phone line to a pager, or call for service in the event of a critical system failure. A hardware fault also illuminates the amber system fault LED located on the system unit to alert the user of an internal hardware problem.
When a local or globally reported service request is made to the operating system, the operating system diagnostic subsystem uses the Remote Management and Control Subsystem (RMC) to relay error information to the Hardware Management Console. For global events (platform unrecoverable errors, for example) the service processor will also forward error notification of these events to the Hardware Management Console, providing a redundant error-reporting path in case of errors in the RMC network.
Client Notify events are serviceable events, by definition, because they indicate that something has happened that requires client awareness in the event that the client wants to take further action. These events can always be reported back to IBM at the client’s discretion.
connection. Positive retention mechanisms such as latches, levers, thumb-screws, pop Nylatches (U-clips), and cables are included to help prevent loose connections and aid in installing (seating) parts correctly. These positive retention items do not require tools.

Light Path

The Light Path LED feature is for low-end systems, including Power Systems up to models 750 and 755, that can be repaired by clients.
Service labels Service providers use these labels to assist them in performing maintenance actions. Service labels are found in various formats and positions, and are intended to transmit readily available information to the servicer during the repair process. Several of these service labels and their purposes are described in the following list: Location diagrams are strategically located on the system hardware, relating information regarding the placement of hardware components.
Hot-node add, hot-node repair, and memory upgrade With the proper configuration and required protective measures, the Power 770 and Power 780 servers are designed for node add, node repair, or memory upgrade without powering down the system. The Power 770 and Power 780 servers support the addition of another CEC enclosure (node) to a system (hot-node add) or adding more memory (memory upgrade) to an existing node.
If the system is managed by a management console, you will use the management console for firmware updates. Using the management console allows you to take advantage of the Concurrent Firmware Maintenance (CFM) option when concurrent service packs are available. CFM is the IBM term used to describe the IBM Power Systems firmware updates that can be partially or wholly concurrent or nondisruptive.
Clients can subscribe through the subscription services to obtain notifications about the latest updates available for service-related documentation. The latest version of the documentation is accessible through the internet.

4.4 Manageability

Several functions and tools help manageability and enable you to efficiently and effectively manage your system.
With two or more CEC enclosures, there are two redundant FSPs, one in each of the first two CEC enclosures. While one is active, the second one is in standby mode; in case of a failure, there is an automatic takeover.

Note: The service processor enables a system that does not boot to be analyzed. The error log analysis can be performed from either the ASMI or the management console.

The service processor uses two 10/100 Mbps Ethernet ports.
You might be able to use the service processor’s default settings. In that case, accessing the ASMI is not necessary. To access the ASMI, use one of the following methods:
Access the ASMI by using a management console. If configured to do so, the management console connects directly to the ASMI for a selected system from this task. To connect to the Advanced System Management Interface from a management console, follow these steps:
a. Open Systems Management from the navigation pane.
b.
The operator panel can be accessed in two ways: By using the normal operational front view. By pulling it out to access the switches and viewing the LCD display. Figure 4-10 shows that the operator panel on a Power 770 and Power 780 is pulled out.
error log and the AIX configuration data. IBM i has a service tools problem log, IBM i history log (QHST), and IBM i problem log.

These are the modes:
Service mode: Requires a service mode boot of the system and enables the checking of system devices and features. Service mode provides the most complete checkout of the system resources. All system resources, except the SCSI adapter and the disk drives used for paging, can be tested.
Depending on the operating system, these are the service-level functions that you typically see when using the operating system service menus:
Product activity log
Trace Licensed Internal Code
Work with communications trace
Display/Alter/Dump
Licensed Internal Code log
Main storage dump manager
Hardware service manager
Call Home/Customer Notification
Error information menu
LED management menu
Concurrent/Non-concurrent maintenance (within scope of the OS)
Managing firmware levels
– Server
– Adapter
Remote
For access to the initial web pages that address this capability, see the Support for IBM Systems web page: http://www.ibm.com/systems/support For Power Systems, select the Power link (Figure 4-11). Figure 4-11 Support for Power servers web page Although the content under the Popular links section can change, click Firmware and HMC updates to go to the resources for keeping your system’s firmware current.
If there is a management console to manage the server, the management console interface can be used to view the levels of server firmware and power subsystem firmware that are installed and that are available to download and install.
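From the HMC command line, the firmware levels of a managed system can also be listed with the lslic command; this is a sketch only, the managed system name is a placeholder, and the fields reported vary by HMC release:

   # List the Licensed Internal Code (firmware) levels of the managed system
   lslic -m Server-9117-MMC-SN1234567 -t sys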
An installation is disruptive if either of the following conditions is true:
The release levels (SSS) of the currently installed and new firmware differ.
The service pack level (FFF) and the last disruptive service pack level (DDD) are equal in the new firmware.

Otherwise, an installation is concurrent if the service pack level (FFF) of the new firmware is higher than the service pack level currently installed on the system and the conditions for a disruptive installation are not met.
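For example, assume a hypothetical installed level of 01AM740_088_045 and a hypothetical new level of 01AM740_100_045 (using the SSS, FFF, and DDD fields described above, so release 740, service pack 100, last disruptive service pack 045). The release levels match, the new service pack level (100) is higher than the installed one (088), and the new service pack level differs from the last disruptive level (045), so the update can be applied concurrently. A hypothetical new level of 01AM760_010_010 would instead be disruptive: the release level changes, and its service pack level equals its last disruptive service pack level.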
Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this paper. IBM Redbooks The following IBM Redbooks publications provide additional information about the topic in this document. Note that some publications referenced in this list might be available in softcopy only.
IBM Power 710 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03048usen/POD03048USEN.PDF
IBM Power 720 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03048usen/POD03048USEN.PDF
IBM Power 730 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03050usen/POD03050USEN.PDF
IBM Power 740 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03051usen/POD03051USEN.PDF
IBM Power 750 server Data Sheet
http://public.dhe.ibm.
Support for IBM Systems website
http://www.ibm.com/support/entry/portal/Overview?brandind=Hardware~Systems~Power
IBM Power Systems website
http://www.ibm.com/systems/power/
IBM Storage website
http://www.ibm.com/systems/storage/

Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.
Back cover IBM Power 770 and 780 Technical Overview and Introduction Features the 9117-MMC and 9179-MHC based on the latest POWER7 processor technology Describes MaxCore and TurboCore for redefining performance Discusses Active Memory Mirroring for Hypervisor This IBM Redpaper publication is a comprehensive guide covering the IBM Power 770 (9117-MMC) and Power 780 (9179-MHC) servers supporting IBM AIX, IBM i, and Linux operating systems.