Front cover
IBM Flex System p260 and p460 Planning and Implementation Guide
Describes the new POWER7 compute nodes for IBM PureFlex System
Provides detailed product and planning information
Set up partitioning and OS installation
David Watts, Jose Martin Abeleira, Kerry Anders, Alberto Damigella, Bill Miller, William Powell
ibm.com/redbooks
International Technical Support Organization IBM Flex System p260 and p460 Planning and Implementation Guide June 2012 SG24-7989-00
Note: Before using this information and the product it supports, read the information in “Notices” on page ix.
First Edition (June 2012)
This edition applies to:
IBM PureFlex System
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System p260 Compute Node
IBM Flex System p24L Compute Node
IBM Flex System p460 Compute Node
© Copyright International Business Machines Corporation 2012. All rights reserved. Note to U.S.
Contents
Notices  ix
Trademarks  x
Preface  xi
The team who wrote this book
2.2.3 Top-of-Rack SAN switch  28
2.2.4 Compute nodes  28
2.2.5 IBM Flex System Manager  30
2.2.6 IBM Storwize V7000  30
2.2.7 Rack cabinet
4.5 IBM POWER7 processor  76
4.5.1 Processor options for Power Systems compute nodes  77
4.5.2 Unconfiguring  77
4.5.3 Architecture  79
4.6 Memory subsystem  87
5.4.2 SAN and Fibre Channel redundancy  133
5.5 Dual VIOS  135
5.5.1 Dual VIOS on Power Systems compute nodes  136
5.6 Power planning  138
5.6.1 Power Systems compute node power supply features  138
Chapter 7. Virtualization  275
7.1 PowerVM  276
7.1.1 Features  276
7.1.2 POWER Hypervisor  278
7.1.3 Preparing to use the IBM Flex System Manager for partitioning  284
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Trademarks IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries.
Preface To meet today’s complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources that is simple to deploy and can quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven preferred practices in systems management, applications, hardware maintenance, and more.
The team who wrote this book This book was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center. David Watts is a Consulting IT Specialist at the IBM ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks publications on hardware and software topics related to IBM Flex System, IBM System x®, and IBM BladeCenter® servers and associated client platforms. He has authored over 200 books, papers, and Product Guides.
Bill Miller is an IT Specialist in Lab Services Technical Training. He has been with IBM since 1983. He has had an array of responsibilities, starting in development, and then moving to roles as a Systems Engineer and an IBM Global Services consultant focusing on AIX, IBM Tivoli® Storage Manager, and IBM HACMP™ (IBM PowerHA®) planning and implementation. He is currently responsible for course development, maintenance, and delivery for the PowerHA and Flex System curriculums.
Mike Easterly Diana Cunniffe Kyle Hampton Botond Kiss Shekhar Mishra Justin Nguyen Sander Kim Dean Parker Hector Sanchez David Tareen David Walker Randi Wood Bob Zuber From IBM Power Systems development: Chris Austen Kaena Freitas Jim Gallagher Ned Gamble Bill Johnson Rick McBride Lenny Nichols Amartey Pearson Dean Price Mike Stys Richard Vasconi Others from IBM around the world Bill Champion Michael L.
Now you can become a published author, too! Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships.
Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.
1 Chapter 1. IBM PureSystems During the last 100 years, information technology moved from a specialized tool to a pervasive influence on nearly every aspect of life. From tabulating machines that counted with mechanical switches or vacuum tubes to the first programmable computers, IBM has been a part of this growth, while always helping customers solve problems. Information technology (IT) is a constant part of business and of our lives.
The offerings in IBM PureSystems™ are designed to deliver value in the following ways: Built-in expertise helps you address complex business and operational tasks automatically. Integration by design helps you tune systems for optimal performance and efficiency. Simplified experience, from design to purchase to maintenance, creates efficiencies quickly. The IBM PureSystems offerings are optimized for performance and virtualized for efficiency.
IBM PureFlex System recommends workload placement based on virtual machine compatibility and resource availability. Using built-in virtualization across servers, storage, and networking, the infrastructure system enables automated scaling of resources and true workload mobility. IBM PureFlex System undergoes significant testing and experimentation, so it can mitigate IT complexity without compromising the flexibility to tune systems to the tasks businesses demand.
Component | IBM PureFlex System Express | IBM PureFlex System Standard | IBM PureFlex System Enterprise
IBM Flex System FC3171 8Gb SAN Switch | 1 | 2 | 2
IBM Flex System Manager Node | 1 | 1 | 1
IBM Flex System Manager software license | IBM Flex System Manager with 1-year service and support | IBM Flex System Manager Advanced with 3-year service and support | Flex System Manager Advanced with 3-year service and support
Chassis Management Module | 2 | 2 | 2
Chassis power supplies (std/max) | 2/6 | 4/6 | 6/6
Chassis 80
With the IBM PureApplication System, you can provision your own patterns of software, middleware, and virtual system resources. You can provision these patterns within a unique framework that is shaped by IT preferred practices and industry standards that are culled from many years of IBM experience with clients and from a deep understanding of smarter computing. These IT preferred practices and standards are infused throughout the system.
Increased simplicity: You need a less complex environment. You can use patterns of expertise to help you easily consolidate diverse servers, storage, and applications onto an easier-to-manage, integrated system.
Control: With optimized patterns of expertise, you can accelerate cloud implementations to lower risk by improving security and reducing human error.
IBM PureApplication System is available in four configurations.
1.3.1 Management IBM Flex System Manager is designed to optimize the physical and virtual resources of the IBM Flex System infrastructure while simplifying and automating repetitive tasks. From easy system set-up procedures with wizards and built-in expertise, to consolidated monitoring for all of your resources (compute, storage, networking, virtualization, and energy), IBM Flex System Manager provides core management functionality along with automation.
IBM Flex System simplifies storage administration by using a single user interface for all your storage through a management console that is integrated with the comprehensive management system. You can use these management and storage capabilities to virtualize third-party storage with nondisruptive migration of the current storage infrastructure. You can also take advantage of intelligent tiering so you can balance performance and cost for your storage needs.
1.4 IBM Flex System overview The expert integrated system of IBM PureSystems is based on a new hardware and software platform called IBM Flex System. 1.4.1 IBM Flex System Manager The IBM Flex System Manager (FSM) is a high performance scalable systems management appliance with a preloaded software stack.
Figure 1-1 shows the IBM Flex System Manager. Figure 1-1 IBM Flex System Manager 1.4.2 IBM Flex System Enterprise Chassis The IBM Flex System Enterprise Chassis (Enterprise Chassis) offers compute, networking, and storage capabilities far exceeding products that are currently available in the market. With the ability to handle up to 14 compute nodes, intermixing POWER7 and Intel x86, the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10 U package.
Figure 1-2 shows the IBM Flex System Enterprise Chassis. Figure 1-2 The IBM Flex System Enterprise Chassis 1.4.3 Compute nodes IBM Flex System offers compute nodes that vary in architecture, dimension, and capabilities. The new, no-compromise nodes feature leadership designs for current and future workloads.
Figure 1-3 shows the IBM Flex System p460 Compute Node. Figure 1-3 IBM Flex System p460 Compute Node The nodes have complementary leadership I/O capabilities of up to 16 x 10 Gb lanes per node.
Here are the I/O Modules offered with IBM Flex System:
IBM Flex System Fabric EN4093 10Gb Scalable Switch
IBM Flex System EN2092 1Gb Ethernet Scalable Switch
IBM Flex System EN4091 10Gb Ethernet Pass-thru
IBM Flex System FC3171 8Gb SAN Switch
IBM Flex System FC3171 8Gb SAN Pass-thru
IBM Flex System FC5022 16Gb SAN Scalable Switch
IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
IBM Flex System IB6131 InfiniBand Switch
IBM Flex System IB6132 2-port QDR InfiniBand Adapter
2 Chapter 2. IBM PureFlex System IBM PureFlex System provides an integrated computing system that combines servers, enterprise storage, networking, virtualization, and management into a single structure. You can use its built-in expertise to manage and flexibly deploy integrated patterns of virtual and hardware resources through unified management.
IBM PureFlex System has three preintegrated offerings that support compute, storage, and networking requirements. You can select from these offerings, which are designed for key client initiatives and help simplify ordering and configuration. As a result, PureFlex System helps cut the cost, time, and complexity of system deployments. The IBM PureFlex System offerings are as follows: Express: An infrastructure system for small-sized and midsized businesses; the most cost-effective entry point. See 2.
2.1 IBM PureFlex System Express
The tables in this section represent the hardware, software, and services that make up IBM PureFlex System Express. We describe the following items:
Chassis
Top-of-Rack Ethernet switch
Top-of-Rack SAN switch
Compute nodes
IBM Flex System Manager
IBM Storwize V7000
Rack cabinet
Software
Services
To specify IBM PureFlex System Express in the IBM ordering system, specify the indicator feature code listed in Table 2-1 for each machine type.
AAS feature code | XCC feature code | Description | Minimum quantity
3595 | A0TD | IBM Flex System FC3171 8Gb SAN Switch | 1
3286 | 5075 | IBM 8Gb SFP+ Short-Wave Optical Transceiver | 2
3590 | A0UD | Additional PSU 2500 W | 0
4558 | 6252 | 2.5 m, 16A/100-240V, C19 to IEC 320-C20 power cord | 2
9039 | A0TM | Base Chassis Management Module | 1
3592 | A0UE | Additional Chassis Management Module | 1
9038 | None | Base Fan Modules (four) | 1
7805 | A0UA | Additional Fan Modules (two) | 0
2.1.3 Top-of-Rack SAN switch
If more than one chassis is configured, then a Top-of-Rack SAN switch is added to the configuration. If only one chassis is configured, then the SAN switch is optional. Table 2-4 lists the switch components.
Table 2-4 Components of the Top-of-Rack SAN switch
AAS feature code | XCC feature code | Description | Minimum quantity
2498-B24 | 2498-B24 | 24-port SAN Switch | 0
5605 | 5605 | 5m optic cable | 1
2808 | 2808 | 8 Gb SFP transceivers (8 pack) | 1
AAS feature code | Description | Minimum quantity
Memory - 8 GB per core minimum with all DIMM slots filled with same memory type
8145 | 32GB (2x 16GB), 1066MHz, LP RDIMMs (1.35V) |
8199 | 16GB (2x 8GB), 1066MHz, VLP RDIMMs (1.35V) |
Table 2-6 lists the major components of the IBM Flex System p24L Compute Node.
Table 2-7 lists the major components of the IBM Flex System x240 Compute Node.
2.1.6 IBM Storwize V7000
Table 2-9 lists the major components of the IBM Storwize V7000 storage server.
Table 2-9 Components of the IBM Storwize V7000 storage server
AAS feature code | XCC feature code | Description | Minimum quantity
2076-124 | 2076-124 | IBM Storwize V7000 Controller | 1
5305 | 5305 | 5m Fiber Optic Cable | 2
3512, 3514 | 3512, 3514 | 200GB 2.5 INCH SSD or 400GB 2.
a. Select one PDU line item from this list. These items are mutually exclusive. Most of them have a quantity of 2, except for the 16A PDU, which has a quantity of 4. The selection depends on the customer’s country and utility power requirements.
2.1.8 Software
This section lists the software features of IBM PureFlex System Express.
AIX and IBM i
Table 2-11 lists the software features included with the Express configuration on POWER7 processor-based compute nodes for AIX and IBM i.
AIX V6 | AIX V7 | IBM i V6.1 | IBM i V7.1
Red Hat Enterprise Linux (RHEL) | SUSE Linux Enterprise Server (SLES)
Virtualization | 5765-PVE PowerVM Enterprise
Intel Xeon based compute nodes
Table 2-13 lists the software features included with the Express configuration on Intel Xeon based compute nodes.
2.1.9 Services
IBM PureFlex System Express includes the following services:
Service & Support offerings:
– Software maintenance: 1 year 9x5 (9 hours per day, 5 days per week).
– Hardware maintenance: 3 years 9x5 Next Business Day service.
Maintenance and Technical Support (MTS) offerings:
– Three years with one microcode analysis per year.
Lab Services:
– Three days of on-site lab services
– If the first compute node is a p260 or p460, 6911-300 is specified.
2.2.1 Chassis Table 2-15 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options. Feature codes: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
Table 2-16 lists the switch components.
Table 2-16 Components of the Top-of-Rack Ethernet switch
AAS feature code | XCC feature code | Description | Minimum quantity
7309-HC3 | 1455-64C | IBM System Networking RackSwitch G8264 | 0a
7309-G52 | 1455-48E | IBM System Networking RackSwitch G8052 | 0a
ECB5 | A1PJ | 3m IBM Passive DAC SFP+ Cable | 1 per EN4093 switch
EB25 | A1PJ | 3m IBM QSFP+ DAC Break Out Cable | 0
a. One required when two or more Enterprise Chassis are configured
Table 2-18 lists the major components of the IBM Flex System p460 Compute Node.
Table 2-18 Components of IBM Flex System p460 Compute Node
AAS feature code | Description | Minimum quantity
IBM Flex System p460 Compute Node
7895-42x | IBM Flex System p460 Compute Node | 1
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | 2
1762 | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | 2
Base Processor 1 (Required, select only one, Min 1, Max 1)
EPR2 | 16 Cores, (4x 4 core), 3.3 GHz |
AAS feature code | XCC feature code | Description | Minimum quantity
EBK2 | 49Y8119 | IBM Flex System x240 USB Enablement Kit |
EBK3 | 41Y8300 | 2GB USB Hypervisor Key (VMware 5.0) |
2.2.5 IBM Flex System Manager
Table 2-20 lists the major components of the IBM Flex System Manager.
AAS feature code | XCC feature code | Description | Minimum quantity
6008 | 6008 | 8 GB Cache | 2
9730 | 9730 | Power cord to PDU (includes 2 power cords) | 1
9801 | 9801 | Power supplies | 2
a. If a Power Systems compute node is selected, then at least eight drives must be installed in the Storwize V7000. If an Intel Xeon based compute node is selected with SmartCloud Entry, then four drives must be installed in the Storwize V7000.
2.2.7 Rack cabinet
Table 2-22 lists the major components of the rack and options.
2.2.8 Software
This section lists the software features of IBM PureFlex System Standard.
AIX and IBM i
Table 2-23 lists the software features included with the Standard configuration on POWER7 processor-based compute nodes for AIX and IBM i.
Table 2-23 Software features for IBM PureFlex System Standard with AIX and IBM i on Power
AIX V6 | AIX V7 | IBM i V6.1 | IBM i V7.1
Security (PowerSC) | Not applicable | Not applicable | Not applicable | Not applicable
Cloud Software (optional) | Not applicable | Not applicable | Not applicable | Not applicable
RHEL and SUSE Linux on Power
Table 2-24 lists the software features included with the Standard configuration on POWER7 processor-based compute nodes for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) on Power.
Intel Xeon based compute nodes Table 2-25 lists the software features included with the Standard configuration on Intel Xeon based compute nodes.
2.2.9 Services
IBM PureFlex System Standard includes the following services:
Service & Support offerings:
– Software maintenance: 1 year 9x5 (9 hours per day, 5 days per week).
– Hardware maintenance: 3 years 9x5 Next Business Day service.
Maintenance and Technical Support (MTS) offerings:
– 3 years with one microcode analysis per year.
Lab Services:
– 5 days of on-site Lab services
– If the first compute node is a p260 or p460, 6911-300 is specified.
2.3.1 Chassis Table 2-27 lists the major components of the IBM Flex System Enterprise Chassis, including the switches and options. Feature codes: The tables in this section do not list all feature codes. Some features are not listed here for brevity.
2.3.2 Top-of-Rack Ethernet switch A minimum of two Top-of-Rack (TOR) Ethernet switches are required in the Enterprise configuration. Table 2-28 lists the switch components.
Table 2-30 lists the major components of the IBM Flex System p460 Compute Node.
Table 2-30 Components of IBM Flex System p460 Compute Node
AAS feature code | Description | Minimum quantity
IBM Flex System p460 Compute Node
7895-42x | IBM Flex System p460 Compute Node | 2
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | 2
1762 | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | 2
Base Processor 1 (Required, select only one, Min 1, Max 1)
EPR2 | 16 Cores, (4x 4 core), 3.3 GHz |
AAS feature code | XCC feature code | Description | Minimum quantity
EBK2 | 49Y8119 | IBM Flex System x240 USB Enablement Kit |
EBK3 | 41Y8300 | 2GB USB Hypervisor Key (VMware 5.0) |
2.3.5 IBM Flex System Manager
Table 2-32 lists the major components of the IBM Flex System Manager.
AAS feature code | XCC feature code | Description | Minimum quantity
6008 | 6008 | 8 GB Cache | 2
9730 | 9730 | Power cord to PDU (includes 2 power cords) | 1
9801 | 9801 | Power supplies | 2
a. If a Power Systems compute node is selected, then at least eight drives must be installed in the Storwize V7000. If an Intel Xeon based compute node is selected with SmartCloud Entry, then four drives must be installed in the Storwize V7000.
2.3.7 Rack cabinet
Table 2-34 lists the major components of the rack and options.
AIX and IBM i
Table 2-35 lists the software features included with the Enterprise configuration on POWER7 processor-based compute nodes for AIX and IBM i.
Table 2-35 Software features for IBM PureFlex System Enterprise with AIX and IBM i on Power
AIX 6 | AIX 7 | IBM i 6.1 | IBM i 7.1
RHEL and SUSE Linux on Power Table 2-36 lists the software features included with the Enterprise configuration on POWER7 processor-based compute nodes for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) on Power.
Intel Xeon based compute nodes (AAS) | Intel Xeon based compute nodes (HVEC)
Operating system | Varies | Varies
Virtualization | VMware ESXi selectable in the hardware configuration
Cloud Software (optional) | 5765-SCP SmartCloud Entry, 5662-SCP 3 yr SWMA | 5641-SC3 SmartCloud Entry, 3 yr SWMA
Optional components - Enterprise Expansion
IBM Storwize V7000 Software: 5639-EV1 V7000 External virtualization software; 5639-RM1 V7000 Remote Mirroring
IBM Flex System Manager: 5765-FMS IBM Flex
2.4 IBM SmartCloud Entry
In IT environments, you face the challenge of delivering new capabilities while data grows and the number of applications and the amount of physical hardware, such as servers, storage, and networks, keep increasing. The traditional means of deploying, provisioning, managing, and maintaining physical and virtual resources can no longer meet the demands of increasingly complex IT infrastructure.
Reliably track images to ensure compliance and minimize security risks.
Optimize resources, reducing the number of virtualized images and the storage required for them.
When you deploy VMs using this product, you:
Slash time to value for new workloads from months to a few days.
Deploy application images across compute and storage resources.
Provide user self-service for improved responsiveness.
Ensure security through VM isolation and project-level user access controls.
3 Chapter 3. Introduction to IBM Flex System
IBM Flex System is a solution composed of hardware, software, and expertise. The IBM Flex System Enterprise Chassis, the major hardware component, is the next generation platform that provides new capabilities in many areas:
Scalability
Current and future processors
Memory
Storage
Bandwidth and I/O speeds
Power
Energy efficiency and cooling
Systems management
Figure 3-1 shows the front and rear views of the Enterprise Chassis. Figure 3-1 IBM Flex System Enterprise Chassis - front and rear The chassis provides locations for 14 half-wide nodes, four scalable I/O switch modules, and two Chassis Management Modules. Current node configurations include half-wide and full-wide options. The chassis supports other configurations, such as full-wide by double-high. Power and cooling can be scaled up in a modular fashion as additional nodes are added.
Feature | Specifications
Management | One or two Chassis Management Modules, for basic chassis management. Two CMMs form a redundant pair; one CMM is standard in 8721-A1x. The CMM interfaces with the integrated management module (IMM) or flexible service processor (FSP) integrated in each compute node in the chassis. There is an optional IBM Flex System Manager management appliance for comprehensive management, including virtualization, networking, and storage management.
3.1 Compute nodes The IBM Flex System portfolio of servers, or compute nodes, includes IBM POWER7 and Intel Xeon processors. Depending on the compute node design, there are two form factors: Half-wide node: This node occupies one chassis bay, or half of the chassis width. An example is the IBM Flex System p260 Compute Node. Full-wide node: This node occupies two chassis bays side-by-side, or the full width of the chassis. An example is the IBM Flex System p460 Compute Node.
3.2 I/O modules The I/O modules or switches provide external connectivity to nodes outside the chassis and internal connectivity to the nodes in the chassis. These switches are scalable in terms of the number of internal and external ports that can be enabled, and how these ports can be used to aggregate bandwidth and create virtual switches within a physical switch. The number of internal and external physical ports available exceeds previous generations of products.
The internal connections between the node ports and the I/O module internal ports are defined as follows:
I/O modules 1 and 2: These two modules connect to the ports on an I/O expansion card in slot position 1 for a half-wide compute node (such as the p260) or slot positions 1 and 3 for a full-wide compute node (such as the p460).
Intel compute nodes: Certain Intel compute nodes offer an integrated local area network (LAN) on the system board (LOM). POWER based compute nodes do not have the LOM option.
The following Ethernet switches were announced at the time of writing:
IBM Flex System Fabric EN4093 10Gb Scalable Switch
– 42x internal ports, 14x 10 Gb and 2x 40 Gb (convertible to 8x 10 Gb) uplinks
– Base switch: 10x external 10 Gb uplinks, 14x internal 10 Gb ports
– Upgrade 1: Adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports
– Upgrade 2: Adds 4x external 10 Gb uplinks, 14x internal 10 Gb ports
IBM Flex System EN2092 1Gb Ethernet Scalable Switch
– 28 internal ports, 20 x 1 Gb and 4 x
For details about the available switches, see IBM PureFlex System and IBM Flex System Products & Technology, SG24-7984. 3.3 Systems management IBM Flex System uses a tiered approach to overall system management.
3.3.3 Chassis Management Module The Chassis Management Module (CMM) is a hot-swap module that is central to the management of the chassis and is required in each chassis. The CMM automatically detects any installed modules in the chassis and stores vital product data (VPD) from the modules. The CMM also acts as an aggregation point for the chassis nodes and switches, including enabling all of the management communications by Ethernet connection.
The hardware platform for FSM, although based on a Intel compute node, is not interchangeable with any other compute node. A unique expansion card, not available on other compute nodes, allows the software stack to communicate on the private management network. The FSM is available in two editions: IBM Flex System Manager and IBM Flex System Manager Advanced.
3.4 Power supplies A minimum of two and a maximum of six power supplies can be installed in the Enterprise Chassis (Figure 3-5). All power supply modules are combined into a single power domain in the chassis, which distributes power to each of the compute nodes and I/O modules through the Enterprise Chassis midplane.
Tip: N+1 in this context means one backup device for N devices: the backup can replace any single failed device, but only one at a time. N+N means that there are N backup devices for N devices, so up to N devices can fail and each has its own backup. The redundancy options are configured from the Chassis Management Module and can be changed nondisruptively. The five policies are shown in Table 3-2.
3.5 Cooling On the topic of Enterprise Chassis cooling, the flow of air in the Enterprise Chassis follows a front to back cooling path, where cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. Air movement is controlled by hot-swappable fan modules in the rear of the chassis and a series of internal dampers. The cooling is scaled up as required, based upon the number of nodes installed.
Figure 3-6 shows the locations of fan bays 1 through 10 at the rear of the chassis.
Figure 3-6 Enterprise Chassis fan module locations
3.5.1 Node cooling
There are two compute node cooling zones: zone 1 on the right side of the chassis, and zone 2 on the left side of the chassis (both viewed from the rear). The chassis can contain up to eight 80 mm fan modules across the two zones.
Figure 3-7 shows the node cooling zones (cooling zones 1 and 2) and the associated fan module locations.
Figure 3-7 Enterprise Chassis node cooling zones and fan module locations
When a node is not inserted in a bay, an airflow damper closes in the midplane to prevent air from being drawn through the unpopulated bay. Inserting a node into a bay opens the damper, allowing cooling of the node in that bay.
3.5.2 Switch and Chassis Management Module cooling There are two additional cooling zones for the I/O switch bays. These zones, zones 3 and 4, are on the right and left side of the bays, as viewed from the rear of the chassis. Cooling zones 3 and 4 are serviced by 40 mm fan modules that are included in the base configuration and cool the four available I/O switch bays.
4 Chapter 4. Product information and technology The IBM Flex System p260, p460, and p24L Compute Nodes are based on IBM POWER architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment, using advanced processing technology. In this chapter, we describe the server offerings and the technology used in their implementation.
Warranty and maintenance agreements
Software support and remote technical support
4.1 Overview
The Power Systems compute nodes for IBM Flex System have three variations tailored to your business needs. They are shown in Figure 4-1.
The IBM Flex System p260 Compute Node has the following features:
Two processors with up to 16 POWER7 processing cores
Sixteen DDR3 memory DIMM slots that support IBM Active Memory Expansion
Supports Very Low Profile (VLP) and Low Profile (LP) DIMMs
Two P7IOC I/O hubs
A RAID-capable SAS controller that supports up to two solid-state drives (SSDs) or hard disk drives (HDDs)
Two I/O adapter slots
Flexible Support Processor (FSP)
System management alerts
IBM Light Path Diagnostics
USB 2.0 port
Figure 4-2 shows the system board layout of the IBM Flex System p260 Compute Node, with callouts for the POWER7 processors, the 16 DIMM slots, the two I/O adapter connectors, the two I/O hubs, and a connector for future expansion. (The HDDs are mounted on the cover, located over the memory DIMMs.)
Figure 4-2 System board layout of the IBM Flex System p260 Compute Node
IBM Light Path Diagnostics
USB 2.0 port
IBM EnergyScale technology
Figure 4-3 shows the system board layout of the IBM Flex System p460 Compute Node, with callouts for the POWER7 processors, the 32 DIMM slots, and the four I/O adapter connectors (one I/O adapter is shown installed).
Figure 4-3 System board layout of the IBM Flex System p460 Compute Node
4.1.3 IBM Flex System p24L Compute Node The IBM Flex System p24L Compute Node shares several similarities to the IBM Flex System p260 Compute Node in that it is a half-wide, Power Systems compute node with two POWER7 processor sockets, 16 memory slots, two I/O adapter slots, and an option for up to two internal drives for local storage. The IBM Flex System p24L Compute Node is optimized for low-cost Linux on Power Systems Servers installations.
4.2 Front panel
The front panel of Power Systems compute nodes has the following common elements, as shown in Figure 4-4:
One USB 2.0 port
Power button and light path, light-emitting diode (LED) (green)
Location LED (blue)
Information LED (amber)
Fault LED (amber)
4.2.1 Light path diagnostic LED panel
The power button on the front of the server (Figure 4-4 on page 69) has two functions:
When the system is fully installed in the chassis: Use this button to power the system on and off.
When the system is removed from the chassis: Use this button to illuminate the light path diagnostic panel on the top of the front bezel, as shown in Figure 4-5.
If problems occur, you can use the light path diagnostics LEDs to identify the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. This action temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts towards a resolution.
Node bay labeling on IBM Flex System Enterprise Chassis Each bay of the IBM Flex System Enterprise Chassis has space for a label to be affixed to identify or provide information about each Power Systems compute node, as shown in Figure 4-7. Figure 4-7 Chassis bay labeling Pull-out labeling Each Power Systems compute node has two pull-out tabs that can also accommodate labeling for the server.
4.3 Chassis support The Power Systems compute nodes can be used only in the IBM Flex System Enterprise Chassis. They do not fit in the previous IBM modular systems, such as IBM iDataPlex or IBM BladeCenter. There is no onboard video capability in the Power Systems compute nodes. The machines are designed to use Serial Over LAN (SOL) or the IBM Flex System Manager (FSM). For more information about the IBM Flex System Enterprise Chassis, see Chapter 3, “Introduction to IBM Flex System” on page 47.
The overall system architecture for the p260 and p24L is shown in Figure 4-9. The block diagram shows the two POWER7 processors, each with eight DIMMs attached through SMI buffers, the two P7IOC I/O hubs with PCIe 2.0 x8 links to I/O connectors 1 and 2 and to the ETE connector, the SAS controller for the HDDs/SSDs, and the USB controller for the front panel.
The IBM Flex System p460 Compute Node system architecture is shown in Figure 4-10. The block diagram extends the p260 design to four POWER7 processors, each with its own DIMMs attached through SMI buffers, with P7IOC I/O hubs providing PCIe 2.0 x8 links to the I/O adapter connectors, plus the USB controller for the front panel.
The four processors in the IBM Flex System p460 Compute Node are connected in a cross-bar formation with 4-byte links, as shown in Figure 4-11.
Figure 4-11 IBM Flex System p460 Compute Node processor connectivity
4.5 IBM POWER7 processor
The IBM POWER7 processor represents a leap forward in technology and associated computing capability.
4.5.1 Processor options for Power Systems compute nodes
Table 4-1 defines the processor options for the Power Systems compute nodes.
Table 4-1 Processor options
Feature code | Cores per POWER7 processor | Number of POWER7 processors | Total cores | Core frequency | L3 cache size per POWER7 processor
IBM Flex System p260 Compute Node
EPR1 | 4 | 2 | 8 | 3.3 GHz | 16 MB
EPR3 | 8 | 2 | 16 | 3.2 GHz | 32 MB
EPR5 | 8 | 2 | 16 | 3.55 GHz | 32 MB
IBM Flex System p460 Compute Node
EPR2 | 4 | 4 | 16 | 3.3 GHz | 16 MB
EPR4 | 8 | 4 | 32 | 3.
This core deconfiguration feature can also be updated after installation by using the field core override option. The field core override option specifies the number of functional cores that are active in the compute node. The field core override option provides the capability to increase or decrease the number of active processor cores in the compute node. The compute node firmware sets the number of active processor cores to the entered value. The value takes effect when the compute node is rebooted.
4.5.3 Architecture IBM uses innovative methods to achieve the required levels of throughput and bandwidth.
POWER7 processor overview The POWER7 processor chip is fabricated with the IBM 45 nm silicon-on-insulator technology, using copper interconnects, and uses an on-chip L3 cache with eDRAM. The POWER7 processor chip is 567 mm2 and is built using 1,200,000,000 components (transistors). Eight processor cores are on the chip, each with 12 execution units, 256 KB of L2 cache, and access to up to 32 MB of shared on-chip L3 cache.
POWER7 processor core Each POWER7 processor core implements aggressive out of order (OoO) instruction execution to drive high efficiency in the use of available execution paths. The POWER7 processor has an instruction sequence unit that can dispatch up to six instructions per cycle to a set of queues. Up to eight instructions per cycle can be issued to the instruction execution units.
Figure 4-13 shows the evolution of simultaneous multithreading.
Intelligent threads The POWER7 processor features intelligent threads, which can vary based on the workload demand. The system automatically selects (or the system administrator manually selects) whether a workload benefits from dedicating as much capability as possible to a single thread of work, or if the workload benefits more from having this capability spread across two or four threads of work.
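As an illustration of selecting the thread mode manually, AIX exposes the SMT setting through the smtctl command. The following is a minimal sketch, assuming an AIX 6.1 or 7.1 virtual server on POWER7; verify the available options at your AIX level before using them:

  smtctl                 # display the current SMT capability and mode
  smtctl -t 2 -w now     # run the partition in SMT2 mode immediately
  smtctl -t 4 -w boot    # return to SMT4 mode at the next reboot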
Flexible POWER7 processor packaging and offerings POWER7 processors have the unique ability to optimize to various workload types. For example, database workloads typically benefit from fast processors that handle high transaction rates at high speeds. Web workloads typically benefit more from processors with many threads that allow the breakdown of web requests into many parts and handle them in parallel. POWER7 processors have the unique ability to provide leadership performance in either case.
Figure 4-15 shows the physical packaging options that are supported with POWER7 processors.
Figure 4-16 shows the FLR-L3 cache regions for the cores on the POWER7 processor die.
Small physical footprint The performance of eDRAM when implemented on-chip is similar to conventional SRAM but requires far less physical space. IBM on-chip eDRAM uses only one-third of the components used in conventional SRAM, which has a minimum of six transistors to implement a 1-bit memory cell. Low energy consumption The on-chip eDRAM uses only 20% of the standby power of SRAM.
Model | Minimum memory | Maximum memory
p24L | 24 GB | 256 GB (16 x 16 GB DIMMs)
Use a minimum of 2 GB of memory per core. The functional minimum memory configuration for the machine is 4 GB (two 2 GB DIMMs), but that is not sufficient for reasonable production usage of the machine.
Low Profile and Very Low Profile form factors
One benefit of deploying IBM Flex System systems is the ability to use Low Profile (LP) memory DIMMs. This design allows for more choices to configure the machine to match your needs.
There are 16 buffered DIMM slots on the p260 and p24L, as shown in Figure 4-17. The p460 adds two more processors and 16 additional DIMM slots, divided evenly (eight memory slots) per processor.
For the p260 and p24L, Table 4-6 shows the required placement of memory DIMMs, depending on the number of DIMMs installed.
The corresponding placement table for the p460 covers its 32 DIMM slots (eight for each of processors 0 through 3); for each supported quantity (12, 14, 16, 18, 20, and so on, up to 32 DIMMs), the table marks with an x which DIMM slots must be populated.
4.7 Active Memory Expansion The optional Active Memory Expansion feature is a POWER7 technology that allows the effective maximum memory capacity to be much larger than the true physical memory. Applicable to AIX V6.1 or later, this innovative compression and decompression of memory content using processor cycles allows memory expansion of up to 100%. This situation allows an AIX V6.
Figure 4-18 represents the percentage of processor used to compress memory for two partitions with various profiles. The green curve corresponds to a partition that has spare processing power capacity. The blue curve corresponds to a partition constrained in processing power.
Figure 4-19 shows an example of the output returned by this planning tool. The tool outputs various real memory and processor resource combinations to achieve the wanted effective memory and proposes one particular combination. In this example, the tool proposes to allocate 58% of a processor core, to benefit from 45% extra memory capacity.
Active Memory Expansion Modeled Statistics:
Modeled Expanded Memory Size: 8.00 GB
Expansion Factor: 1.21, 1.31, 1.41, 1.51, ...
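Expressed as a formula, the effective memory that AIX presents is the true (physical) memory multiplied by the configured expansion factor:

\[ \text{effective memory} = \text{true memory} \times \text{expansion factor} \]

Using the figures from the example above, a target of 8.00 GB of effective memory with an expansion factor of 1.45 requires only about \(8.00 / 1.45 \approx 5.5\) GB of physical memory, at the modeled cost of roughly 58% of one processor core for compression and decompression.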
4.8 Storage The Power Systems compute nodes have an onboard SAS controller that can manage up to two, non-hot-pluggable internal drives. Both 2.5-inch hard disk drives (HDDs) and 1.8-inch solid-state drives (SSDs) are supported. The drives attach to the cover of the server, as shown in Figure 4-20. Even though the p460 is a full-wide server, it has the same storage options as the p260 and the p24L.
4.8.1 Storage configuration impact to memory configuration
The type of local drive used (HDD or SSD) affects which memory DIMM form factor you can use: if HDDs are chosen, then only Very Low Profile (VLP) DIMMs can be used because of internal spacing. There is not enough room for the 2.5-inch drives to be used with Low Profile (LP) DIMMs (currently the 2 GB and 16 GB sizes). Verify your memory choice to make sure that it is compatible with the local storage configuration.
Feature code Part number Description 8207 74Y9114 177 GB SATA non-hot-swap SSD 7067 None Top cover for no drives on the p260 and the p24L 7005 None Top cover for no drives on the p460 (full-wide) No drives 4.8.3 Local drive connection On covers that accommodate drives, the drives attach to an interposer that connects to the system board when the cover is properly installed. This connection is shown in more detail in Figure 4-21.
On the system board, the connection for the cover’s drive interposer is shown in Figure 4-22. Figure 4-22 Connection for drive interposer card mounted to the system cover (connected to the system board through a flex cable) 4.8.4 RAID capabilities Disk drives and solid-state drives in the Power Systems compute nodes can be used to implement and manage various types of RAID arrays in operating systems that are on the ServerProven list.
Tip: Depending on your RAID configuration, you might need to create the array before you install the operating system in the compute node. Before you can create a RAID array, you must reformat the drives so that the sector size of the drives changes from 512 bytes to 528 bytes. If you later decide to remove the drives, delete the RAID array before you remove the drives.
Slot 1 requirements: You must have an EN4054 4-port 10Gb Ethernet Adapter (Feature Code #1762) or EN2024 4-port 1Gb Ethernet Adapter (Feature Code #1763) card installed in slot 1 of the Power Systems compute nodes. Similarly, you must have an EN4093 10Gb Scalable Switch (Feature Code #3593), EN2092 1Gb Ethernet Switch (Feature Code #3598) or EN4091 10Gb Ethernet Pass-thru Switch (Feature Code #3700) installed in bay 1 of the chassis. A typical I/O adapter is shown in Figure 4-23.
4.9.2 PCI hubs The I/O is controlled by two (IBM Flex System p260 Compute Node) or four (IBM Flex System p460 Compute Node) P7-IOC I/O controller hub chips. This configuration provides additional flexibility when assigning resources within Virtual I/O Server (VIOS) to specific Virtual Machine/LPARs. 4.9.3 Available adapters Table 4-9 shows the available I/O adapter cards for Power Systems compute nodes.
4.9.4 Adapter naming convention
Figure 4-24 shows the naming structure for the I/O adapters, using the IBM Flex System EN2024 4-port 1 Gb Ethernet Adapter as the example:
Fabric type: EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand
Series: 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for InfiniBand
Vendor name (where A=01): 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
Maximum number of ports: 4 = 4 ports
Figure 4-24 Naming structure for the I/O expansion cards
The IBM Flex System EN4054 4-port 10Gb Ethernet Adapter has the following features and specifications:
On-board flash memory: 16 MB for FC controller program storage
Uses standard Emulex SLI drivers
Interoperates with existing FC SAN infrastructures (switches, arrays, SRM tools (including Emulex utilities), SAN practices, and so on)
Provides 10 Gb MAC features, such as MSI-X support, jumbo frames (8 K bytes) support, VLAN tagging (802.
4.9.6 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter The IBM Flex System EN2024 4-port 1Gb Ethernet Adapter is a quad-port network adapter from Broadcom that provides 1 Gb per second, full duplex, Ethernet links between a compute node and Ethernet switch modules installed in the chassis. The adapter interfaces to the compute node using the PCIe bus. Table 4-11 lists the ordering part number and feature code.
Figure 4-26 shows the IBM Flex System EN2024 4-port 1Gb Ethernet Adapter. Figure 4-26 The EN2024 4-port 1Gb Ethernet Adapter for IBM Flex System 4.9.7 IBM Flex System FC3172 2-port 8Gb FC Adapter The IBM Flex System FC3172 2-port 8Gb FC Adapter from QLogic enables high-speed access for IBM Flex System Enterprise Chassis compute nodes to connect to a Fibre Channel storage area network (SAN).
Support for Fibre Channel service (classes 2 and 3)
Configuration and boot support in UEFI
The IBM Flex System FC3172 2-port 8Gb FC Adapter has the following specifications:
Bandwidth: 8 Gb per second maximum at half-duplex and 16 Gb per second maximum at full-duplex per port
Throughput: 3200 MBps (full-duplex)
Support for both FCP-SCSI and IP protocols
Support for point-to-point fabric connections: F-Port Fabric Login
Support for Fibre Channel Arbitrated Loop (FCAL) public loop profile: Fibre
4.9.8 IBM Flex System IB6132 2-port QDR InfiniBand Adapter The IBM Flex System IB6132 2-port QDR InfiniBand Adapter from Mellanox provides the highest performing and most flexible interconnect solution for servers used in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Table 4-13 lists the ordering part number and feature code.
Figure 4-28 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter. Figure 4-28 The IB6132 2-port QDR InfiniBand Adapter for IBM Flex System 4.10 System management There are several advanced system management capabilities built into Power Systems compute nodes. A Flexible Support Processor handles most of the server-level system management. It has features, such as system alerts and Serial-over-LAN capability, that we describe in this section.
4.10.2 Serial over LAN (SOL) The Power Systems compute nodes do not have an on-board video chip and do not support keyboard, video, and mouse (KVM) connections. Server console access is obtained by a SOL connection only. SOL provides a means to manage servers remotely by using a command-line interface (CLI) over a Telnet or Secure Shell (SSH) connection. SOL is required to manage servers that do not have KVM support or that are attached to the IBM Flex System Manager.
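As an illustration, an SOL session is normally opened from the CMM command-line interface after an SSH login. The following sketch is an assumption of the typical syntax; the exact target naming and the bay number depend on your CMM firmware level and chassis configuration:

  ssh USERID@<CMM IP address>
  console -T system:blade[3]     # open an SOL console to the compute node in bay 3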
4.10.3 Anchor card The anchor card, shown in Figure 4-29, contains the smart vital product data chip that stores system-specific information. The pluggable anchor card provides a means for this information to be transferable from a faulty system board to the replacement system board. Before the service processor knows what system it is on, it reads the smart vital product data chip to obtain system information.
4.12 IBM EnergyScale IBM EnergyScale technology provides functions that help you to understand and dynamically optimize the processor performance versus processor power and system workload, to control IBM Power Systems power and cooling usage. The BladeCenter Advanced Management Module and IBM Systems Director Active Energy Manager use EnergyScale technology, enabling advanced energy management features to conserve power and improve energy efficiency.
Power Saver Mode Power Saver Mode reduces the processor frequency and voltage by a fixed amount, reducing the energy consumption of the system, while delivering predictable performance. This percentage is predetermined to be within a safe operating limit and is not user configurable. The server is designed for a fixed frequency drop of 50% from nominal.
Soft power capping Soft power capping extends the allowed energy capping range further, beyond a region that can be guaranteed in all configurations and conditions. When an energy management goal is to meet a particular consumption limit, soft power capping is the mechanism to use. Processor core nap The IBM POWER7 processor uses a low-power mode called nap that stops processor execution when there is no work to be done by that processor core.
The IBM POWER7 chip provides significant improvement in power and performance over the IBM POWER6 chip.
4.15 Software support and remote technical support IBM also offers technical assistance to help solve software-related challenges. Our team assists with configuration, how-to questions, and setup of your servers. Information about these options is at the following website: http://ibm.com/services/us/en/it-services/tech-support-and-maintenanceservices.html
5 Chapter 5. Planning
In this chapter, we describe the steps you should take before ordering and installing Power Systems compute nodes as part of an IBM Flex System solution. We cover the following topics in this chapter:
Planning your system: An overview
Network connectivity
SAN connectivity
Configuring redundancy
Dual VIOS
Power planning
Cooling
Planning for virtualization
5.1 Planning your system: An overview
One of the initial tasks for your team is to plan for the successful implementation of your Power Systems compute node. This planning includes ensuring that the primary reasons for acquiring the server are addressed. Consider the overall uses for the server, the planned growth of your applications, and the operating systems in your environment. Correct planning of these issues ensures that the server meets the needs of your organization.
Memory
Your Power Systems compute node supports a wide range of memory configurations. The memory configuration depends on whether you have internal disks installed, as described in “Hard disk drives (HDDs) and solid-state drives (SSDs)” on page 118. Mixing both types of memory is not recommended. Active memory expansion (AME) is available on POWER7, as is Active Memory Sharing (AMS) when using PowerVM Enterprise Edition.
In addition, the Virtual I/O Server can be installed in special virtual servers that provide support to the other operating systems for using features such as virtualized I/O devices, PowerVM Live Partition Mobility, or PowerVM Active Memory Sharing. For details about the software available on IBM Power Systems servers, see the IBM Power Systems Software™ website at: http://www.ibm.com/systems/power/software/ Note: The p24L supports Virtual I/O Server (VIOS) and Linux only.
AIX V6.1
The supported versions are:
AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3, with APAR IV14283
AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later (planned availability is 29 June 2012)
AIX V6.1 with the 6100-06 Technology Level, with Service Pack 8, or later (planned availability is 29 June 2012)
For information about AIX V6.1 maintenance and support, go to the Fix Central website at: http://www.ibm.
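Before moving an existing AIX image to a compute node, you can confirm its technology level, service pack, and a specific APAR from the AIX command line. The level shown is only an illustrative example, and the APAR is the one cited above:

  oslevel -s            # reports, for example, 6100-07-03-1207 (TL7, SP3)
  instfix -ik IV14283   # confirms whether APAR IV14283 is installed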
Linux
Linux is an open source operating system that runs on numerous platforms, from embedded systems to mainframe computers. It provides a UNIX-like implementation across many computer architectures. At the time of this writing, the supported versions of Linux on POWER7 processor technology-based servers are as follows:
SUSE Linux Enterprise Server 11 Service Pack 2 for POWER, with current maintenance updates available from Novell to enable all planned functionality
Red Hat Enterprise Linux 5.
When you install AIX V6.1 TL7 and AIX V7.1 TL1, you can virtualize through WPARs, as described in 8.3.1, “Installing AIX” on page 364. (Older versions of AIX 5L V5.2 and V5.3 on lower TL levels can run WPARs within a Virtual Server running AIX V7.) Also, Linux installations are supported on the Power Systems compute node. Supported versions are listed in “Operating system support” on page 119.
Implementing a dual VIOS solution is the best way to achieve a high availability (HA) environment. This environment allows for maintenance on one VIOS without disrupting the clients, and avoids depending on just one VIOS to do all of the work functions. Dual VIOS: If you want a dual VIOS environment, external disk access is required, as the two internal disks are connected to the same SAS/SATA controller. The two internal disks are used for the rootvg volume group on one VIOS only. 5.
Detailed information about I/O module configuration can be found in IBM PureFlex System and IBM Flex System Products & Technology, SG24-7984. The available Ethernet switches and pass-through modules are listed in Table 5-2.
5.2.2 VLANs Virtual LANs (VLANs) are commonly used in the Layer 2 network to split up groups of network users into manageable broadcast domains, to create a logical segmentation of workgroups, and to enforce security policies among logical segments. VLAN considerations include the number and types of VLANs supported, VLAN tagging protocols supported, and specific VLAN configuration protocols implemented. All IBM Flex System switch modules support the 802.1Q protocol for VLAN tagging. Another usage of 802.
5.3 SAN connectivity
SAN connectivity in the Power Systems compute nodes is provided by the expansion cards. The SAN Fibre Channel (FC) adapters currently supported by the Power Systems compute nodes are listed in Table 5-4. For more details about the supported expansion cards, see 4.9, “I/O adapters” on page 99.
5.4 Configuring redundancy Your environment might require continuous access to your network services and applications. Providing highly available (HA) network resources is a complex task that involves the integration of multiple hardware and software components. One HA component is to provide network infrastructure availability. This availability is required for both network and SAN connectivity. 5.4.
– Virtual Link Aggregation Groups (VLAG)
– Virtual Router Redundancy Protocol (VRRP)
– Routing protocol (such as RIP or OSPF)
Redundant network topologies
The IBM Flex System Enterprise Chassis can be connected to the enterprise network in several ways, as shown in Figure 5-1.
Topology 2 in Figure 5-1 on page 129 has each switch module in the chassis with two direct connections to two enterprise switches. This topology is more advanced, and it has a higher level of redundancy, but certain specific protocols such as Spanning Tree or Virtual Link Aggregation Groups must be implemented. Otherwise, network loops and broadcast storms can cause the meltdown of the network.
Spanning Tree Protocol
Spanning Tree Protocol is an 802.
Layer 2 failover
Depending on the configuration, each compute node can have one IP address for each Ethernet port, or it can have one virtual NIC consisting of two or more physical interfaces with one IP address. This configuration is known as NIC teaming technology. From an IBM Flex System perspective, NIC teaming is useful when you plan to implement high availability configurations with automatic failover if there are internal or external uplink failures.
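On an AIX virtual server, this style of NIC teaming is implemented as an EtherChannel in Network Interface Backup mode. The sketch below assumes two Ethernet adapters, ent0 as the primary and ent1 as the backup; the adapter names are examples only, and smitty etherchannel can be used instead of the raw mkdev call:

  # Create a Network Interface Backup EtherChannel from ent0 (primary) and ent1 (backup)
  mkdev -c adapter -s pseudo -t ibm_ech \
        -a adapter_names=ent0 -a backup_adapter=ent1
  # The command creates a new adapter (for example, ent2); place the IP address on its interface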
Important: To avoid possible issues when you replace a failed switch module, do not use automatic failback for NIC teaming. A newly installed switch module has no configuration data, and it can cause service disruption. Virtual Link Aggregation Groups (VLAGs) In many data center environments, downstream switches connect to upstream devices which consolidate traffic, as shown in Figure 5-2.
VLAGs are also useful in multi-layer environments for both uplink and downlink redundancy to any regular LAG-capable device, as shown in Figure 5-3.
Figure 5-3 VLAG with multiple layers
Consider the scenario of dual FC adapter, dual SAN switch redundancy, with storage attached through a SAN, for a p460. In this scenario, the operating system has four paths to each storage device, and the behavior of the multipathing driver might vary, depending on the storage and switch type. This scenario is one of the best for high availability.
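On an AIX client or a VIOS, the state of those four paths and the multipathing behavior can be checked and tuned with the standard MPIO commands. The device name below is an example, and the supported attribute values depend on the storage driver in use:
# list the paths and their state for a disk
lspath -l hdisk2
# show the current MPIO attributes
lsattr -El hdisk2 | grep -E 'algorithm|reserve_policy'
# spread I/O across paths and release SCSI reservations (deferred until reboot with -P)
chdev -l hdisk2 -a algorithm=round_robin -a reserve_policy=no_reserve -P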
Another scenario for the p260 is one in which the redundancy in the configuration is provided by the Fibre Channel switches in the chassis. There is no hardware redundancy on the compute node, because it has only two expansion cards, with one used for Ethernet access and the other for Fibre Channel access. For this reason, if a Fibre Channel or Ethernet adapter fails on the compute node, redundancy is not maintained. Figure 5-5 shows this scenario.
Dual VIOS: The dual VIOS environment is not currently supported in the p260 or in the p24L. This feature might be added in future versions based on certain adapter configurations. 5.5.1 Dual VIOS on Power Systems compute nodes One of the capabilities that is available with Power Systems compute nodes managed by an IBM Flex System Manager is the ability to implement dual Virtual I/O Servers in the same way as SDMC- or HMC-managed systems are able to.
Two Fibre Channel adapters (using FC3172 2-port 8Gb FC Adapters)
One IBM Flex System Enterprise Chassis, with at least one Ethernet switch or pass-through module and one Fibre Channel switch or pass-through module
As mentioned earlier, if only one Ethernet adapter is used, its four ports are assigned in pairs to each of the two VIOS virtual servers; if two Ethernet adapters are used, each Ethernet adapter on the p460 is assigned to one VIOS. Similarly, each FC adapter on the p460 is assigned to one VIOS.
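Before assigning the adapters, it can help to list the physical I/O slots that the hypervisor sees and their current owners. Assuming the HMC-compatible CLI is available on the FSM (as suggested by the note about SDMC scripts in 7.2.1), a sketch might look like the following, where the managed-system name is an example:
# list the physical I/O slots, their descriptions, and the owning virtual server
lshwres -r io --rsubtype slot -m Server-7895-42X-SN106011B \
  -F drc_name,description,lpar_name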
5.6 Power planning
When planning power for your Power Systems compute node, consider the estimated high and low power consumption of the server, the power supply features installed in the chassis, and tools such as Active Energy Manager. You can use these features to manage, measure, and monitor your energy consumption.
Power cabling: 32A at 380-415V 3-phase (international) As shown in Figure 5-7, one 3-phase 32A wye PDU (WW) can provide power feeds for two chassis. In this case, an appropriate 3-phase power cable is selected for the Ultra-Dense Enterprise PDU+, which then splits the phases, supplying one phase to each of the three PSUs within each chassis. One 3-phase 32A wye PDU can power two fully populated chassis within a rack.
Power cabling: 60A at 208V 3-phase (North America) In North America, this configuration requires four 60A 3-phase delta supplies at 200 - 208 VAC, so an optimized 3-phase configuration is shown in Figure 5-8.
Figure 5-8 Example power cabling 60 A at 208 V 3-phase configuration
80 PLUS is a performance specification for power supplies used within servers and computers. To meet the 80 PLUS standard, the power supply must have an efficiency of 80% or greater at 20, 50, and 100 percent of rated load, with a Power Factor (PF) of 0.9 or greater. The standard has several grades, such as Bronze, Silver, Gold, and Platinum. Further information about 80 PLUS is at the following website: http://www.80PLUS.
The integral power supply fans do not depend on the power supply being functional. Rather, they are powered from the midplane and operate independently of the power supply. For detailed information about the power supply features of the chassis, see the IBM Flex System Power Guide, available at the following website: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4401
Power redundancy settings
There are five power management redundancy settings available for selection:
AC Power Source Redundancy: Intended for dual AC power sources into the chassis. Maximum input power is limited to the capacity of two power modules. This approach is the most conservative one, and is best used when all four power modules are installed. When the chassis is correctly wired with dual AC power sources, one AC power source can fail without affecting compute node operation.
The power redundancy options are shown in Figure 5-11. We clicked Change (next to Power Redundancy with Compute Node Throttling Policy) to show the power redundancy options. Figure 5-11 Changing the redundancy 5.6.5 Power limiting and capping policies Simple power capping policies can be set to limit the amount of power consumed by the chassis.
The power capping options can be set as shown in Figure 5-12. Figure 5-12 Setting power capping in the Chassis Management Module 5.6.6 Chassis power requirements It is expected that the initial configuration (based on the IBM PureFlex System configuration that is ordered), plus any additional nodes, contains the necessary number of power supplies.
Number of half-wide    500 W ITE                600 W ITE                700 W ITE
compute nodes          Power supplies (a) Fault (b)   Power supplies (a) Fault (b)   Power supplies (a) Fault (b)
13                     6   444 W                6   444 W
12                     6   481 W                6   481 W                6   481 W
11                     6   500 W                6   525 W                6   525 W
10                     4   335 W                6   577 W                6   577 W
9                      4   380 W                4   380 W                6   650 W
8                      4   437 W                4   437 W                6   700 W
7                      4   499 W                4   499 W                4   499 W
6                      4   500 W                4   583 W                4   583 W
5                      4   500 W                4   600 W                4   700 W
4                      2   305 W                4   600 W                4   700 W
3                      2   407 W                2   407 W                2   407 W
2                      2   500 W                2   6
Number of full-wide    1000 W ITE               1200 W ITE               1400 W ITE
compute nodes          Power supplies (a) Fault (b)   Power supplies (a) Fault (b)   Power supplies (a) Fault (b)
6                      6   962 W                6   962 W                6   962 W
5                      4   670 W                6   1154 W               6   1154 W
4                      4   874 W                4   874 W                6   1400 W
3                      4   1000 W               4   1166 W               4   1166 W
2                      2   610 W                4   1200 W               4   1400 W
1                      2   1000 W               2   1200 W               2   1222 W
a. Theoretical number. Might require unrealistic throttle levels.
b.
Number of half-wide    500 W ITE                600 W ITE                700 W ITE
compute nodes          Power supplies (a) Fault (b)   Power supplies (a) Fault (b)   Power supplies (a) Fault (b)
5                      3   500 W                3   600 W                3   700 W
4                      2   305 W                3   600 W                3   700 W
3                      2   407 W                2   407 W                2   407 W
2                      2   500 W                2   600 W                2   611 W
1                      2   500 W                2   600 W                2   700 W
a. Theoretical number. Might require unrealistic throttle levels.
b.
5.7 Cooling The flow of air within the Enterprise Chassis follows a front to back cooling path; cool air is drawn in at the front of the chassis and warm air is exhausted to the rear. There are two cooling zones for the nodes: a left zone and a right zone. The cooling is scaled up as required, based upon which node bays are populated. The number of cooling fans required for a given number of nodes is described further in this section.
The minimum configuration of 80 mm fans is four, which provide cooling for a maximum of four half-wide nodes, as shown in Figure 5-13. This configuration is the base configuration.
To cool more than eight half-wide (or four full-wide) nodes, all the fans must be installed, as shown in Figure 5-15.
Figure 5-15 Eight 80 mm fan modules support from 7 to 14 nodes
Active Energy Manager uses agent-less technology, so no agents need to be installed on the endpoints. Monitoring and management functions apply to all IBM systems that are enabled for IBM Systems Director Active Energy Manager. Monitoring functions include power trending, thermal trending, IBM and non-IBM PDU support, support for facility providers, energy thresholds, and altitude input. Management functions include power capping and power savings mode.
The key element for planning your partitioning is knowing the hardware in your Power Systems compute node, because that hardware is the only limit on your virtual servers. Adding VIOS to the equation removes many of those limitations. Support for IVM: IVM is not supported on the Power Systems compute nodes in IBM Flex System.
Sample Configuration 2: One IBM Flex System p460 Compute Node, with two IBM Flex System EN2024 4-port 1Gb Ethernet Adapters, 32 GB of memory, and two IBM Flex System FC3172 2-port 8Gb FC Adapters.
Virtual Server 2 consists of: – 0.
Sample configurations for VIOS installations are: Sample Configuration 1: One IBM Flex System p460 Compute Node, with two IBM Flex System EN2024 4-port 1Gb Ethernet Adapters, 32 GB of memory, and two IBM Flex System FC3172 2-port 8Gb FC Adapters.
Chapter 6. Management setup
The IBM Flex System Enterprise Chassis brings a whole new approach to management. This approach is based on a global management appliance, the IBM Flex System Manager (FSM), which you can use to view and manage functions for all of your Enterprise Chassis components. These components include the Chassis Management Module, I/O modules, compute nodes, and storage.
6.1 IBM Flex System Enterprise Chassis security The focus of IBM on smarter computing is evident in the improved security measures implemented in the IBM Flex System Enterprise Chassis. Today’s world of computing demands tighter security standards and native integration with computing platforms. For example, the virtualization movement increased the need for a high degree of security, as more mission-critical workloads are consolidated to fewer and more powerful servers.
The Enterprise Chassis ships with secure settings by default, with two security policy settings supported: Secure: This setting is the default one.
The centralized security policy makes the Enterprise Chassis easy to configure. In essence, all components run the same security policy that is provided by the Chassis Management Module. This configuration ensures that all I/O modules run with a hardened attack surface, as shown in Figure 6-1.
Figure 6-2 shows a sample configuration of HTTPS access to the Chassis Management Module. Figure 6-2 HTTPS setup 6.2 Chassis Management Module The Chassis Management Module manages hardware elements within a single chassis. As such, the Chassis Management Module is central to chassis management and is required in the Enterprise Chassis. Chapter 6.
The Chassis Management Modules are inserted in the back of the chassis, and are vertically oriented. When looking at the back of the chassis, the Chassis Management Module bays are on the far right. The Chassis Manager tab in FSM shows this configuration clearly, as shown in Figure 6-3.
For a hardware overview of the CMM, see IBM PureFlex System and IBM Flex System Products & Technology, SG24-7984. 6.2.1 Overview of the Chassis Management Module The Chassis Management Module (CMM) is a hot-swap module that provides system management functions for all devices installed in the Enterprise Chassis. An Enterprise Chassis comes with at least one CMM and supports module redundancy. Only one module is active at a time.
If a DHCP response is not received within 3 minutes of the CMM Ethernet port being connected to the network, the CMM uses the factory default IP address and subnet mask. During this 3-minute interval, the CMM is not accessible. The CMM has the following default settings:
IP address: 192.168.70.100
Subnet: 255.255.255.0
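If no DHCP server is present, one way to reach the CMM at its factory default address is to put a workstation port on the same subnet and then browse to the CMM. The interface name and the spare address below are examples (a Linux workstation is assumed):
# give the workstation a temporary address on the 192.168.70.0/24 subnet
sudo ip addr add 192.168.70.50/24 dev eth0
# confirm the CMM answers, then open https://192.168.70.100 in a browser
ping -c 3 192.168.70.100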
To perform the initial configuration, complete the following steps: 1. Open a browser and go to the IP address of the CMM, either the DHCP-obtained address or the default IP settings. The Login window opens, as shown in Figure 6-6. Figure 6-6 CMM login Chapter 6.
2. Log in with the default user ID and password, and a window opens that shows the system status information and a graphic view of the chassis (see Figure 6-11 on page 169). Across the top of this window are groups that can be expanded for specific functions. The initial setup wizard is contained in the Mgt. Module Group in the Configuration function, as shown in Figure 6-7. Figure 6-7 CMM management group 3. Several options are shown that can be used to manage the Chassis Management Module configuration.
When the wizard starts, the first window shows the steps that are performed on the left side of the window, and a basic description of the steps in the main field. Figure 6-9 shows the Welcome window of the setup wizard. This wizard is similar to other IBM wizards. Navigation buttons for the wizard are in the lower left of each window. Figure 6-9 Chassis Management Module initial setup wizard Welcome window 4. Proceed through each step of the wizard by clicking Next, entering the information as required.
6.2.2 CMM functions The Chassis Management Module web interface has a menu structure at the top of each page that gives you easy access to most functions, as shown in Figure 6-10.
System Status tab The System Status tab shows the System Status window, which is the default window when you enter the CMM web interface (Figure 6-11). You can also access this window by clicking System Status. This window shows a graphical systems view of a selected chassis, active events, and general systems information. 4. System Information 1. Active Events 5. Selected Component Actions 2. Selected Active Component 3.
2. Selected active components: All major components of the chassis can be clicked for more information. Select a component of interest (in Figure 6-11 on page 169, we select the IBM Flex System p460 Compute Node), and a dialog box opens with information about that component, for example, serial number, name, bay, and so on. You can power a component on or off in this dialog box, or view other details about the component. 3.
Multi-Chassis Monitor tab In the Multi-Chassis Monitor tab, you can manage and monitor other IBM Flex System chassis, as shown in Figure 6-12. Click the chassis name to show details about that chassis. Click the link to start the CMM interface for that chassis. 4. Extended Chassis properties 1. Discover new chassis. 3. Manage other chassis. 2. Chassis properties Figure 6-12 Multi-Chassis Monitor tab The following selections are available (with the numbers that match callouts in Figure 6-12): 1.
4. Chassis properties: After you select a chassis from the Chassis Information tab, click the Events Log dialog box and the Chassis Properties tab opens, showing your IP address, location, compute nodes, and so on. Events tab This tab (Figure 6-13) has two options, Event Log (shown in Figure 6-14), and Event Recipients, which provide options to send an SNMP alert or send an email using Simple Mail Transfer Protocol (SMTP). Figure 6-13 Events tab
The callouts shown in Figure 6-14 on page 172 are described as follows: 1. Event overview: This grid shows general information about the event listing, including severity, source, sequence, date, and message. 2. Event detail: Click the More link to show detailed information about the selected event. 3. Event actions menu: Several options are available to manage logs: a. You can use the Export option to export your event log in various formats (.csv, XML, or PDF). b.
3. Advanced Status: This menu item provides advanced service information and additional service tasks. You might be directed by IBM Support staff to review or perform tasks in this section. 4. Download Service Data: Using this menu item, you can download Chassis Management Module data, send management module data to an email recipient (SMTP must be set up first), and download blade data. Chassis Management tab This menu is used for reviewing or changing the properties of the components of the chassis.
Chassis tab Clicking Chassis from the menu shows a window where you can view or change chassis-level data (Figure 6-17). Figure 6-17 Chassis tab
Compute Nodes tab Clicking Compute Nodes from Figure 6-16 on page 174 shows a window that lists the servers installed in the chassis. Clicking one of the names in the Device Name column opens a window with details about that server, as shown in Figure 6-18.
I/O Modules tab The I/O Modules window is similar to the Compute Nodes window. A grid opens and shows the I/O modules. Clicking a module name opens other panes with the properties of that module (Figure 6-19). Figure 6-19 I/O Modules tab
Fans and Cooling tab The Fans and Cooling window shows the fans and their operational status. Select an item in the list to show information about it (Figure 6-20). Figure 6-20 Fans and Cooling tab Power Modules and Management tab In the Power Modules and Management window (Figure 6-21), you can manage the power subsystem.
The Power Modules and Management window has the following features: The Policies tab shows the power policies that are currently enabled. If you click Change in Figure 6-21 on page 178, you can modify the current policy in the window that opens (Figure 6-22). Figure 6-22 Power Management Policies window
The Hardware tab lists the power supply details (Figure 6-23).
The Input Power and Allocation tab shows charts and details of energy use on the chassis. Figure 6-24 shows an example of one of these charts. Figure 6-24 Chassis power allocation window The Power History tab graphically shows historical power consumption at selected intervals. You can use the Power Scheduling tab to configure schedules to power off or on or power cycle one or more compute nodes based on location in the chassis, serial number, or machine-type-model number.
Hardware topology You can use this menu item to view all the hardware installed in the chassis, right down to the individual component level, such as a DIMM. Figure 6-25 shows an example. Figure 6-25 Hardware topology Reports This menu item shows reports that list all MAC addresses or unique IDs used by components in the chassis.
Mgmt Module Management tab This tab, shown in Figure 6-26, has options for performing user management tasks, firmware upgrades, security management, network management, and so on. Figure 6-26 Mgmt Module Management tab
User Accounts This option provides access to user accounts and permission groups, for which you can add users, change passwords, and create groups for access to specific functions. Click a user name to view additional information, as shown in Figure 6-27. Figure 6-27 User and group management Firmware This menu enables firmware upgrades and views of the current firmware state.
Security You can use this menu to configure security policies and set up a certificate authority (CA), enable HTTPS or SSH access, and configure an LDAP for logins. Figure 6-28 shows the Security Policies tab of this window. Figure 6-28 Security Policies tab
Network All network setup options for your chassis are available in this menu. The Chassis Management Module supports IPv4 and IPv6. You can set up SMTP, Domain Name System (DNS), Lightweight Directory Access Protocol (LDAP), Hypertext Transfer Protocol (HTTP), and so on, as shown on Figure 6-29. Figure 6-29 Network Protocol Properties window Configuration You can use this menu to back up and restore your Chassis Management Module configuration.
Properties You can use this window to set up your Chassis Management Module name, time and date, and standby Chassis Management Module management details. Figure 6-30 shows an example. Figure 6-30 Management Module Properties License Key Management You can use this window to manage all of your chassis licensed features. Figure 6-31 shows an example. Figure 6-31 License Key Management 6.2.
Important: When a Power Systems compute node is discovered and managed by a Flex System Manager, Serial Over LAN (SOL) must be disabled. Given that the Power Systems compute node is ordered as part of one of the IBM PureFlex System configurations, it is discovered and managed by a Flex System Manager. Therefore, the Chassis Management Module access is disabled in most cases.
2. Log in with your user ID and password. The System Status window of the Chassis Management Module opens, as shown in Figure 6-33, with the Chassis tab active. If not, click System Status from the menu bar at the top of the window. Figure 6-33 Chassis Management Module with node in bay 3 selected 3. Select the Power Systems compute node image of the chassis. Figure 6-33 shows the node in bay 3 selected. The Actions menu to the right of the graphics is useful when working with the node. Chapter 6.
4. Click More Actions Launch Blade Console to access the option to launch a console (Figure 6-34). 3 Figure 6-34 Launch console on Power Systems compute node from Chassis Management Module 5. Enter the IP address of the node or select it from the menu. 6. Power on the node using the Power On option in the Actions menu. The resulting progress indicator is shown in Figure 6-35. Figure 6-35 Power on the Power Systems compute node You interact with the node as it boots.
6.3 Management network In an IBM Flex System Enterprise Chassis, you can configure separate management and data networks. The management network is a private and secure Gigabit Ethernet network used to complete management-related functions throughout the chassis, including management tasks related to the compute nodes, switches, and the chassis itself. The management network is shown in Figure 6-36 (it is the blue line). It connects the CMM to the compute nodes, the switches in the I/O bays, and the FSM.
The yellow line in Figure 6-36 on page 191 shows the production data network. The FSM also connects to the production network (Eth1) so that it can access the Internet for product updates and other related information. Important: The management node console can be connected to the data network for convenience of access. One of the key functions that the data network supports is discovery of operating systems on the various network endpoints.
The management node comes standard without any entitlement licenses, so you must purchase a license to enable the required FSM functionality. As described in Chapter 2, “IBM PureFlex System” on page 15, there are two versions of IBM Flex System Manager: base and advanced.
Figure 6-37 shows a front view of the FSM.
Figure 6-38 shows the internal layout and major components of the FSM. Cover Heat sink Microprocessor Microprocessor heat sink filler SSD and HDD backplane I/O expansion adapter ETE adapter Hot-swap storage cage SSD interposer SSD drives SSD mounting insert Air baffles Hot-swap storage drive Storage drive filler DIMM filler DIMM Figure 6-38 Exploded view of the IBM Flex System Manager node showing major components The FSM comes preconfigured with the components described in Table 6-1.
Feature: Description
Memory: 8x 4GB (1x 4GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM
SAS Controller: One LSI 2004 SAS Controller
Disk: 1x IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD; 2x IBM 200GB SATA 1.
Front controls The FSM has similar controls and LEDs as the IBM Flex System x240 Compute Node. Figure 6-40 shows the front of an FSM with the location of the control and LEDs.
Management network adapter The management network adapter is a standard feature of the FSM and provides a physical connection into the private management network of the chassis. The adapter is shown in Figure 6-38 on page 195 as the everything-to-everything (ETE) adapter. The management network adapter contains a Broadcom 5718 Dual 1GbE adapter and a Broadcom 5389 8-port L2 switch. This card is one of the features that makes the FSM unique compared to all other nodes supported by the Enterprise Chassis.
6.4.2 Software features The main features of IBM Flex System Manager management software are: Monitoring and problem determination – A real-time multichassis view of hardware components with overlays for additional information. – Automatic detection of issues in your environment through event setup that triggers alerts and actions. – Identification of changes that might impact availability. – Server resource utilization by a virtual machine or across a rack of systems.
– Inventory of physical storage configuration. – Health status and alerts. – Storage pool configuration. – Disk sparing and redundancy management. – Virtual volume management. – Support for virtual volume discovery, inventory, creation, modification, and deletion. Virtualization management (base feature set) – Support for VMware, Hyper-V, KVM, and IBM PowerVM. – Create virtual servers. – Edit virtual servers. – Manage virtual servers. – Relocate virtual servers.
– Group storage systems together using storage system pools to increase resource utilization and automation. – Manage storage system pools by adding storage, editing the storage system pool policy, and monitoring the health of the storage resources. Additional features – A resource-oriented chassis map provides an instant graphical view of chassis resources, including nodes and I/O modules.
6.5 FSM initial setup FSM is an appliance that is delivered with all the required software preinstalled. When this software stack is started for the first time, a startup wizard starts that steps through the required configuration process, such as licensing agreements and Transmission Control Protocol/Internet Protocol (TCP/IP) configuration for the appliance. When configuration is complete, the FSM is ready to manage the chassis it is installed in and other chassis (up to four).
To initiate an IMMv2 remote console session, complete the following steps: 1. Start a browser session, as shown in Figure 6-42, to the IP address of the FSM IMMv2. Important: The IP address of the IMMv2 of Intel compute nodes can be determined by using the Chassis Management Module or CLI. By default, the interface is set to use DHCP.
2. After logging in to the IMMv2, click Server Management from the navigation options, as shown in Figure 6-43.
3. In the Remote Control window, click Start remote control in single-user mode, as shown in Figure 6-44. This action starts a Java applet on the local desktop that is used as a console session to the FSM. Figure 6-44 Starting remote console from IMMv2 Figure 6-45 shows the Java console window opened to the FSM appliance before power is applied. Figure 6-45 FSM console in power off state Chapter 6.
4. The FSM can be powered in several ways, including the physical power button on the FSM, or from the Chassis Management Module. For this example, using the Tools/Power/On option from the remote console menu, as shown in Figure 6-46, is the most convenient.
As the FSM powers up and boots, the process can be monitored, but no input is accepted until the License Agreement window, shown in Figure 6-47, opens. Figure 6-47 FSM license agreement 5. Click I agree to continue, and the startup wizard Welcome window opens, as shown in Figure 6-48. Figure 6-48 FSM Welcome window Chapter 6.
6. Click Date and Time from the wizard menu to open the window shown in Figure 6-49. Set the time, date, time zone, and Network Time Protocol server, as needed. Figure 6-49 Setting the FSM date and time Click Next.
7. Create a user ID and password for accessing the GUI and CLI. User ID and password maintenance, including creating additional user IDs, is available in IBM Flex System Manager after the startup wizard completes. Figure 6-50 shows the creation of user ID USERID and entering a password. Figure 6-50 FSM system level user ID and password step Click Next to continue. Chapter 6.
8. Network topology options include separate networks for management and data, or a single network for both data and management traffic from the chassis. The preferred practice is to have separate management and data networks. To simplify this example, a combined network is configured, using the topology on the right side of Figure 6-51. Figure 6-51 FSM network topology options Click Next to continue to the actual network configuration. 9. The LAN adapter configuration is shown in Figure 6-52 on page 211.
The second LAN adapter represents one of the integrated Ethernet ports or LAN on motherboard (LOM). Traffic from this adapter flows through the Ethernet switch in the first I/O switch bay of the chassis, and is used as a separate data connection to the FSM. The radio button for the first adapter is preselected (Figure 6-52). Figure 6-52 FSM LAN adapter configuration Click Next to continue. Chapter 6.
10.The Configure IP Address window, shown in Figure 6-53, allows the selection of DHCP or static IP options for IPv4 and IPv6 addressing. Select the wanted options, enter the information as required, and then click Next.
After completing the previous step, the wizard cycles back to the Initial LAN Adapter window and preselects the next adapter in the list, as shown in Figure 6-54. Figure 6-54 FSM LAN adapter configuration continue option In our example, we are using a combined network topology and a single adapter, so additional IP addresses are not needed. 11.Select No by the question, “Do you want to configure another LAN adapter?”, as shown in figure Figure 6-54. Click Next to continue. 12.
Click Next to continue. Important: It is expected that the host name of the FSM is available on the domain name server. 13. You can enable the use of DNS services and add the addresses of one or more servers and a domain suffix search order. Enter the information, as shown in Figure 6-56, and click Next.
14. The final step of the setup wizard is shown in Figure 6-57. This window shows a summary of all configured options. To change a selection, click Back. If no changes are needed, click Finish. Figure 6-57 FSM startup wizard summary window
After Finish is clicked, the final configuration and setup proceeds automatically without any further input, as shown in Figure 6-58 through Figure 6-61 on page 217.
Figure 6-60 FSM startup Figure 6-61 FSM startup status 15.With startup completed, the local browser on the FSM also starts. A list of untrusted connection challenges opens. Chapter 6.
Accept these challenges by clicking I Understand the Risks and Add Exception, as shown in Figure 6-62 and Figure 6-63.
16.Click Confirm Security Exception, as shown in Figure 6-64. Figure 6-64 FSM security exception confirmation 17.With the security exceptions cleared, the Login window of the IBM Flex System Manager GUI opens.
Enter the user ID and credentials that were entered in the startup wizard, and click Log in, as shown in Figure 6-65.
A Getting Started window opens and reminds you that initial setup tasks must be completed (Figure 6-66). Figure 6-66 FSM Getting Started reminder The startup wizard and initial login are complete. The FSM is ready for further configuration and use. Our example uses a console from the remote console function of the IMMv2. At this time, a secure browser session can be started to the FSM. 6.5.
Direct Internet connection To set up and test the Internet connection, complete the following steps: 1. Starting from the Home page, click the Plug-ins tab. The Plug-ins window lists all of the managers that are available on the FSM, as shown in Figure 6-67.
2. From the list of managers, click Update Manager to open the window shown in Figure 6-68. Figure 6-68 FSM Update Manager
3. In the Common task box, click Configure settings to open the window shown in Figure 6-69. You can use this window to configure a direct Internet connection, or use the configuration settings to use an existing proxy server. Figure 6-69 FSM Update Manager Internet connection settings 4. With the settings complete, click Test Internet Connection to verify the connection.
The test attempts to make a connection to a target IBM server. During the test, a progress indicator opens, as shown in Figure 6-70. Figure 6-70 FSM testing Internet connection for Update Manager
A successful completion message opens (Figure 6-71). Figure 6-71 Successful Internet connect test for Update Manager After the test succeeds, the Update Manager can obtain update packages directly from IBM. If a direct Internet connection is not allowed for the FSM, complete the steps described in “Importing update files” to import the update files into Update Manager. Importing update files This section describes how to import files into Update Manager.
The scp command is used to copy the update files from a local workstation to the FSM; a sketch of this copy step follows. The update files on the local workstation are obtained from IBM Fix Central. From an ssh login, you have access only to the /home/userid directory. Additional subdirectories can be created and files copied to and removed from these subdirectories, but changing to a subdirectory with cd is a restricted operation. To import the update files using the GUI, complete the following steps: 1.
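The following commands are a minimal sketch of the copy step; the fix file name, user ID, FSM address, and subdirectory are examples only:
# copy update packages downloaded from IBM Fix Central to the FSM home directory
scp ibm_fw_example_update.tgz USERID@192.168.70.242:/home/USERID/updates/
# verify the files arrived (cd into the subdirectory is restricted, but ls works)
ssh USERID@192.168.70.242 ls -l /home/USERID/updates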
Figure 6-73 shows the window that opens. Two options are available: Check for updates using an Internet connection, or Import updates from the file system. For this example, we import the updates. Figure 6-73 Update Manager Acquire Updates selection 2. Select Import updates from the file system.
3. Enter the path for the updates that were manually copied to the IBM Flex System Manager, as shown in Figure 6-74. Figure 6-74 Import path to update files 4. Click OK, and the IBM Flex System Manager job scheduler opens, as shown in Figure 6-75. Figure 6-75 Starting the import updates job on FSM For this example, we left the default Run Now selected. Chapter 6.
5. Click OK at the bottom of the window to start the job. When the import updates job starts, the Acquire Updates window refreshes with a message that indicates the new job. The status of the running job can be monitored by clicking Display Properties, as shown in Figure 6-76. Figure 6-76 Update Manager 6.5.3 Initial chassis management with IBM Flex System Manager Most tasks in the IBM Flex System Manager can be accomplished by more than one method using the GUI.
3. Click IBM Flex System Manager Domain - Select Chassis to be Managed (Figure 6-77). Figure 6-77 FSM initial setup window
A window with a list of available chassis opens, as shown in Figure 6-78. Figure 6-78 FSM chassis selection for management 4. Select the box in front of the wanted chassis. 5. Click Manage. The Manage Chassis window opens.
The Manage Chassis window, shown in Figure 6-79, lists the selected chassis. A drop-down menu lists the available IBM Flex System Manager systems. Figure 6-79 FSM - manage chassis options 6. Ensure that the chassis and IBM Flex System Manager selections are correct. 7. Click Manage. This action updates the Message column from Waiting to Finalizing, then Managed, as shown in Figure 6-80 and Figure 6-81 on page 234. Figure 6-80 FSM manage chassis Step 1 Chapter 6.
Figure 6-81 FSM manage chassis Step 2 8. After the successful completion of the manage chassis process, click Show all chassis, as shown in Figure 6-82.
The resulting window is the original IBM Flex System Manager Management Domain window, with the target chassis as the managing IBM Flex System Manager (Figure 6-83). Figure 6-83 FSM with management domain updated With the Enterprise Chassis now managed by the IBM Flex System Manager, the typical management functions on a Power Systems compute node can be performed. 6.
More advanced functions, such as VMControl functionality, are also available in the IBM Flex System Manager, but are not described in this book. 6.6.1 Managing Power Systems resources The starting point for all of the functions is the Manage Power Systems Resources window. This part of the IBM Flex System Manager GUI can be started by the following steps. Most operations in the IBM Flex System Manager use the Home page as the starting point.
A new tab opens that shows a list of managed chassis (Figure 6-85). Figure 6-85 FSM Chassis Manager view 2. Click the name of the wanted chassis in the chassis name column (in this case, modular01). A window with a graphical view of the chassis opens (Figure 6-86). Figure 6-86 FSM Chassis Manager graphical view 3. Click the General Actions drop-down menu and click Manage Power Systems Resources. A new tab is created along the top edge of the GUI, and the Manage Power Systems Resources window opens.
Important: Readers who are familiar with the Systems Director Management Console will recognize this part of the IBM Flex System Manager GUI, as it is nearly identical in form and function for both applications. Requesting access to the Flexible Service Processor Typically, a Power Systems compute node is automatically discovered, but access must be requested to the Flexible Service Processor on these nodes. The following example shows a discovered node in a No Access state (Figure 6-87).
To request access, complete the following steps: 1. Right-click the wanted server object, as shown in Figure 6-88, and click Request Access. Figure 6-88 Requesting access to a Power Systems compute node
Figure 6-89 shows the next window, which steps you through the process. Notice that the User ID box is prepopulated with the Hardware Management Console (HMC) ID and is disabled. The Password box accepts any password for the HMC user ID and essentially sets the password with this first use. Important: Remember this password set for initial access, as it is needed if access to the node is requested again. Figure 6-89 Initial password set to Flexible Service Processor 2.
Figure 6-91 Completed access request 3. With the access request complete, click Close to exit the window and return to the Manage Power Systems Resources window, as shown in Figure 6-92. Many of the columns now contain information obtained from this limited communication with the Flexible Service Processor. Figure 6-92 Updated Power Systems resources - now with access Chapter 6.
Inventory collection In order for the FSM to accurately manage a Power Systems compute node, inventory information must be collected. To accomplish this task, perform the following steps: 1. Right-click the server object in the list, as shown in Figure 6-93.
2. Click Inventory/View and Collect Inventory to start the collection. In Figure 6-94, notice that, to the right of the Collect Inventory button, a time stamp of the last collection is displayed. In this case, inventory has never been collected for this node. Figure 6-94 Starting inventory collection 3. Click Collect Inventory to start the process. Nearly all processes in the IBM Flex System Manager application are run as jobs and can be scheduled. The scheduling can be immediate or in the future.
Figure 6-95 shows the job scheduler window that opens when the inventory collection process is started. Figure 6-95 Scheduling inventory collection job 4. Select Run Now and click OK at the bottom of the window. When the job starts, a notification is sent to the originating window with options to Display Properties or Close Message (Figure 6-96).
Clicking Display Properties opens the window shown in Figure 6-97. The job properties window has several tabs that can be used to review additional job details. The General tab shown indicates that the inventory collection job completed without errors. Figure 6-97 Inventory job status The Active and Scheduled Jobs tab and the View and Collect Inventory tabs near the top of the window can be closed.
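Inventory collection can also be driven from the FSM command line. The sketch below assumes the Systems Director-style smcli commands are available on the FSM; the profile name and system name are examples, and the exact options should be checked with the command help before use:
# list the discovered systems, then collect inventory for one of them
smcli lssys
smcli collectinv -p "All Inventory" -n p460-itso-1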
To open a virtual console, complete the following steps: 1. Open one of the windows that lists the virtual servers. You can accomplish this task in many ways. In the example that follows, we use Resource Explorer. Figure 6-98 shows how to open the console. Figure 6-98 Open a console on a virtual server from the FSM 2. Enter the password of the login ID used to access the FSM. 3. Enter the password to open the console.
4. The Terminal Console tab opens and shows a message and an OK button. Click OK to return to the Resource Explorer tab (or the tab you started the console from) (Figure 6-99). Figure 6-99 Validating with the FSM
If the virtual server ID for the console you are launching is the number 1, the console opens as shown in Figure 6-100.
If Serial Over LAN (SOL) is not disabled, you receive the error shown in Figure 6-101. To learn the process to disable SOL, see 6.6.3, “Disabling Serial Over LAN (SOL)” on page 249. Figure 6-101 Console open failure on virtual server ID 1 with SOL enabled 6.6.3 Disabling Serial Over LAN (SOL) When a Power Systems compute node is managed by an IBM Flex System Manager, you must disable SOL on the chassis. Important: There is an option to disable SOL at the individual compute node level.
The Login window opens (Figure 6-102). Figure 6-102 Chassis Management Module Login window 2. Log in using a valid user and password. If this is the first time you are logging in to the Chassis Management Module, the System Status window of the Chassis Management Module opens the Chassis tab. If this is not the first time you are logging in, you are returned to the place where you were when you logged off. 3. Click Chassis Management Compute Nodes from the menu bar in the CMM interface.
Disabling SOL To disable SOL on the chassis, complete the following steps, which are also shown in Figure 6-103: 1. Click the Settings tab. 2. Click the Serial Over LAN tab. 3. Clear the Serial Over LAN check box. 4. Click OK. The change takes effect as soon as the window closes. Figure 6-103 Disable SOL for all compute nodes from the Chassis Management Module
6.7 IBM Flex System Manager options and tasks In this section, we describe a subset of FSM options that you can use to manage your chassis, and the options available in FSM. 6.7.1 Initial setup tab After logging in, the Home page opens and shows the Initial setup tab (which is selected). This window has options for managing your environment, as shown in Figure 6-104.
This window provides access to the functions listed in the following sections. FSM - Check and update With this option, you can upgrade your IBM Flex System Manager code. If the FSM has access to the Internet, the upgrade codes can be downloaded directly and installed. If you do not have access to the Internet in the FSM, you can download the code package using another system, then upload it manually to the FSM.
After you select the chassis (as shown in Figure 6-105 on page 253), you can start managing it, and a window similar to Figure 6-106 opens. You see a front and back graphical view of the chassis. Click a component, and click Action to select actions applicable to that component (see the area marked “Selected component properties” in Figure 6-106). Actions include restart, power off, access to a command line or console, and so on.
Compute Nodes - Check and Upgrade Firmware After your compute node is discovered, you have several actions that you can take, as shown on Figure 6-107. Figure 6-107 Compute Nodes - Management You can choose the following actions: Deploy: Deploy an operating system. Discover: Discover operating systems, components, and I/O modules. After systems are discovered, you can request access and start managing them. Chapter 6.
Request Access: After your systems and components are discovered, you can request access to them with this option, as shown in Figure 6-108.
Collect inventory: After you discover your system and request access, you can collect and review the systems inventory. The systems inventory shows you information about hardware and operating systems for the systems you select. There are several filter and export options available, as shown in Figure 6-109. Figure 6-109 Collect inventory management Check for Updates: If your FSM is connected to the Internet, you can update your firmware and operating system directly from the Internet.
I/O modules - Check and Upgrade Firmware After your I/O modules are discovered, you can perform several operations on them: Request Access: In Figure 6-108 on page 256, you can request access to the discovered I/O modules. Collect Inventory: After you have access to your I/O modules, you can start collecting inventory on them, as shown in Figure 6-109 on page 257. Check for updates: If the FSM is connected to the Internet, you can update your I/O module firmware directly from the Internet.
6.7.2 Additional setup tab In this window, you have access to settings such as the IBM Electronic Service Agent™ (ESA) setup, LDAP setup, user setup and more, as shown in Figure 6-110. Figure 6-110 Additional Setup window Set up Electronic Service Agent (ESA) ESA is an IBM monitoring tool that reports hardware events to a support team automatically. You can use this setting to set up ESA on your IBM Flex System Manager. Chapter 6.
Configure FSM User Registry This setting connects the IBM Flex System Manager to an external LDAP server. Manage Users and Groups This setting opens the FSM access management area. From here, you can create, modify, and delete FSM accounts and groups. Automatic Checking for Updates Using this setting, your IBM Flex System Manager checks periodically for upgrades through the Internet and informs you when new upgrades are available.
Manage System Storage As part of the new total management approach, storage management is integrated into the FSM, as shown in Figure 6-111. After you discover your storage appliance and request access to it through the FSM, you can start managing it. Figure 6-111 Flex System Manager Storage Management Chapter 6.
Manage Network Devices You can use IBM Flex System Manager to manage your network and network devices while the network devices are discovered and have full access. The Network Control window is shown in Figure 6-112.
6.7.3 Plug-ins tab The plug-ins tab has options for managing the FSM, managing virtual servers, checking status, managing discovery, and more, as shown in Figure 6-113. Figure 6-113 shows only a portion of the available entries. Several of the plug-ins require licensing and are included on a trial basis. Figure 6-113 Plug-ins tab options Chapter 6.
Flex Systems Manager You can use this tab to monitor and manage the IBM Flex System Manager itself. This tab shows a graphic overview of all resources, indicating the state of selected resources (critical, warning, informational, or OK messages). Below the graphic is general information about the IBM Flex System Manager regarding uptime, processor use, last backup, active events, and so on.
You can also create shortcuts for functions you frequently use for IBM Flex System Manager, chassis management, managing Power System resources, the IBM Flex System Manager management domain, event log, backup and restore, and high availability settings (Figure 6-114). Figure 6-114 Flex System Manager - Management Overview
IBM Flex System Manager Server This plug-in manages the server side of the IBM Flex System Manager. It shows information about systems discovered, processor use, ports used, and general user information. Shortcuts are available for common management tasks, such as System Discovery, Collect Inventory, Find a task, Find a resource, Resource Explorer, and User/Group management. Discovery Manager You can use Discovery Manager to discover and connect to the systems at your site.
Status Manager This window shows a tactical overview with a pie chart of all resources and systems managed by IBM Flex System Manager, dividing the chart into critical, warning, informational, and OK statuses. As with the other plug-ins, it has quick access menus for frequently used functions, for example, health summary, view problems, monitors, and event logs.
Show and install updates: After obtaining the updates (using the Internet or manual download), you can view and install them using this setting. A firmware upgrade example for a Power Systems compute node, including captures, is shown in 8.1.5, “Firmware update using IBM Flex System Manager” on page 328.
Power Systems Management You can use this feature to assume the role of the Hardware Management Consoles and Systems Director Management Consoles to manage the Power Systems servers in your data center. From here you can create partitions, manage virtual servers, set up dual VIOS in an IBM Flex System environment, and access features, such as live partition mobility. The Power System management overview is shown in Figure 6-117. 1. Power Systems overview 3. Quick access menu 2.
Manage Resources: This option shows a menu with options ordered by hardware, virtualized environment, and operating systems. Select one (in our example, the IBM Flex System p460 Compute Node, as shown in Figure 6-118). The actions menu provides options for managing your power server, such as create virtual servers, manage virtual servers, manage systems plans, and power on.
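The same host and virtual server information can also be listed from the FSM command line, assuming the HMC-compatible CLI is available (see the note about SDMC scripts in 7.2.1). The managed-system name below is an example:
# list the managed Power Systems hosts and their states
lssyscfg -r sys -F name,state
# list the virtual servers on one host
lssyscfg -r lpar -m Server-7895-42X-SN106011B -F name,lpar_id,state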
System z Management You can use this feature to manage the System z systems in your data center. It is similar to the Power Systems Management feature. VMControl Enterprise Edition With VMControl, you can manage all of your virtualized environments, from KVM to VMware servers. All of these items can be managed from this centralized interface in IBM Flex System Manager (Figure 6-120). Figure 6-120 VMControl Management
Chassis Management You can use this feature to show a tactical overview of the chassis and chassis components with problems. (For information about chassis that are not in compliance, see “Update Manager” on page 267.) Shortcuts are available for systems discovery, view, collect inventory, and so on. Systems x Management You can use this feature to manage the System x servers in your data center. It is similar to the Power Systems Management feature.
6.7.4 Administrator tab From the Administration tab, you can access all IBM Flex System Manager configurations for tasks, such as shut down, restart, power off, upgrade firmware, set up network, set up users, and backup and restore. See Figure 6-121. Figure 6-121 Administration
6.7.5 Learn tab From the Learn tab, you can access IBM Flex System Manager online manuals.
Chapter 7. Virtualization
If you create virtual servers (also known as logical partitions (LPARs)) on your Power Systems compute node, you can consolidate your workload to deliver cost savings and improve infrastructure responsiveness. As you look for ways to maximize the return on your IT infrastructure investments, consolidating workloads and increasing server utilization become an attractive proposition.
7.1 PowerVM PowerVM delivers industrial-strength virtualization for AIX, IBM i, and Linux environments on IBM POWER processor-based systems. Power Systems servers, coupled with PowerVM technology, are designed to help clients build a dynamic infrastructure, which reduces costs, manages risk, and improves service levels. 7.1.
Shared storage pools You can use VIOS 2.2 to create storage pools that can be accessed by VIOS partitions that are deployed across multiple Power Systems servers. Therefore, an assigned allocation of storage capacity can be efficiently managed and shared. The December 2011 Service Pack enhances capabilities by enabling four systems to participate in a Shared Storage Pool configuration. This configuration can improve efficiency, agility, scalability, flexibility, and availability.
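As an illustration of how a shared storage pool is set up from the VIOS restricted shell, the following sketch uses example cluster, pool, disk, and host names; the exact flags should be verified against the VIOS 2.2 command reference before use:
# create a cluster and shared storage pool (hdisk2 = repository, hdisk3/hdisk4 = pool disks)
cluster -create -clustername itsocluster -repopvs hdisk2 \
  -spname itsosp -sppvs hdisk3 hdisk4 -hostname vios1
# carve a 20 GB logical unit from the pool and map it to a client vhost adapter
mkbdsp -clustername itsocluster -sp itsosp 20G -bd lu_client1 -vadapter vhost0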
IBM PowerVM Workload Partitions Manager™ for AIX Version 2.2 has the following enhancements: When used with AIX V6.1 Technology Level 6, the following support applies: – Support for exporting a VIOS SCSI disk into a Workload Partition (WPAR). There is compatibility analysis and mobility of WPARs with VIOS SCSI disk. In addition to Fibre Channel devices, VIOS SCSI disks can be exported into a WPAR. – WPAR Manager Command-Line Interface (CLI).
POWER Hypervisor technology is integrated with all IBM POWER servers, including the Power Systems compute nodes. The hypervisor orchestrates and manages system virtualization, including creating logical partitions and dynamically moving resources across multiple operating environments. POWER Hypervisor is a basic component of the system firmware that is layered between the hardware and the operating system.
The minimum amount of physical memory to create a partition is the size of the system logical memory block (LMB). The default LMB size varies according to the amount of memory configured in the system, as shown in Table 7-1.
Virtual Ethernet has the following major features: Virtual Ethernet adapters can be used for both IPv4 and IPv6 communication and can transmit packets up to 65,408 bytes in size. Therefore, the maximum transmission unit (MTU) for the corresponding interface can be up to 65,394 (65,408 minus 14 for the header) in the non-VLAN case, and up to 65,390 (65,408 minus 14, minus 4) if VLAN tagging is used. The POWER Hypervisor presents itself to partitions as a virtual 802.1Q-compliant switch.
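On the VIOS, the bridge between a virtual Ethernet adapter and a physical port is implemented as a Shared Ethernet Adapter (SEA). A minimal SEA failover sketch follows, assuming ent0 is the physical port, ent4 the bridging virtual adapter, and ent5 the control-channel adapter; device names will differ on your system:
# create the SEA with failover enabled between the two VIOS
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 \
  -attr ha_mode=auto ctl_chan=ent5
# confirm the network mappings on this VIOS
lsmap -all -net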
Enabling NPIV: To enable NPIV on a managed system, you must have VIOS V2.1 or later. NPIV is supported only on 8 Gb Fibre Channel and Converged Network (Fibre Channel over Ethernet, FCoE) adapters on a Power Systems compute node. You can configure virtual Fibre Channel adapters only on client logical partitions that run the following operating systems: AIX V6.1 Technology Level 2, or later; AIX 5L V5.3 Technology Level 9, or later; IBM i V6.1.1, V7.
Figure 7-1 shows the connections between the client partition virtual Fibre Channel adapters and external storage.
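On the VIOS side, NPIV is configured by mapping a server virtual Fibre Channel adapter to an NPIV-capable physical port. The device names below are examples:
# list the physical FC ports that support NPIV and their available WWPN counts
lsnports
# map the virtual FC host adapter to physical port fcs0, then verify
vfcmap -vadapter vfchost0 -fcp fcs0
lsmap -all -npiv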
Each partition must have access to a system console. Tasks such as operating system installation, network setup, and certain problem analysis activities require a dedicated system console. The POWER Hypervisor provides the virtual console using a virtual TTY or serial adapter and a set of Hypervisor calls to operate on it. Virtual TTY does not require the purchase of any additional features or software, such as the PowerVM Edition features.
Virtual server name   Processor/UnCap/Weight   Memory
lpar2                 .5/N/-                   1 GB
lpar3                 .5/N/-                   1 GB
vios3                 1/Y/200                  2 GB
vios4                 1/Y/200                  2 GB
lpar1                 3/Y/100                  4 GB
lpar2                 2/Y/50                   2 GB
lpar3                 2/Y/50                   2 GB
lpar4                 1.5/N/-                  1 GB
lpar5                 1.5/N/-                  1 GB
lpar6                 1.5/N/-                  1 GB
node2
Physical adapters
For the VIOS partitions, planning for physical adapter allocation is important, because the VIOS provides virtualized access through the physical adapters to network or disk resources.
Virtual adapters Assigning and configuring virtual adapters requires more planning and design. For virtual Ethernet adapters, the VLANs that the virtual servers require access to must be considered. The VIOS provides bridging from the virtual Ethernet adapter to the physical. Therefore, the virtual Ethernet adapter in the VIOS must be configured with all of the VLANs that are required for the virtual servers in the node. For virtual storage access, either virtual SCSI or NPIV can be used.
7.2.1 Using the CLI Many integrators and system administrators make extensive and efficient use of the CLI, rather than a graphical interface, for their virtual server creation and administration tasks. Tasks can be scripted, and they often complete faster from the command line. Scripts: In many cases, existing scripts that were written for use on a Systems Director Management Console can run unchanged on the FSM.
Verification of success A successful command produces a prompt with no message displayed.
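For example, a VIOS virtual server similar to the one built with the GUI in the next section could be created with the HMC-style mksyscfg command. This is a sketch only; the managed-system name and the attribute values are examples, and a successful run simply returns to the prompt:
# create a shared-processor, uncapped VIOS virtual server
mksyscfg -r lpar -m Server-7895-42X-SN106011B -i "name=vios1,lpar_env=vioserver,profile_name=default,min_mem=1024,desired_mem=2048,max_mem=4096,proc_mode=shared,min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap,uncap_weight=200"
# verify the new virtual server exists
lssyscfg -r lpar -m Server-7895-42X-SN106011B -F name,lpar_id,state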
We access the FSM remotely using a browser. Complete the following steps: 1. Open a browser and point the browser to the following URL (where system_name is the host name or IP address of the FSM node): https://system_name:8422 Port number: The port you use may be different than the port we use in our examples. A login window opens, as shown in Figure 7-2. Figure 7-2 IBM Flex System Manager login window 2. Enter a valid FSM user ID and password, and click Log in. The Welcome window opens. Chapter 7.
3. Click Home to open the main window, as shown in Figure 7-3.
4. Click the Plug-ins tab to display the list of installed plug-ins. The list of installed plug-ins opens, as shown in Figure 7-4. Figure 7-4 IBM Flex System Manager Plug-ins tab - highlighting the Power Systems Management plug-in Chapter 7.
5. Click the Power Systems Management plug-in to display the Power Systems Management main window, shown in Figure 7-5. Figure 7-5 Power Systems Management main window Creating the VIOS virtual server using the GUI When you open the Power Systems Management main window shown in Figure 7-5, you see choices to manage hosts and virtual servers. In this section, we describe how to create the VIOS virtual server. Creating the virtual server To create the virtual server, complete the following steps: 1.
2. Select the compute node. If more hosts are managed by this Flex System Manager, select the one on which the VIOS virtual server is created. 3. Click Actions System Configuration Create Virtual Server to start the wizard (Figure 7-6). Figure 7-6 Create a virtual server menu option Chapter 7.
The window shown in Figure 7-7 opens.
Figure 7-7 Setting the VIOS virtual server name and ID
– Enter the virtual server name. We use the name vios1.
– Enter the server ID. We give our VIOS an ID of 1.
– Specify the Environment option to identify this environment as a VIOS.
4. Click Next.
Memory and processor settings
The next task is to choose the amount of memory for the VIOS virtual server.
1. Change the value to reflect the wanted amount of memory in gigabytes. Decimal fractions can be specified to assign memory in megabyte increments. This memory is the amount of memory that the hypervisor attempts to assign when the VIOS is activated. We assign the VIOS 2 GB of memory.
Minimum and maximum values: You cannot specify minimum or maximum settings. The value specified here is the wanted value. Minimum and maximum values can be edited after the virtual servers are created, as described in 7.
No memory or processing resources are committed. In this step, and in the rest of the steps for defining the virtual server, we are defining only the resources that are allocated to this virtual server after it is activated. 3. Click Next to move to the virtual adapter definitions. Virtual Ethernet In this task, the process is repeated for each virtual adapter to be defined on the VIOS, but the characteristics differ with each adapter type. The order in which the adapters are created does not matter.
Complete the following steps:
1. Define the bridging virtual Ethernet adapter. Click Create Adapter, which opens the window where you create the bridging virtual Ethernet adapter, as shown in Figure 7-11.
Figure 7-11 Create Adapter window
2. Enter the characteristics for the bridging virtual Ethernet adapter as follows: – It is standard practice to skip the first 10 adapter IDs. Start by defining the bridging virtual Ethernet adapter with an ID of 11. – Assume that all the packets are untagged, so leave the Port Virtual Ethernet option set to 1, and leave the IEEE 802.1Q capable adapter option unset. – This adapter is used in a Shared Ethernet Adapter definition, so update that section.
4. Click Add to add more virtual Ethernet adapters, and a new virtual Ethernet adapter window opens.
Figure 7-13 Create Adapter window
5. Create an additional virtual Ethernet adapter to use as the control channel for shared Ethernet adapter failover:
a. Make the adapter ID 12 and the VLAN 99, leaving all other fields as they are, to create the control channel virtual Ethernet adapter.
b. Click OK to return to the virtual Ethernet adapter main window.
6. Review the virtual Ethernet adapters that are defined, and click Next to save the settings and move on to the Virtual Storage Adapters window. Virtual storage Here we show an example of creating a virtual SCSI adapter for the VIOS virtual server. When creating a virtual Fibre Channel adapter, the same windows shown in “Virtual Ethernet” on page 296 are shown. However, change the Adapter type field to Fibre Channel. Complete the following steps: 1. Click Create adapter...
2. Complete the fields in Figure 7-14 on page 300 as follows: – Specify 13 as the Adapter ID. – To create a virtual SCSI relationship between this VIOS and a client virtual server, specify SCSI as the Adapter type. Either choose an existing virtual server and supply an ID in the Connecting adapter ID field, or enter a new ID and connecting adapter ID for a virtual server that is not defined.
Figure 7-15 shows the physical location codes on a p460. The location codes shown in the configuration menus contain a prefix in the form Utttt.mmm.ssssss, where tttt is the machine type, mmm is the model, and ssssss is the 7-digit serial number. For example, an EN4054 4-port 10Gb Ethernet Adapter in a p460 is represented as U78AF.001.ssssss-P1-C34. An FC3172 2-port 8Gb FC Adapter is represented as U78AF.
Figure 7-16 shows the expansion card location codes for the p260:
1  Un-P1-C18
2  Un-P1-C19
Figure 7-16 p260 adapter location codes
The storage controller, if disks were ordered, has a location code of P1-T2 on both models. The USB controller has a location code of P1-T1 on both models. For our VIOS, we assign all four ports on an Ethernet expansion card and the storage controller. Complete the following steps:
1. Choose the expansion card and storage controller from the list in Figure 7-17.
2. Click Next to proceed to the Summary window. Review the summary to ensure that the VIOS virtual server is created as you expect. If you need to make corrections, go back to the section where the correction must be made and change the option. 3. Click Finish to complete the definition of the VIOS virtual server. 4. To verify that the virtual server was defined, return to the Power Systems Management tab and click the Virtual I/O Server link under Virtual Servers in the Manage Resources section.
7.3.1 Using the IBM Flex System Manager
To change the values using the web interface, complete the following steps:
1. Select the newly created VIOS and click Actions → System Configuration → Manage Profiles, as shown in Figure 7-18.
Figure 7-18 Manage VIOS profiles to change settings
A window opens and shows all of the profiles for the selected virtual server.
2. Select the profile to edit and click Actions → Edit.
3. Click the Processors tab to access the processor settings that were made by the wizard. The window shown in Figure 7-19 opens. Options can be changed in this window to the values planned for the VIOS virtual server.
Note the values that were set by the wizard:
– The desired virtual processor count is 10 (as specified when creating the virtual server). This count translates to a desired processing unit setting of 1.0.
– The maximum virtual processor count is 20. The maximum count is always the desired count plus 10. The maximum processing units setting is also set to 20.
– The minimum virtual processors setting is set to 1, with the processing units set to 1.
– The sharing mode is set to uncapped with a weight of 128.
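If you prefer to adjust these profile values from the command line instead of the Manage Profiles windows, an HMC-style chsyscfg command is one possible approach. The following is a sketch only; the managed-system name, profile name, and target values are placeholders, and the attribute names accepted by your FSM level should be verified first:

chsyscfg -r prof -m Server-7895-42X-SN1058008 -i "name=OriginalProfile,lpar_name=vios1,min_procs=2,desired_procs=10,max_procs=20,min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=4.0,sharing_mode=uncap,uncap_weight=192"

A successful chsyscfg returns silently; rerun lssyscfg -r prof against the same managed system to display the profile and confirm the new values.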
7.4 Creating an AIX or Linux virtual server
Creating an AIX or Linux virtual server is similar to creating a VIOS virtual server. Use the same process shown in 7.2, “Creating the VIOS virtual server” on page 286, but with some differences. The differences between creating a VIOS and an AIX or Linux virtual server are:
– The Environment option in the initial window is set to AIX/Linux.
– No physical I/O adapters need to be defined if the virtual server uses only virtualized I/O.
Creating the virtual server for an IBM i installation is similar to the process for creating a VIOS. Complete the following steps:
1. Set the Environment option to IBM i, as shown in Figure 7-20.
Figure 7-20 Create an IBM i virtual server
2. Click Next to go to the Memory settings. The window shown in Figure 7-21 opens.
Figure 7-21 IBM i virtual server memory
3. Specify the wanted quantity of memory. Click Next to go to the processor settings. The window shown in Figure 7-22 opens. Figure 7-22 IBM i virtual server processor settings 4. Choose a quantity of processors for the virtual server and click Next to create the virtual Ethernet adapters. The window shown in Figure 7-23 opens. Figure 7-23 IBM i virtual server settings for virtual Ethernet With the VIOS already defined, the FSM defines a virtual Ethernet on the same VLAN as the SEA on the VIOS.
Important: These steps are critical, because the IBM i virtual server must be defined to use only virtual resources through a VIOS. At the least, a virtual Ethernet and a virtual SCSI adapter must be defined in the IBM i virtual server. 5. Click Next to proceed to the Virtual Storage definitions, as shown in Figure 7-24. Figure 7-24 IBM i virtual server manual virtual storage definition 6.
7. Click Create Adapter. The window shown in Figure 7-26 opens. Figure 7-26 IBM i virtual server - create virtual SCSI adapter 8. Complete the fields in this window as follows: – Choose an adapter ID. – Specify SCSI Client for the adapter type. – Specify a virtual SCSI adapter on the VIOS as the Connecting virtual server.
9. Click OK to create this virtual SCSI adapter and return to the main Virtual Storage adapter window, as shown in Figure 7-27.
Figure 7-27 IBM i virtual server settings for virtual SCSI adapter
10. This adapter is the only virtual SCSI adapter we create, so click Next to proceed to the physical adapter settings, as shown in Figure 7-28.
Figure 7-28 IBM i virtual server physical adapter settings
Important: Do not forget to configure the virtual SCSI server adapter on the VIOS that this virtual SCSI client adapter refers to. In addition, disks must be provisioned to the virtual SCSI server adapter in the VIOS to be used by the IBM i virtual server (operating system and data). To use a virtual optical drive from the VIOS for the IBM i operating system installation, the installation media ISO files must be copied to the VIOS, and the virtual optical devices must be created. 11.
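A minimal sketch of this VIOS-side setup from the VIOS command line follows; the hdisk number, repository size, ISO file name, and virtual device names are examples only and depend on your environment:

mkvdev -vdev hdisk2 -vadapter vhost0
mkrep -sp rootvg -size 20G
mkvopt -name ibmi_install -file /home/padmin/I_BASE_01.iso -ro
mkvdev -fbo -vadapter vhost0
loadopt -disk ibmi_install -vtd vtopt0

The first command maps a disk to the virtual SCSI server adapter (vhost0) that the IBM i client adapter connects to, mkrep and mkvopt create the virtual media repository and import the installation ISO, and the last two commands create a file-backed virtual optical device and load the ISO into it.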
7.6 Preparing for a native operating system installation If you need the entire capacity of the Power Systems compute node, an operating system can be installed natively on the node. The configuration is similar to the setup for a partitioned node, but all of the resources are assigned to a single virtual server. The operating system can then be installed to that single virtual server, using the methods described in Chapter 8, “Operating system installation” on page 317. 7.6.
– Set the Environment to AIX/Linux. – Select Assign all resources to this virtual server. This is the key selection. 2. Click Next. All the resources are assigned to this virtual server. The Summary window opens, as shown in Figure 7-31. Figure 7-31 Summary window when creating full node server 3. Click Finish to complete the creation of the single partition.
Chapter 8. Operating system installation
In this chapter, we describe how to update firmware and install various operating systems on the compute node. We cover the following topics in this chapter:
– Firmware updates
– Methods to install operating systems
– Installation procedures
– Installing AIX
– Installing Red Hat Enterprise Linux
– Installing SUSE Linux Enterprise Server
– Installing IBM i
8.1 Firmware updates
IBM periodically makes firmware updates available for the compute node, the management module, or expansion cards in the compute node. In a compute node or chassis environment, there are multiple components to consider when planning firmware updates. In some cases, the chassis and infrastructure components can be updated concurrently, without disrupting virtual server operations. In this chapter, we describe methods for updating the Power Systems compute nodes.
Figure 8-1 shows the update firmware menu in IBM Flex System Manager. Firmware updates done using IBM Flex System Manager can be nondisruptive (concurrent) with respect to server operations, so that a server reboot is not required. Only updates within a release can be, but are not guaranteed to be, concurrent.
– By using the firmware update function of the AIX diagnostics
– By using the firmware update function of the stand-alone diagnostics boot image
Installation of firmware: Before the installation of the new firmware to the temporary side (firmware backup area) begins, the contents of the temporary side are copied to the permanent side. After firmware installation begins, the previous level of firmware on the permanent side is no longer available. Firmware updates can take time to load.
– Install the firmware by running update_flash (on AIX):
cd /tmp/fwupdate
/usr/lpp/diagnostics/bin/update_flash -f 01EA3xx_yyy_zzz
– Install the firmware by running update_flash (on Linux):
cd /tmp/fwupdate
/usr/sbin/update_flash -f 01EA3xx_yyy_zzz
– Install the firmware by running ldfware (on VIOS):
cd /tmp/fwupdate
ldfware -file 01EA3xx_yyy_zzz
8. Verify that the update installed correctly, as described in 8.1.4, “Verifying the system firmware levels” on page 325.
8.1.
Figure 8-2 shows the diagnostic post-boot system console definition. ******* Please define the System Console. ******* Type a 1 and press Enter to use this terminal as the system console. Pour definir ce terminal comme console systeme, appuyez sur 1 puis sur Entree. Taste 1 und anschliessend die Eingabetaste druecken, um diese Datenstation als Systemkonsole zu verwenden. Premere il tasto 1 ed Invio per usare questo terminal come console.
2. Accept the copyright notice, and then choose the task selection menu entry shown in Figure 8-3.
FUNCTION SELECTION
1  Diagnostic Routines
   This selection will test the machine hardware. Wrap plugs and other advanced functions will not be used.
2  Advanced Diagnostics Routines
   This selection will test the machine hardware. Wrap plugs and other advanced functions will be used.
3  Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.
3. Select Microcode Tasks, as shown in Figure 8-4. TASKS SELECTION LIST 801004 From the list below, select a task by moving the cursor to the task and pressing 'Enter'. To list the resources for the task highlighted, press 'List'. [MORE...
4. Select Download Latest Available Microcode, as shown in Figure 8-5.
Microcode Tasks                                      801004
Move cursor to desired item and press Enter.
  Display Microcode Level
  Download Latest Available Microcode
  Generic Microcode Download
F1=Help   F3=Previous Menu   F4=List   Esc+0=Exit   Enter
Figure 8-5 Download (install) the latest available microcode
5. Insert the CD-ROM with the microcode image, or select the virtual optical device that points to the microcode image.
To verify the system firmware levels, complete the following steps:
1. Start the in-band diagnostics program by running the following command:
diag
2. From the Function Selection menu, select Task Selection and press Enter, as shown in Figure 8-3 on page 323.
3. From the Tasks Selection List menu, select Microcode Tasks → Display Microcode Level and press Enter, as shown in Figure 8-6.
Microcode Tasks 801004
Move cursor to desired item and press Enter.
4. Select the system object sys0 and press F7 to commit, as shown in Figure 8-7. RESOURCE SELECTION LIST 801006 From the list below, select any number of resources by moving the cursor to the resource and pressing 'Enter'. To cancel the selection, press 'Enter' again. To list the supported tasks for the resource highlighted, press 'List'. Once all selections have been made, press 'Commit'. To avoid selecting a resource, press 'Previous Menu'.
The Display Microcode Level menu opens. The top of the window shows the system firmware level for the permanent and temporary images and the image that the compute node used to start (Figure 8-8). DISPLAY MICROCODE LEVEL IBM,7895-42X 802811 The current permanent system firmware image is AF740_051 The current temporary system firmware image is AF740_051 The system is currently booted from the temporary firmware image. Use Enter to continue.
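If the operating system is already running, the firmware levels can also be displayed from the command line. The exact output format varies by platform and release, so this is offered only as a quick cross-check alongside the diagnostics method described above:

lsmcode -A    (on AIX: lists the system firmware and adapter microcode levels)
lsmcode       (on Linux with the powerpc-utils package: shows the current system firmware level)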
Figure 8-9 shows the window where you select the type of update to search for, download, and apply. For this procedure, we use Power System Firmware. Complete the following steps:
1. Select PowerlOFW from the list of available update types, as shown in Figure 8-9.
Figure 8-9 Select the type of update
2. Click Add to add your selection to the list of selected update types.
3. Select Power System Firmware from the list of selected update types. Figure 8-10 shows the firmware update that is ready to install. Figure 8-10 List of selected firmware 4. Review and confirm the list of updates that will be installed on the selected systems, as shown in Figure 8-11. After you confirm this list, the update begins and is concurrent (the system does not require a restart to activate the new firmware).
5. If necessary, review the installation log, as shown in Figure 8-12, to determine the status of the installation.
Figure 8-12 Log for installation update
8.2 Methods to install operating systems
The Power Systems compute node provides several methods for installing and deploying your operating system images. We cover the following methods in this section:
– NIM installation
– Optical media installation
– TFTP network installation
– Cloning methods
Installation method compatibility among operating systems is shown in Table 8-1.
3. In the next window, respond to the prompt for a machine name and the type of network connectivity you are using. The system populates the remaining fields and displays the screen shown in Figure 8-13. Define a Machine Type or select values in entry fields. Press Enter AFTER making all desired changes.
4. In the screen shown in Figure 8-13 on page 333, enter the remainder of the information required for the node. There are many options in this window, but you do not need to set them all to set up the installation. Most importantly, set the correct gateway for the machine. With your machine created in your NIM server, assign it the resources for the installation.
6. Select Allocate Network Install Resources, as shown in Figure 8-14. A list of available machines opens.
Manage Network Install Resource Allocation
Move cursor to desired item and press Enter.
  List Allocated Network Install Resources
  Allocate Network Install Resources
  Deallocate Network Install Resources
F1=Help    F2=Refresh    F3=Cancel    F8=Image
F9=Shell   F10=Exit      Enter=Do
Figure 8-14 Select Allocate Network Install Resources
7. Choose the machine you want to install (in this example, we use 7989AIXtest). A list of the available resources to assign to that machine opens, as shown in Figure 8-15. Manage Network Install Resource Allocation Mo+--------------------------------------------------------------------------+ | Target Name | | | | Move cursor to desired item and press Enter.
9. Confirm your resource selections by running smit nim_mac_res and selecting List Allocated Network Install Resources, as shown in Figure 8-16.
Manage Network Install Resource Allocation
Move cursor to desired item and press Enter.
13.Select the option to perform a Base Operating System (BOS) installation by selecting bos_inst - perform a BOS installation, as shown in Figure 8-17. +--------------------------------------------------------------------------+ | Operation to Perform | | | | Move cursor to desired item and press Enter. Use arrow keys to scroll.
14.Confirm your machine selection and option selection in the next window, and select additional options to further customize your installation, as shown in Figure 8-18. Perform a Network Install Type or select values in entry fields. Press Enter AFTER making all desired changes.
15.Reboot the server and, during reboot, press the 1 key to access SMS mode, as shown in Figure 8-19.
16.Select option 1 (SMS Menu) to open the SMS Main Menu, as shown in Figure 8-20. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Main Menu 1. Select Language 2. Setup Remote IPL (Initial Program Load) 3. Change SCSI Settings 4. Select Console 5.
18.Select the adapter to use for the installation, as shown in Figure 8-21. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------NIC Adapters Device Location Code Hardware Address 1. Interpartition Logical LAN U7895.42X.
20.Select option 1 (BOOTP) as the network service to use for the installation, as shown in Figure 8-23. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Network Service. 1. BOOTP 2.
22.Perform system checks, for example, ping or adapter speed, to verify your selections, as shown in Figure 8-25. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------IP Parameters Interpartition Logical LAN: U7895.42X.1058008-V5-C4-T1 1. Client IP Address [9.27.20.216] 2. Server IP Address [9.42.241.191] 3. Gateway IP Address [9.27.20.1] 4. Subnet Mask [255.255.252.
24.Select option 5 (Select boot options) to display the Multiboot screen, as shown in Figure 8-26, and select option 1 (Select Install/Boot Device). Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Multiboot 1. Select Install/Boot Device 2. Configure Boot Device Order 3. Multiboot Startup 4. SAN Zoning Support 5.
26.After selecting this option, you are prompted again for the network service as you were in Figure 8-23 on page 343. Make the same selection here (option 1, (BOOTP)). 27.Select the same network adapter that you selected for Figure 8-21 on page 342), as shown in Figure 8-28. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Device Device Current Device Number Position Name 1.
28.On the Select Task screen, select option 2 (Normal Mode Boot), as shown in Figure 8-29. SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Task Interpartition Logical LAN ( loc=U7895.42X.1058008-V5-C4-T1 ) 1. 2. 3.
30.Respond to the prompt to confirm the exit. In the next screen, select Yes. Your installation displays a screen similar to the one shown in Figure 8-30. chosen-network-type server IP client IP gateway IP device MAC address loc-code = = = = = = = ethernet,auto,none,auto 9.42.241.191 9.27.20.216 9.27.20.1 /vdevice/l-lan@30000004 42 db fe 36 16 4 U7895.42X.1058008-V5-C4-T1 BOOTP request retry attempt: 1 TFTP BOOT --------------------------------------------------Server IP.....................9.42.241.
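The SMIT panels used in this procedure ultimately run NIM commands, so the resource allocation and BOS installation can also be scripted directly on the NIM master. The following is a sketch under the assumption that an lpp_source named lpp_source_71 and a SPOT named spot_71 are already defined; the machine name 7989AIXtest is the one used in this example:

nim -o bos_inst -a source=rte -a lpp_source=lpp_source_71 -a spot=spot_71 -a accept_licenses=yes -a no_client_boot=yes 7989AIXtest
lsnim -l 7989AIXtest

The no_client_boot attribute leaves the network boot to be started manually from SMS, as shown in the preceding steps, and lsnim -l confirms that the machine is ready for the BOS installation.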
Note: IBM i installation can be performed from optical media. The IBM i process is different from what is described here for AIX and Linux. For more information, see Section 5 of Getting Started with IBM i on an IBM Flex System compute node, available at:
http://www.ibm.com/developerworks/
To perform an optical media installation, you need an external USB drive (not provided with either the chassis or the Power Systems compute node) attached to your Power Systems compute node.
The window shown in Figure 8-32 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Main Menu 1. Select Language 2. Setup Remote IPL (Initial Program Load) 3. Change SCSI Settings 4. Select Console 5.
5. Select option 1 (Select Install/Boot Device). The window shown in Figure 8-34 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Device Type 1. Diskette 2. Tape 3. CD/DVD 4. IDE 5. Hard Drive 6. Network 7.
6. Select the device type, in this case, option 3 (CD/DVD). The window shown in Figure 8-35 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Media Type 1. SCSI 2. SSA 3. SAN 4. SAS 5. SATA 6. USB 7. IDE 8. ISA 9.
7. Select option 6 (USB) media type. The window shown in Figure 8-36 opens and shows the list of available USB optical drives. In our example, a virtual optical drive is shown as item 1. What you see depends on the drive you have connected.
Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Select Media Adapter
1. U7895.42X.1058008-V6-C2-T1 /vdevice/v-scsi@30000002
2.
8. Select your optical drive. The window shown in Figure 8-37 opens. SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Task Interpartition Logical LAN ( loc=U7895.42X.1058008-V6-C4-T1 ) 1. 2. 3.
SUSE Linux Enterprise Server 11 The following steps pertain to SLES 11: 1. Obtain the distribution ISO file, and copy it to a work directory of the installation server. We configure a Network File System (NFS) server (this server can be the installation server itself or another server) and mount this shared directory from the target virtual server to unload the software. 2. On the installation server, install the tftp and the dhcpd server packages (we use dhcpd only to run bootp for a specific MAC address).
The MAC address shown in Figure 8-39 is the Hardware Address. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------NIC Adapters Device Location Code Hardware Address 1. Interpartition Logical LAN U8406.71Y.
5. On the installation server, configure the dhcpd.conf file and, assuming it is the NFS server too, the /etc/exports file. The dhcpd.conf file is shown in Figure 8-40, where we must replace XX.XX.XX.XX.XX.XX and the network parameters with our MAC and IP addresses.
always-reply-rfc1048 true;
allow bootp;
deny unknown-clients;
not authoritative;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
subnet 10.1.0.0 netmask 255.255.0.0 {
  host sles11 {
    fixed-address 10.1.2.
8. On the installation server or virtual server, start the dhcpd and nfsd services.
9. On the target virtual server, start netboot, as shown in Figure 8-43.
Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved.
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5.
10.Select option 5 (Select Boot Options). The window shown in Figure 8-44 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Multiboot 1. Select Install/Boot Device 2. Configure Boot Device Order 3. Multiboot Startup 4. SAN Zoning Support 5.
11.Select option 1 (Select Install/Boot Device). The window shown in Figure 8-45 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Device Type 1. Diskette 2. Tape 3. CD/DVD 4. IDE 5. Hard Drive 6. Network 7.
12.Select option 6 (Network) as the boot device. The window shown in Figure 8-46 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Network Service. 1. BOOTP 2.
14.Select the network adapter and the normal mode boot, and the installation starts loading the yaboot.ibm boot loader through the network, as shown in Figure 8-47.
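As a consolidated sketch of the installation-server side of this SLES procedure (steps 2, 5, and 8), the commands might look like the following. The package names are the usual SLES ones but can differ by release (for example, atftpd instead of tftp), the export path is an example, and the boot files placed in the tftpboot directory (the yaboot.ibm loader mentioned in step 14 and its configuration) are not shown here:

zypper install tftp dhcp-server nfs-kernel-server
echo "/install/sles11 *(ro,no_root_squash,sync)" >> /etc/exports
rcdhcpd start
rcnfsserver start

The exported directory holds the unpacked installation tree that the dhcpd.conf entries point the client to, and the two rc scripts start the BOOTP/DHCP and NFS services referred to in step 8.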
Tip: The yaboot executable is named simply yaboot. We can rename it, for example, to yaboot.rh61, to avoid conflicts in the tftpboot directory.
4. The netboot image is larger than 65,500 512-byte blocks and cannot be used because of a limitation of tftpd. We must boot the vmlinuz kernel and use the ramdisk image. Copy the two files from the ppc/ppc64 directory of the DVD to the tftpboot directory of the installation server.
5.
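A sketch of the copy in step 4 on the installation server follows; the ISO name and mount point are placeholders, and the ramdisk file name (ramdisk.image.gz here) should be checked against the actual ppc/ppc64 directory of your RHEL media:

mount -o loop /iso/RHEL6-Server-ppc64-DVD.iso /mnt
cp /mnt/ppc/ppc64/vmlinuz /tftpboot/
cp /mnt/ppc/ppc64/ramdisk.image.gz /tftpboot/
umount /mnt

With these two files in the tftpboot directory, the yaboot configuration can reference the vmlinuz kernel and the ramdisk image instead of the oversized netboot image.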
8.2.4 Cloning methods
There are two cloning methods available for an AIX installation. The most common method of cloning is to create a mksysb image on one machine and restore it on the cloned machine. This method clones your entire operating system (rootvg), but not any non-rootvg volume groups or file systems. This method is a fast way of cloning your AIX installation, and it can be performed using tape devices, DVD media, or a NIM installation. Ensure that the IP address is not cloned in this process.
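A minimal sketch of the mksysb approach follows; the backup file name and the NIM resource name are examples, and the target file system must have enough free space for the image:

mksysb -i /backup/p460node1.mksysb
nim -o define -t mksysb -a server=master -a location=/backup/p460node1.mksysb p460node1_mksysb

The -i flag regenerates the image.data file before the backup is taken. The second command, run on the NIM master, defines the image as a mksysb resource so that it can be allocated to the target machine and restored with a NIM BOS installation (source=mksysb instead of source=rte).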
To install AIX using the NIM lpp_source method, complete the following steps: 1. The first part of the process, setting up the environment for installation, is covered in 8.2.1, “NIM installation” on page 332, and we follow up after exiting to the normal boot part of the process. 2. After you exit to normal boot, a screen opens that shows the network parameters for BOOTP, as shown in Figure 8-24 on page 343. 3. Next, a screen opens that shows the AIX kernel loading.
4. After selecting the language, the installation options are displayed, as shown in Figure 8-51. Welcome to Base Operating System Installation and Maintenance Type the number of your choice and press Enter. Choice is indicated by >>>.
You can install the OS using option 1 or 2: – Option 1 (Start Install Now with Default Settings) begins the installation using the default options. – Option 2 (Change/Show Installation Settings and Install) displays several options, as shown in Figure 8-52. Installation and Settings Either type 0 and press Enter to install with current settings, or type the number of the setting you want to change and press Enter. 1 System Settings: Method of Installation.............
• Migration installation: Use this method when you are upgrading an older version of AIX (AIX 5L V5.3 or AIX V6.1) to a newer version, such as AIX V7.1. This option retains all of your configuration settings. The /tmp directory is erased during installation.
• Preservation installation: This method is similar to the New and Complete Overwrite option, except that it retains only the /home directory and other user files. This option overwrites the operating system file systems.
Install Options
 1. Graphics Software................................................ yes
 2. System Management Client Software................................ yes
 3. Create JFS2 File Systems......................................... yes
 4. Enable System Backups to install any system...................... yes
       (Installs all devices)
>>> 5. Install More Software
 0  Install with the settings listed above.
 88 Help ?
 99 Previous Menu
>>> Choice [5]:
Figure 8-54 Install Options screen
5.
8.3.2 Installing Red Hat Enterprise Linux
This section describes the installation of Red Hat Enterprise Linux (RHEL). Detailed information about supported operating systems is listed in 5.1.2, “Software planning” on page 119. We install the virtual servers using virtual optical media and the ISO image of the RHEL distribution as the boot device. Figure 8-56 shows the Virtual Optical Media window in IBM Flex System Manager.
To install RHEL, complete the following steps: 1. After the virtual media is set up, boot the server and enter SMS. The screen shown in Figure 8-57 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Main Menu 1. Select Language 2. Setup Remote IPL (Initial Program Load) 3. Change SCSI Settings 4. Select Console 5.
2. Select option 5 (Select Boot Options). The screen shown in Figure 8-58 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Multiboot 1. Select Install/Boot Device 2. Configure Boot Device Order 3. Multiboot Startup 4. SAN Zoning Support 5.
3. Select option 1 (Select Install/Boot Device). The window shown in Figure 8-59 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Device Type 1. Diskette 2. Tape 3. CD/DVD 4. IDE 5. Hard Drive 6. Network 7.
4. We want to boot from a virtual optical drive, so we select option 3 (CD/DVD). The window shown in Figure 8-60 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Media Type 1. SCSI 2. SSA 3. SAN 4. SAS 5. SATA 6. USB 7. IDE 8. ISA 9.
5. For the virtual optical media, select option 1 (SCSI). The window shown in Figure 8-61 opens. Version AF740_051 SMS 1.7 (c) Copyright IBM Corp. 2000,2008 All rights reserved. ------------------------------------------------------------------------------Select Device Device Current Device Number Position Name 1. SCSI CD-ROM ( loc=U7895.42X.
It is possible to stop the boot process by pressing the Tab key, allowing you to enter optional parameters on the command line:
– To use VNC and perform an installation in a graphic environment, run linux vnc vncpassword=yourpwd. The password must be at least six characters long.
– To install Red Hat Enterprise Linux 6.
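For example, a boot line that starts a VNC installation and also sets static network parameters (reusing the addresses from the earlier network boot example; the exact option names should be checked against your RHEL release) might look like this:

linux vnc vncpassword=install123 ip=9.27.20.216 netmask=255.255.252.0 gateway=9.27.20.1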
Figure 8-64 shows the VNC graphical console start.
Running anaconda 13.21.117, the Red Hat Enterprise Linux system installer - please wait.
21:08:52 Starting VNC...
21:08:53 The VNC server is now running.
21:08:53 You chose to execute vnc with a password.
21:08:53 Please manually connect your vnc client to ite-bt-061.stglabs.ibm.com:1 (9.27.20.114) to begin the install.
21:08:53 Starting graphical installation.
Figure 8-64 VNC server running
7.
11.Select either Fresh Installation (a new and complete overwrite) or Upgrade an Existing Installation, as shown in Figure 8-66.
12. Select a disk layout, as shown in Figure 8-67. You can choose from a number of predefined layouts or create a custom layout (for example, you can create a software mirror between two disks). You can also manage older RHEL installations if they are detected.
Figure 8-67 Disk space allocation selections
13. Select the software packages to install, as shown in Figure 8-68.
Figure 8-68 RPM packages selection
The software installation process starts.
When the VNC installation is complete, the window shown in Figure 8-69 opens. The virtual server reboots, the console returns to alphanumeric mode, and you can connect to the server using SSH or Telnet. Figure 8-69 End of VNC installation As the system boots, the operating system loads, as shown in Figure 8-70.
The basic installation is complete. You might choose to install additional RPMs from the IBM Service and Productivity Tools website found at: http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html 8.3.3 Installing SUSE Linux Enterprise Server In this section, we describe the installation of SUSE Linux Enterprise Server 11 (SLES 11). We prefer to do the installation using VNC (in graphic mode) because many of the panels are complex, and it is easier to accomplish this task in graphic mode.
We do not show the initial SMS steps here, as they are described in 8.3.2, “Installing Red Hat Enterprise Linux” on page 370. Follow step 1 on page 371 to step 7 on page 377 before completing the following steps: 1. The first window is the installation mode window, shown in Figure 8-71.
2. Select New installation and click Next. The Installation Settings window opens (Figure 8-72).
Figure 8-72 Installation settings
3. Either accept the default values or click Change to change the values for:
– Keyboard layout
– Partitioning
– Software
– Language
Click Next to continue. The Perform Installation window opens (Figure 8-73) and shows the progress of the installation.
The final phase of the basic installation process is shown in Figure 8-74.
Figure 8-74 Finishing Basic Installation window
At the end of the installation, the system reboots and the VNC connection is lost.
4. Figure 8-75 shows the console while rebooting. After reboot, VNC restarts with the same configuration, and we must reconnect the VNC client.
6. Other installation screens open. Enter values as needed for your environment. After the installation is complete, you see the window shown in Figure 8-76.
Figure 8-76 Installation Completed window
7. The virtual server reboots, the VNC server is shut down, and we can connect to the text console, through a virtual terminal, using Secure Shell (SSH) or Telnet, as shown in Figure 8-77.
Abbreviations and acronyms
AAS     Advanced Administrative System
AC      alternating current
ACL     access control list
AME     Active Memory Expansion
AMM     Advanced Management Module
AMS     access method services
AS      Australian Standards
ASIC    application-specific integrated circuit
DPM     distributed power management
DRTM    Dynamic Root of Trust Measurement
DSA     Digital Signature Algorithm
DVD     digital video disc
EMC     electromagnetic compatibility
ESA     Electronic Service Agent
ESB     error status block
ETE     every
IBM     International Business Machines
ID      identifier
IDE     integrated drive electronics
IEC     International Electrotechnical Commission
IEEE    Institute of Electrical and Electronics Engineers
IMM     integrated management module
IP      Internet Protocol
MLC     multi-level cell
MPIO    multi-path I/O
MSI     message signaled interrupt
MTM     machine-type-model
MTS     Microsoft Transaction Server
MTU     maximum transmission unit
NASA    National Aeronautics and Space Administration
NFS     network file system
NIC     network interface card
ROM     read-only memory
RPM     Red Hat Package Manager
RSA     Remote Supervisor Adapter
RSS     receive-side scaling
RTE     Remote Terminal Emulator
SAN     storage area network
SAS     Serial Attached SCSI
SATA    Serial ATA
SCP     System Control Process
SCSI    Small Computer System Interface
SWMA    Software Maintenance Agreement
TB      terabyte
TCB     Transport Control Block
TCG     Trusted Computing Group
TCP     Transmission Control Protocol
TCP/IP  Transmission Control Protocol/Internet Protocol
TFTP    Trivial File Transfer Protocol
Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book. IBM Redbooks The following publications from IBM Redbooks provide additional information about IBM Flex System. They are available at: http://www.redbooks.ibm.
IBM Flex System FC3052 2-port 8Gb FC Adapter, TIPS0869
IBM Flex System FC3172 2-port 8Gb FC Adapter, TIPS0867
IBM Flex System FC5022 2-port 16Gb FC Adapter, TIPS0891
IBM Flex System IB6132 2-port FDR InfiniBand Adapter, TIPS0872
IBM Flex System IB6132 2-port QDR InfiniBand Adapter, TIPS0890
ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884
You can search for, view, download or order these documents and other Redbooks, Redpapers, Web Docs, draft and additional materials, at the fo
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.
Index A Active Energy Manager 151 Active Memory Expansion 92 adapter feature codes 101 adapter slots 99 AIX cloning 364 diagnostics 320 Disk Array Manager 98 firmware updates 320 installing 364 NIM installation 364 PowerVM 278 supported versions 120 upgrade 368 virtual Ethernet 280 virtual server 308 anchor card 110 architecture 74 audit logs 158 B bare metal install 315 BIOS Bootblock 158 blades See compute nodes block diagrams 74 BOS installation 338 breakout cable 198 C cache 85 capping 144 channels 83
Console Breakout Cable 198 cooling 59, 149 Chassis Management Module interface 178 cores 81 cover 98 CRTM 158 D disabling SOL 249 disks 95 DNS 186 DRTM 158 dual VIOS 135 E EN2024 4-port 1Gb Ethernet Adapter 104 EN2092 1Gb Ethernet Scalable Switch 53 EN4054 4-port 10Gb Ethernet Adapter 102 EN4091 10Gb Ethernet Pass-thru 53 EN4093 10Gb Scalable Switch 53 Energy Estimator 138 energy management 87 EnergyScale 111 Enterprise Chassis Chassis Management Module 55 Console Breakout Cable 198 cooling 59, 149 dimens
hardware 193 importing update files 226 inital setup 202 Intial Setup 252 inventory collection 242 Java 205 local storage 197 Manage Power Systems Resources window 236 management network 210 management network adapter 198 motherboard 196 network adapter 198 NTP setup 208 open a console 245 overview 7, 9, 54, 192 partitioning 284 planar 196 plugins tab 263 power control 206 Power Systems 236 Power Systems Management 292 remote access 268 remote control 205 setup 202 SOL disable 249 solid-state drives 197 spe
rules 89 N N+1 redundancy 147 N+N redundancy 145 native installation 315 network planning 118, 124 network redundancy 129 network topology 52 networking 8 teaming 131 NIC teaming 131 NIM installation 332 NPIV 281 O operating environment 152 operating systems 119, 317, 364–388 AIX install 364 AIX support 120 cloning 364 DVD install 348 IBM i 388 IBM i support 121 installing 332 Linux support 122 native install 315 NIM installation 332 optical media 348 Red Hat Enterprise Linux 370 SUSE Linux Enterprise Ser
storage 96 supported adapters 101 USB port 69 p460 architecture 75, 79 block diagram 75 board layout 67 chassis support 73 cover 97 deconfiguring 77 dual VIOS 136 Ethernet adapters 102 expansion slots 99 features 66 Fibre Channel adapters 105 front panel 69 I/O slots 99 InfiniBand adapters 107 labeling 71 light path diagnostic panel 70 local storage 96 memory 87 installation sequence 90 memory channels 83 operating systems 119 overview 64, 66 PCIe expansion 99 power button 69 power requirements 146 processo
power policies 143 remote access 188 remote presence 9 S SAN connectivity 127 SAS storage 95 security 158 Chassis Management Module interface 185 policies 159 Serial over LAN 109 serial port cable 198 services 115 single sign-on 158 slots 99 SmartCloud Entry 44 SMS mode 340 SMT 81 SMTP 186 SOL disabling 249 solid-state drives 96 sound level 49 Spanning Tree Protocol 130 specifications Enterprise Chassis 48 Standard, PureFlex System 3 standard, PureFlex System 26 storage 95 overview 8 planning 118 SUSE Linu