Front cover

Logical Partitions on System i5
A Guide to Planning and Configuring LPAR with HMC on System i

Understand the new logical partitions for the IBM POWER5 architecture
Learn how to install, configure, and manage LPAR with the latest HMC
Discover how to implement OS/400 logical partitions

Nick Harris, L.R. Jeyakumar, Steve Mann, Yogi Sumarga, William Wei

ibm.
International Technical Support Organization

Logical Partitions on System i5
A Guide to Planning and Configuring LPAR with HMC on System i

May 2006

SG24-8000-01
Note: Before using this information and the product it supports, read the information in “Notices” on page ix. Second Edition (May 2006) This edition applies to i5/OS Version 5, Release 3, and the System i5™ system products. © Copyright International Business Machines Corporation 2005, 2006. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX® AIX 5L™ AS/400® Electronic Service Agent™ Eserver® Eserver® eServer™ IBM® iSeries™ i5/OS® Lotus® OpenPower™ OS/400® Power PC® PowerPC® POWER™ POWER4™ pSeries® Redbooks™ Redbooks (logo) ™ System i5™ System p5™ Tivoli® Virtualization Engine™ Wake on LAN® WebSphere® xSeries® The following terms are trademarks of other companies: Java, Javadoc, JDBC, and all Ja
Preface This IBM Redbook gives a broad understanding of the new System i5™ architecture as it applies to logically partitioned System i5 systems. This functionality is delivered through a new configuration and management interface called the Hardware Management Console (HMC). Reading this redbook will help you design your server partition scheme from scratch. We also discuss the requirements to create a solution to migrate from existing iSeries™ servers with and without logical partitions.
Yogi Sumarga is an Account Product Services Professional working for IBM Global Services in Indonesia. He specializes in LPAR design and configuration, IT Maintenance, OS support and Hardware System Service for IBM System i5, iSeries, and AS/400. He has planned and implemented LPAR and OS/400 V5R3 installation at three customer sites using System i5 model 520s. He has also planned and installed System i5 model 520, Model 570, and iSeries Model 870, all with LPAR.
Become a published author Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.
Summary of changes This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified. Summary of Changes for SG24-8000-01 for Logical Partitions on System i5 as created or updated on May 17, 2006. May 2006, Second Edition This revision reflects the addition, deletion, or modification of new and changed information described below.
Chapter 1. Introduction to LPAR on IBM System i5

This chapter provides an overview of the System i5 and its LPAR capabilities as explained in the following topics:
- HMC, Hypervisor, and partitions
- Software requirements
- Processor use in System i5 LPARs
- Memory use in System i5 LPARs

© Copyright IBM Corp. 2005, 2006. All rights reserved.
1.1 HMC, Hypervisor, and partitions

The IBM eServer System i5 systems provide a new system architecture for logical partitioning (LPAR) and Capacity Upgrade on Demand (CUoD): The LPAR Hypervisor is now shipped as part of the firmware of all eServer System i5 models. It is stored in the non-volatile random access memory (NVRAM) of the Service Processor. Previously, it was part of the System Licensed Internal Code (SLIC) shipped with OS/400. Once loaded, the LPAR Hypervisor runs in main memory.
The HMC is ordered as a required priced feature of any LPAR or CUoD configuration for new orders or upgrades (MES), or shipped as a mandatory part of all high-end models. The new System i5 systems have a new scheme for storing and managing partition information. In this new scheme you have Partition Profiles and System Profiles. Partition Profiles: A partition profile is used to allocate resources such as processing units, memory, and I/O cards to a partition.
System Profile - Day:
Production Partition Southern Region: Processors 4, Memory 10 GB, Tape Drive N
Production Partition Northern Region: Processors 3, Memory 8 GB, Tape Drive N
Development Partition: Processors 2, Memory 5 GB, Tape Drive Y
Test Partition: Processors 2, Memory 5 GB

System Profile - Night:
Production Partition Southern Region: Processors 6, Memory 15 GB, Tape Drive Y
Production Partition Northern Region: Processors 6, Memory 13 GB, Tape Drive N
Development Partition: Processors 0, Memory 0 GB, Tape Drive Y
This allows one system to adopt multiple personalities. In this type of scenario we are assuming the partitions will be reloaded at the disaster recovery site with no data on the disk.

1.2 Software requirements

The IBM eServer System i5 systems require one of the following levels of operating system:
- OS/400 Version 5 Release 3 or later.
- AIX 5L™ Version 5.2 with native I/O support. AIX 5L Version 5.3 will support hosted I/O as well as native I/O.
- Linux version 2.
1.2.3 Simple scenario with dedicated and shared capped partitions

Here is a simple example of a 4-way System i5 system with two OS/400 partitions using the shared processors pool and one AIX partition using 2 dedicated processors.

Table 1-1 System i5 system with two OS/400 partitions

Partition ID - OS   Partition Type   Processing Units   Virtual Processors   Licenses
P1 - OS/400         Shared capped    1.5                                     1.5 + 0.5
P2 - OS/400         Shared capped    0.
1.2.5 Complex scenario with shared uncapped partitions

Here is a complex example of an 8-way System i5 system with two OS/400 uncapped partitions and one AIX uncapped partition, all using the shared processors pool.

Table 1-3 System i5 system with two OS/400 uncapped partitions and one AIX uncapped partition

Partition ID - OS   Partition Type    Processing Units   Virtual Processors   Licenses
P1 - OS/400         Shared uncapped   4.0                VP=7                 7 + 3 = 10 but
P2 - OS/400         Shared uncapped   1.
1.3 Processor use in System i5 LPARs

In this section we discuss the basic concepts of dedicated, shared capped, and shared uncapped processors: how they work, how to configure them in a logical partition configuration, and several considerations regarding the use of shared capped and uncapped processors. Additionally, we also describe memory allocation for i5/OS logical partitions.
If there are three processors in the System i5 system unit, with all three processors in the shared processors pool, and a logical partition is allocated 0.8 processing units, then the logical partition receives 8 milliseconds of processing time in every 10 millisecond dispatch window, out of the 30 milliseconds of pool processing time available in that window. Figure 1-5 illustrates the processing time assignment from the shared processors pool to a logical partition.
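The arithmetic behind this example is simple enough to write down. The following is a minimal illustrative sketch (the Hypervisor's real dispatcher is more sophisticated than this):

```python
# Illustrative sketch of the dispatch-window arithmetic described above.
DISPATCH_WINDOW_MS = 10  # each physical processor is sliced into 10 ms windows

def entitlement_ms(processing_units):
    """Milliseconds of processor time a partition is entitled to per window."""
    return processing_units * DISPATCH_WINDOW_MS

def pool_capacity_ms(pool_processors):
    """Total milliseconds available in the shared pool per window."""
    return pool_processors * DISPATCH_WINDOW_MS

# The example from the text: 0.8 processing units, three processors in the pool.
print(entitlement_ms(0.8))   # 8 ms per 10 ms window
print(pool_capacity_ms(3))   # 30 ms of pool time per window
```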
Figure 1-6 Basic flow of data from disk to processor

Next we show an example of two shared capped logical partitions sharing a single processing unit from the shared processors pool. There are two logical partitions, logical partition A and logical partition B. Logical partition A has two jobs, A1 and A2. Logical partition B has two jobs, B1 and B2. Logical partition A is assigned 0.8 processing units. Logical partition B is assigned 0.2 processing units.
However, job B1 must again stop after 2 milliseconds in the second processor cycle because the assigned processing units are very small. In the next processor cycle, job B1 is resumed again, and has to stop after 2 milliseconds. In the next processor cycle, job A2 is submitted and finishes after 6 milliseconds. Job B1 is resumed again, is processed for 2 milliseconds, and completes. This example shows two logical partitions sharing a single processing unit from a shared processors pool.
for the partition to the tasks submitted for processing. The Power Hypervisor attempts to put a partition back onto the last processor it used. However, if that processor is currently busy or not available, the partition will go to any available one. The resources of one logical partition are isolated from those of other logical partitions, but there are ways for active logical partitions to share their processing power, memory, and I/O resources with one another.
When a partition with dedicated processors is powered down, its processors become available to the shared processors pool. This capability is enabled from the partition profile setting. Check the Allow idle processors to be shared option to enable this feature as shown in Figure 1-8. If this feature is not selected, the dedicated processors from inactive logical partitions will not be available in the shared processors pool.
The shared processors pool is created from the processors left over after the dedicated processors are assigned to the logical partitions that use dedicated processors. The pool will typically be spread over multiple nodes. These processors are shared among all the partitions that use shared processors. You can allocate as little as 0.1 of a shared processor or up to the total number of processors in the system. One single physical processor is equal to 1.00 processing units; two physical processors are equal to 2.00 processing units.
It is possible for uncapped partitions to exceed their current processing capacity, showing more than 100% CPU utilization, when they use the unused processing units from the shared processors pool. This usually happens when the uncapped partitions demand more processing power to complete their tasks. The case of a partition with capped processors is different: a capped partition will never exceed its assigned processing capacity.
The server firmware distributes the processing units evenly among all virtual processors assigned to a logical partition. For example, if logical partition A has 1.4 processing units and 2 virtual processors assigned, then each virtual processor will be equal to 0.7 physical processing units. These 2 virtual processors will support the logical partition workload. The number of processing units available for each virtual processor is limited.
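A small sketch of this even distribution, using the example above (it also anticipates the rule, noted in the Attention box later in this section, that desired processing units cannot exceed the desired number of virtual processors):

```python
# Sketch: processing units are spread evenly across a partition's
# virtual processors, and each virtual processor can represent at
# most one physical processor.

def units_per_virtual_processor(processing_units, virtual_processors):
    if virtual_processors < 1:
        raise ValueError("a shared partition needs at least one virtual processor")
    if processing_units > virtual_processors:
        raise ValueError("processing units cannot exceed virtual processors")
    return processing_units / virtual_processors

# The example from the text: 1.4 processing units across 2 virtual processors.
print(units_per_virtual_processor(1.4, 2))  # 0.7 physical processing units each
```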
Figure 1-11 Shared processor processing units setting for logical partition

In the foregoing example, we give minimum processing units of 0.3, desired processing units of 1.6, and maximum processing units of 4. Click the Advanced... button to display the shared processor mode and virtual processor settings window as shown in Figure 1-12.
Configuring the virtual processors setting is similar to telling the logical partition how many processors it can run jobs on simultaneously. Figure 1-13 shows an illustration of the virtual processor configuration. The figure shows how the virtual processor configuration differs for one shared logical partition with 2 virtual processors versus 4 virtual processors. There are 4 processors available in the shared processors pool. The logical partition is assigned 1.6 processing units.
Figure 1-14 Logical partition profile processing unit configuration

Enter the number of virtual processors you want to add, and click OK to add the new virtual processors. Figure 1-15 shows how to add 2 virtual processors to the logical partition.

Figure 1-15 Add new virtual processors to logical partition

As a result, the logical partition now has 6 virtual processors (Figure 1-16).
Figure 1-16 New number of current virtual processors after addition

Attention: When you try to add more processing units to a shared processor logical partition, the resulting number of desired processing units must be equal to or less than the desired number of virtual processors.

1.3.8 Configuring dedicated processors for the logical partition

Processor configuration is stored in the logical partition profile.
Click the Next button to continue. Enter minimum, desired, and maximum processors for the logical partition. Figure 1-18 shows an example of minimum, desired, and maximum processors for a dedicated logical partition. Adjust these values to match the logical partition workload.

Figure 1-18 Minimum, desired, and maximum processors for dedicated logical partition

1.3.9 Configuring shared capped processors for logical partition

The shared processors configuration is stored in the logical partition profile.
Figure 1-19 Shared processing mode for logical partition with shared processors

Next, fill in the minimum, desired, and maximum processing units for the logical partition. Figure 1-20 shows an example of minimum, desired, and maximum processing units for the logical partition. Adjust these processing unit values to match the logical partition workload.
Figure 1-21 Capped sharing mode for shared capped processor configuration

The HMC will automatically calculate the minimum, desired, and maximum number of virtual processors for the logical partition. You may change the virtual processors setting now or later using Dynamic Logical Partitioning (DLPAR).

1.3.10 Configuring shared uncapped processors for logical partition

The shared processors configuration is stored in the logical partition profile.
Figure 1-22 Uncapped sharing mode for shared uncapped processor configuration

Adjust the uncapped weight for this logical partition. The uncapped weight is used to determine the portion of free processing units that is distributed to this logical partition when two or more shared uncapped logical partitions demand more processing units from the shared processors pool.
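A sketch of one plausible reading of this mechanism, in which spare pool capacity is divided among competing uncapped partitions in proportion to their weights (this illustrates the idea, not the Hypervisor's exact algorithm; 128 is the usual default weight):

```python
# Illustrative sketch: spare processing units divided in proportion
# to uncapped weights among competing uncapped partitions.

def distribute_spare(spare_units, weights):
    """weights maps partition name -> uncapped weight (an integer)."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in weights}
    return {name: spare_units * w / total for name, w in weights.items()}

# Two uncapped partitions contending for 1.5 spare processing units,
# with hypothetical weights 128 (the default) and 64.
print(distribute_spare(1.5, {"P1": 128, "P2": 64}))
# {'P1': 1.0, 'P2': 0.5}
```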
If you can determine the processor requirement for your partition workload and predict future workload growth that will demand more processing power, we recommend that you create a logical partition with dedicated processors for the best logical partition performance. If you have a limited number of physical processors and want to create several logical partitions with flexibility in processor resource usage among all logical partitions, then a shared processor could be your choice.
So rather than a memory granularity of 1 MB, the System i5 memory granularity will be between 16 MB and 256 MB. Again, this is determined by the system setting for the memory region size. With the June 2004 level of code, you can view the LMB size from the HMC by displaying the Managed Server properties and selecting the Memory tab; you will see the current LMB setting. In a future code release, you will be able to change the LMB size through the ASM interface.
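The practical consequence is that partition memory is allocated in multiples of the logical memory block (LMB). A minimal sketch of the rounding this implies:

```python
# Sketch: round a requested partition memory size up to the next
# multiple of the logical memory block (LMB) size.

def round_to_lmb(requested_mb, lmb_mb):
    return -(-requested_mb // lmb_mb) * lmb_mb  # ceiling division

print(round_to_lmb(1000, 16))   # 1008 MB with a 16 MB region size
print(round_to_lmb(1000, 256))  # 1024 MB with a 256 MB region size
```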
1.4.5 Memory allocation for the i5/OS logical partition

The logical partition is assigned minimum, desired, and maximum memory as defined in the logical partition profile. The memory assigned to an i5/OS logical partition is entirely for that logical partition; it does not include the memory reserved for the Hardware Page Table (HPT). For example, the logical partition will receive the full 1024 MB of memory as configured in the logical partition profile.
When the partitions are activated, the configured memory is assigned to them and the Hardware Page Table (HPT) is created by the Power Hypervisor. Memory allocation must be calculated precisely, in order to avoid a lack of memory, which could have an impact on logical partition performance. If the memory allocation for all logical partitions is not calculated precisely, the last powered-on logical partition will receive less memory than the configured amount of memory.
Figure 1-25 Logical partition memory configuration

The Hardware Page Table for this logical partition is calculated by dividing the total memory available for the logical partition by 64. The result is then rounded to the next whole number. For the configured memory of the logical partition in Figure 1-25, the HPT size is 10752 MB / 64 = 168 MB. The Power Hypervisor will then allocate 256 MB of memory for this logical partition's HPT. Another memory allocation in the System i5 system is for the Power Hypervisor's own memory.
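A worked sketch of this sizing rule. The rounding of 168 MB up to 256 MB suggests the Hypervisor allocates the HPT in power-of-two sizes; that inference is our assumption, not a documented rule:

```python
# Sketch of the HPT sizing described above: 1/64 of configured memory,
# then rounded up to a power-of-two allocation (an assumption based on
# the 168 MB -> 256 MB example).

def hpt_size_mb(partition_memory_mb):
    raw = -(-partition_memory_mb // 64)  # ceiling of memory / 64
    size = 1
    while size < raw:
        size *= 2
    return size

print(hpt_size_mb(10752))  # 1/64 is 168 MB; allocated as 256 MB
```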
Memory can be moved between logical partitions dynamically. However, this may cause configurations to become less optimal, because the memory moved or removed will probably be spread across all nodes. Determining an optimum memory configuration is very important to achieving the best logical partition performance.
Chapter 2. i5/OS consoles under System i5

In this chapter we provide details of console options and rules on the POWER5 platform as it relates to iSeries, for stand-alone (non-partitioned) systems and for LPAR systems. For stand-alone systems, the console IOA placement requirements are very specific and must be followed. For LPAR systems, the placement rules do not apply, but there are some concerns.
2.1 Console history There are now a few types of consoles that can be used on the iSeries. Originally, in the System 3X world, the Twinax console was the one and only kind of console that could be used. The interface was a 5250 green screen. When the AS/400 was announced, Twinax was still the only console. Async Console was added, which used a 5250 session in Client Access via a serial port on a PC. This is no longer supported.
For example, if you have an IOP tagged for console and the IOP has a WAN and LAN card in the IOP, you can change modes between direct cable or LAN connected Operations Console. If you also tag an IOP for ALTERNATE console, it must have a Twinax card. Then you can switch between Direct, LAN, or Twinax. The ALTERNATE console cannot install the operating system or LPPs and may have limited DST functions. See Figure 2-1, Figure 2-2, and Figure 2-3 for an illustration of these situations.
Figure 2-3 Twinax system adapter is needed 2.3 Console for partitioned (LPAR) systems On System i5 systems, the LPAR structure is not as we have known it on iSeries. Most of the basic concepts are the same, but there are significant changes. System i5 systems have no Primary (controlling) or any Secondary partitions. All partitions are “equal” and independent in terms of their relationship to each other. Linux partitions can still use virtual resources from an OS/400 partition or be entirely independent.
2.4 Console IOA placement rules for stand-alone systems In the following sections we list the various rules. Note: These rules apply to LAN Console and Twinax Console in a stand-alone system. If using the Direct Operations Console, the ECS card/slot will be used rather than any of the other slots. 2.4.1 Model 520/550 CEC The 520/550 will first look for the console in slot 5. If an appropriate IOA is not there, it will look in slot 2 (which requires a second IOP).
2.5 Console IOA placement rules for stand-alone systems, including IXS considerations Pre-System i5 IXSs that will be migrated from another system must be placed in an expansion tower. They are not supported in the system unit. The new IXS available during 2004 can be placed in the system unit and will use the slots listed in the following tables. 2.5.1 Model 520/550 CEC The 520/550 will first look for the console in slot 5.
2.6 Connecting to a 5250 console remotely

This section covers connecting to a 5250 console remotely. The remote support for HMC 5250 can use the same SSL configuration as the System Manager Security on the HMC. For more information about configuring System Manager Security, see “System Manager Security” on page 41.
2. Select Link Parameter. When the window in Figure 2-5 prompts you, type the HMC host name or IP address in the Host Name field and indicate the port number. Type 2300 into the port number field if you are using non-SSL or 2301 if you are using SSL. Then select OK to finish the configuration. Figure 2-5 Configure 5250 IP address for remote console 3. When the window in Figure 2-6 prompts you, select the correct language and press Enter.
4. When the window in Figure 2-7 prompts you, type the correct HMC user ID and password, then press Enter. Figure 2-7 Remote 5250 console - HMC user ID and password 5. When the window in Figure 2-8 prompts you, select the management server that you want to access. Figure 2-8 Remote 5250 console - select management server Chapter 2.
6. When the window in Figure 2-9 prompts you, select connect modes and press Enter. Figure 2-9 Remote 5250 console- connect modes 7. When the window in Figure 2-10 prompts you, type the correct i5/OS user ID and password to manage this system.
Configure your Linux product

To configure your Linux product, do the following steps:
1. Create a new session by using the setup5250 configuration program.
2. In the 5250 Emulator Connection window, type the HMC TCP system name or IP address in the AS/400 Host Name field.
3. Select Advanced 5250 Connection.... The Advanced 5250 Emulator Connection window is displayed.
4. Type 2300 into the Telnet Port number field.
5. Fill in the Emulator User ID and Emulator Password fields.
6.
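Whichever emulator you configure, it can help to verify first that the HMC is reachable on the 5250 proxy ports, 2300 (non-SSL) and 2301 (SSL). A small sketch with a placeholder host name:

```python
# Quick reachability check of the HMC 5250 proxy ports before
# configuring an emulator. Replace the placeholder host name.
import socket

def port_open(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (2300, 2301):
    state = "open" if port_open("hmc.example.com", port) else "closed"
    print(port, state)
```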
A server is an HMC you want to access remotely. In Figure 2-11, HMCs 1, 3, and 4 are servers. A client is a system from which you want to access other HMCs remotely. In Figure 2-11, Web-based System Manager Remote Clients A, B, and C, and HMCs 1, 2, and 5 are clients. As shown in Figure 2-11, you can configure multiple servers and clients in your private and open networks. An HMC can be in multiple roles simultaneously. For example, an HMC can be a client and a server like HMC1 in Figure 2-11.
The following list is an overview of tasks involved in installing and securing the remote client:
1. Configure one HMC as a Certificate Authority (CA).
2. Use this HMC to generate private keys for the servers.
3. Install the private keys on the servers.
4. Configure the servers as secure System Manager servers.
5. Distribute the CA's public key to the servers or clients.

Note: Tasks 3 and 5 are completed by copying the keys to diskette and installing them on the servers or clients.
Chapter 3. HMC overview and planning

This chapter provides an overview of the application functions of the Hardware Management Console (HMC). We also discuss some of the planning needed to install an HMC.
3.1 HMC concepts and initial setup The main functions of the HMC are to perform logical partitioning functions, service functions, and various system management functions. The partitioning functions and some of the servicing functions, which were previously in the iSeries service tools, will now be performed by functions in the HMC for partitioned systems. The HMC can be used to manage from one to two partitioned systems.
iSeries Recommended Fixes - Server Firmware: Update Policy Set to Operating System:
http://www-912.ibm.com/s_dir/slkbase.nsf/c32447c09fb9a1f186256a6c00504227/604992740f846a4986256fd3006029b5?OpenDocument

iSeries Recommended Fixes - Server Firmware: Update Policy Set to HMC:
http://www-912.ibm.com/s_dir/slkbase.nsf/ibmscdirect/E58D7BBF0EAC9A2786256EAD005F54D8

3.1.2 Types of HMC

The HMC runs as an embedded OS on an Intel® based workstation that can be desktop or rack mounted.
The desktop HMC can use a number of IBM displays as shown in the e-config. The desktop HMC does not have to use the monitor shown in the above figure. There is no ability to add device drivers to the embedded OS, so you should test any proposed OEM display before running in production.

Rack mounted HMC

The supported rack mounted models are the 7310-CR2/3 and the older 7315-CR2, which can be migrated to System i5 HMC code. Figure 3-3 shows a picture of the 7310-CR3.
The eServer Hardware Information Center guides the user through cabling up the HMC and then through a checklist to gather information needed to configure the HMC. The information needed includes: – Network settings for the HMC.
3.2 Installing the HMC

In this section we discuss the physical setup of the HMC.

Attention: When installing a new System i5 system, do not power on the system before connecting it to an HMC. The service processor (SP) on a System i5 system is a DHCP client and will search for a DHCP server to obtain its IP address. If no DHCP server can be found, then the SP will assign a default IP address. If this occurs, you will have to use ASM to manually change the IP setting of the SP.
Figure 3-6 Rear view of rack mounted HMC ports Chapter 3.
The vertical USB port to the right of the HMC ethernet ports is not available for use. The HMC 1 and 2 ports on the rear of the HMC system unit connect to the Service Processor HMC ports 1 and 2, but only one cable should be connected at a time. The service processor would have major problems if both cables were connected between the ports. See Figure 3-7. Notice that on the rear of the 570, there is a pair of SPCN cables. These are connected to external towers for power control.
Figure 3-8 shows a closer rear view of a model 570 with the cables identified. HMC Ports SPCN Ports 2x USB 2x LAN Figure 3-8 Ports on rear of i570 3.3 HMC networking options When you first install your HMC, you will have a number of networking options available to you. You will have to decide which types of network are best for your environment.
Figure 3-9 Private direct networking If you need to make changes to the network, these can be done manually within the HMC interface. 3.3.2 Private indirect networking A private indirect network shown in Figure 3-10 is effectively the same as private network, but the signals pass through one or many hubs/switches. In Figure 3-10 we show two servers connected to a hub and two HMCs connected to the same hub/switch.
3.3.3 Private and open networking As there are two network connections on an HMC, you can connect it to both a private network and a public network. The HMC shown in Figure 3-11 is the DHCP server to the indirect connected servers. Figure 3-11 HMC connect to both private and public networks There is also a connection to the local public network as a DHCP client. In this network we have Web-based System Management Remote Clients (WSMRC) and a remote HMC.
3.4 Initial tour of the desktop You will now be viewing the HMC desktop as shown in Figure 3-12. The desktop comprises the following components: Fluxbox desktop. This is a standard Linux desktop supplied with the embedded OS. There is a task bar at the bottom of the Fluxbox desktop. HMC Window. This panel displays the HMC Management Environment. As shown in Figure 3-12, a managed server has been discovered, but the right-hand navigation bar has not been expanded.
ibm5250 (5250 Emulator): This is a 5250 emulator session that can be used as the console session for any i5/OS partition you choose to connect to it. You should be aware that a limited number of 5250 emulation sessions should be started. The performance of the HMC could be degraded by starting too many sessions. All sessions traverse the same private LAN and this has limited bandwidth. See Figure 3-13. Figure 3-13 Fluxbox Terminals menu selection 3.4.2 Net menu Here we describe the items on the Net menu.
3.4.3 Lock menu Here we describe the items on the Lock menu. Lock menu option: This locks the HMC console session. To return to the HMC console, you will need to supply the user ID and password that were used to initiate the previous session. The locked out session is expecting the user ID and password of the previously logged in user. See Figure 3-15. Figure 3-15 Fluxbox Lock menu selection 3.4.
Figure 3-16 Exit the HMC console 3. Click Exit, and you are presented with a pull down menu that has Logout as default (Figure 3-17). Figure 3-17 Exit HMC 4. Click the pull down arrow. You will see the three options, Shutdown, Reboot, and Logout. See Figure 3-18. To shut down the HMC, highlight Shutdown Console. Figure 3-18 Exit pull down menu Chapter 3.
5. Click OK. See Figure 3-19.

Figure 3-19 Accept shutdown

The HMC will now shut down. Next we discuss the basic functions and terminology of the HMC.

3.5 Server and partition

If an eServer i5 is to be partitioned, then an HMC is needed to create and manage the partitions. The partition configurations are implemented using profiles that are created on the HMC and that are stored on the HMC and on the service processor. A profile defines a configuration setup for a managed system or partition.
System profiles Using the HMC, you can create and activate often-used collections of predefined partition profiles. This list of predefined partition profiles is called a system profile. The system profile is an ordered list of partitions and the profile to be activated for each partition. The system profile is referred to when the whole system is powered off and back on.
System properties - General These properties relate to general information about the server. The Name of the processor is a shipped value that can be user defined. The Serial Number and Type/Model are fixed by manufacturing. Type/Model may be changed by an MES upgrade. State is the value shown on the overview panel and indicates the current server status. Possible values for “State” are shown in Table 3-1.
The value of Service Partition displays the name of the service partition if one has been allocated. In our example, we have not yet created a partition, so no service partition has been assigned. A service partition is an i5/OS partition that has been designated to report and monitor errors when the HMC is undergoing maintenance or is not available. If there are updates that affect the service processor, they will be in the form of MHxxxx PTFs.
See Figure 3-22 for an example of managed system IPL properties. Figure 3-22 Managed system - IPL properties The Power On parameters tab (Figure 3-22) shows information related to how the partition will be booted up or powered on. The drop down boxes for Power On have the following options: Keylock position — Normal or Manual. These are the same condition states as the existing 8xx or earlier model keylock states. Power-on type — Fast or slow.
The following three attributes reflect physical hardware boot conditions, not the partition: – Power-on type — permanent or temporary; these are similar to the i5/OS IPL A and B values. – Power-on speed — fast or slow; this indicates how much hardware/microcode checking is done during IPL. – Power-on speed overrides — fast or slow; indicates how many hardware diagnostics are carried out during the power-on phase. System Properties - Processors Figure 3-23 shows a very useful informational panel.
System Properties - IO resources When expanded, Figure 3-24 shows the I/O resources across the managed system. The example shown has two units or frames, a 5094 expansion tower, and a 5088 IO expansion tower. Unit 5094 contains bus 23 to 25. We have not shown the buses in the other unit. The cards in the card slots are shown as the generic card types. For example, card slot C05 in bus 25 contains a PCI I/O controller and the type indicates that this is a #2844 card.
Important: When planning to partition a new system that has no operating system on it, or when adding new hardware to an existing system, this new hardware will initially only be seen by the HMC, and the IO resources pane will only show a view of the hardware down to the card slot positions.
System Properties - Memory resources

The memory properties are shown in Figure 3-26. The available memory is the unassigned memory that is available to allocate to partitions. The configurable memory is the total amount of memory on the managed system. The memory region size determines the smallest increments in which the memory can be allocated to a partition.
System Properties - System Reference Codes (SRCs) The tab under properties shows the SRCs for the managed system (Figure 3-27). This will show up to the last 25 System Reference Codes (SRC) as reported through the Hypervisor to the HMC. There is an option to show the details of each of these SRCs. The pull down list allows the user to change the number of the SRC history to be displayed. Highlighting a particular SRC code and selecting “Details” will display more information on the code, when available.
System Properties - Highlight a Host Channel Adapter (HCA) The last tab under properties shows the HCA for the managed system (Figure 3-28). This page will highlight and display the channel adapter’s current partition usage.
3.5.2 Other system-wide options In this section we look at the other options that are available system-wide by right-clicking the managed system name. Figure 3-29 shows the other options. Figure 3-29 System wide functions The following list provides information on each selection: The properties selection has already been discussed in previous sections of this chapter. Reset or Remove connection — This task works on the actual server itself.
The Create option is taken to create a new logical partition. This is covered in detail in Chapter 5, “Partition creation using the HMC” on page 139. The Capacity on demand option is provided as the management panels for the CoD function. The power off option will power off the entire managed system, which includes all the partitions. This option needs to be used with caution. Very Important: Powering off an i5/OS partition from the HMC should only be used if all other options fail.
Profile data is the system profile, logical partition, and partition profile information for the managed server that is highlighted. Within this option, you can: – Initialize all the profile data, which deletes all existing profile data.
3.5.3 Available partition options We have covered the functions that are available from a system wide perspective; now we take a look at the functions available at a partition level. To show the functions available for each partition, right-click the partition. Alternatively, select the i5/OS partition and then right-click to see the options available as shown in Figure 3-32.
Work with Dynamic Logical Partitioning Resources This takes you to further selections as shown in Figure 3-33. These allow you to work with IO, processor, memory, and virtual adapter resources. Figure 3-33 Dynamic Logical Partitioning selections You have the option to add, remove, or move resources with the exception of Virtual Adapters, where you can only add or remove adapters. Open Terminal Window This selection allows you to start a 5250 terminal session with a managed server.
You can choose Open shared 5250 console or Open dedicated 5250 console. Figure 3-35 shows an example of selecting Open dedicated 5250 console.

Figure 3-35 Open 5250 dedicated Console

You may need to wait a short time until the console status shows Connecting, then the sign on screen prompts as shown in Figure 3-36.

Sign On
System . . . . . :
Subsystem . . . . :
Display . . . . . :

User . . . . . . . . . . . . . .
Password . . . . . . . . . . . .
Program/procedure . . . . . . . .
Menu . . . . . . . . . . . . . .
Current library . . . . . . . . .
Figure 3-37 shows where you select Open shared 5250 console. Figure 3-37 Open shared 5250 console After you select Open shared 5250 console, the next panel prompts as shown in Figure 3-38, asking you to enter the session key, then press Enter. Figure 3-38 Open HMC 5250 share console -enter session key Chapter 3.
Then a panel like the one in Figure 3-39 shows the shared 5250 console connection status. You may need to wait a short time until the console status shows Connecting, then the sign on screen prompts as shown in Figure 3-36 on page 76. In Figure 3-39 you can press F12 if you want to cancel the sign on. Figure 3-39 Open HMC share console Then the next panel prompts as shown in Figure 3-40; it shows management system status.
In this panel you can select Command → New 5250 to open another 5250 console. Figure 3-41 shows an example to open a new 5250 session. Figure 3-41 Open shared 5250 console a new session Then the next panel prompts as shown in Figure 3-42, asking for a new 5250 host name. Enter the name and press OK. Figure 3-42 Open shared 5250 console -new 5250 host name Chapter 3.
Then the next panel prompts as shown in Figure 3-43, asking for the HMC userid and password to open a new 5250 session. Figure 3-43 Open shared 5250 console -HMC user ID and password After you have entered a valid HMC user ID and password, then the next panel, which looks like Figure 3-44, shows a management server on another window. Select the management system that you want to access, then press Enter.
Then the next panel, which looks like Figure 3-45, shows session connection status. You may need to wait a short time until the console status shows Connecting, then the sign on screen prompts as shown in Figure 3-36 on page 76. Figure 3-45 Open HMC shared 5250 session-connection status Next, Figure 3-46 shows opening multi-5250 consoles or terminal sessions.
Figure 3-46 Open HMC multi-5250 sessions The following steps show you another way to access 5250 screen. In the HMC desktop, right-click the prompt menu, then select Terminal → IBM5250. Figure 3-47 shows an example to open a 5250 terminal.
Then the next panel prompts as shown in Figure 3-48, asking you to set up a 5250 session. Figure 3-48 Open 5250Terminal -setup You can select Preferences → 5250 as in Figure 3-49, to check the preference parameters. Figure 3-49 Open 5250 Terminal -setup preference Chapter 3.
Then the next panel prompts as shown in Figure 3-50. You can change it or use the default value, then click OK. Figure 3-50 Open 5250Terminal - setup preference parameters In Figure 3-48 on page 83, select Connection → New, then the next panel prompts as shown in Figure 3-51. Enter the Connection Description and correct i5/OS Host Name or IP Address, then click OK.
Then the next panel, which looks like Figure 3-52, shows a new connection added. Figure 3-52 Open 5250Terminal new connection Then, highlight the connection and select Connection → connect from the menu. Figure 3-53 shows an example to select a connection. Figure 3-53 Open 5250Terminal connect Then the sign on screen prompts as shown in Figure 3-36 on page 76. Chapter 3.
Restart Partition As explained in the text shown in Figure 3-54, this option should only be used with caution. Restarting a partition will result in an abnormal IPL of i5/OS, which is not what you want to do. Only use this option if all else fails, or under the direction of Support personnel. As the text goes on to explain, this would primarily be used for an i5/OS hang situation.
Shut Down Partition This option is equivalent to a power failure (or pulling the power plug) on the server, or as if the partition were a separate server, experiencing a power failure. When you select this option, you are presented with a selection panel as shown in Figure 3-55, and this offers similar options to those available with the Power Down System command — Delayed and Immediate power down: Delayed shut down — Shuts the system down in a predetermined time.
Partition properties - General tab This provides basic information about the operating system running in the partition and its current state, as we can see in Figure 3-56. You can see the partition ID and profile name. You can also see that the partition is running i5/OS and the version — in our example, the OS version is depicted as a series of zeroes (0s). You can display the general properties by selecting the partition and right clicking the partition.
Partition properties - Hardware tab

Under the Hardware tab, you can drill down to slot-level detail by clicking the “twistie” on the left hand side of the pane. This is similar to the i5/OS options to display hardware resources, but it does not go down to device-level detail; for example, no disks are displayed. You must view the disks from the OS running on the system. See Figure 3-57.

Figure 3-57 Partition properties tab - hardware IO view
Hardware IO - Advanced options button By clicking the Advanced Options button, you will see the panel shown in Figure 3-58. The IO pool options will allow the devices selected to participate in a switchable configuration.
Partition properties - Processors and Memory The panel displayed in Figure 3-59 shows a similar panel to the System properties, and shows the resources associated with the partition being viewed. In the panel shown, we have created this partition as shared. The shared partition indicator is showing “enabled”. The resources can only be viewed, they cannot be changed or deleted. You would need to use the DLPAR functions to add or reduce the processing or memory.
Partition properties - virtual devices There are three tabs within this panel, virtual ethernet, virtual serial, and virtual SCSI. The panel shown in Figure 3-60 is the general panel. Any adapter type configured will be shown. Figure 3-60 Virtual Serial display If you select any of the radio buttons (ethernet, serial, or SCSI) you can add or remove adapters.
In Figure 3-61 you can also see that we have selected the first serial adapter and displayed its properties. You can change the access/usage of the virtual port. Figure 3-61 Virtual Adapter properties Chapter 3.
In Figure 3-62 we have added an ethernet adapter by selecting the ethernet radio button and then clicking the Create button. Figure 3-62 Add virtual ethernet adapter Once you have clicked the OK button, you will be returned to the Virtual Adapters main panel, (Figure 3-63) and the new virtual ethernet adapter will have been added.
The panel shown in Figure 3-64 is how VLAN was configured on 8xx servers. Simply clicking the check boxes with the same number on two or more partitions creates a virtual LAN connection between the partitions.

Figure 3-64 OS/400 Virtual Lan configuration on 8xx servers

If the LPAR migration tool is used to move partition configuration information from an 8xx server to a new System i5 system, the VLAN information will be migrated along with the partition information.
Partition properties - Settings The Settings panel is where the IPL source, keylock, and automatic reboot functions can be set. This is the functionality that used to be performed by the primary partition on the earlier implementations of partitioning. This panel reinforces the fact that there is no longer a concept of a primary partition. The HMC performs these functions on the new System i5 system. See Figure 3-65.
Figure 3-66 Partition properties miscellaneous tab The HSL opticonnect and virtual opticonnect are the same as implemented in previous versions of partitioning; the power controlling partition is a new concept. The power controlling partition is used in the context of i5/OS partitions hosting Linux. An i5/OS partition can host the console and the disk for the Linux partition and hence needs to control the power for the hosted Linux partition.
Customize date and time You use this option to change the date and time and the time zone. View console events This is a log that allows you to view recent HMC activity. Each event is time stamped and the events that are logged include: When a partition is activated. When a system is powered on. When a user logs on. When a partition is shut down. Customize network settings Like the date and time option, the network settings would have been set up when the guided setup wizard was run.
Enable or disable remote command execution This option allows you to enable or disable the ability to run remote commands to the HMC from a remote client using the SSH protocol. PuTTY would be an example of a remote client using the SSH protocol. The commands that can be executed are restricted. Enable or disable remote virtual terminal This option allows you to enable or disable the ability to run a remote virtual terminal on the HMC.
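Once remote command execution is enabled, the HMC command-line interface can be scripted over SSH. The sketch below assumes the POWER5 HMC commands lssyscfg and chhwres (the CLI counterparts of the GUI tasks described earlier); the host, managed system, and partition names are placeholders, and SSH key setup or a password prompt is assumed.

```python
# Hedged sketch: driving the HMC over SSH after remote command
# execution has been enabled. All names are placeholders.
import subprocess

def hmc(command):
    """Run one HMC CLI command over SSH and return its output."""
    result = subprocess.run(
        ["ssh", "hscroot@hmc.example.com", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# List partitions and their states on managed system SYS1, then
# dynamically add 512 MB of memory to partition LPAR1 (the
# command-line counterpart of the DLPAR memory task in 3.5.3).
print(hmc("lssyscfg -r lpar -m SYS1 -F name,state"))
print(hmc("chhwres -r mem -m SYS1 -o a -p LPAR1 -q 512"))
```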
3.6.2 Inventory Scout services The Inventory Scout is a tool that surveys managed systems for hardware and software information. Inventory Scout provides an automatic configuration mechanism and eliminates the need for you to manually reconfigure Inventory Scout Services. Depending on the levels of your HMC and partition software, you might be required to manually configure partitions that you create in order to perform Inventory Scout tasks.
Repair serviceable event This option allows the user or the service representative to view a serviceable event and then initiate a repair against that service event. In the following paragraphs, we give an example of the steps taken to view an event and initiate a repair. Click Service Focal Point → Repair Serviceable Event and select the managed system (in Figure 3-67 the system name is called “Unknown”).
The window’s lower table displays all of the errors associated with the selected serviceable event. The information is shown in the following sequence: Failing device system name Failing device machine type/model/serial Error class Descriptive error text Details To initiate a repair on the serviceable event, highlight the serviceable event, click Selected → Repair.
Figure 3-70 shows the options that are available for a serviceable event. Highlight a service event, and click Selected. Figure 3-70 Options available to manage a serviceable event The view option and repair option have been covered in the section on repair serviceable events. The three extra options are: Call home. You can force the HMC to report the serviceable event to IBM. Manage problem data.
Manage Dumps. Use this option to show any dumps associated with a service event. You can save them to DVD, send them to IBM or delete them. Edit MTMS. Use this option to modify the MTMS or the configuration ID of a selected enclosure. System attention LED. Use this option to look at the status of a system or partition system attention LED. You can select to turn off the attention LED. Identify LED processing.
Chapter 4. HMC Guided Setup

This chapter provides an overview of the Guided Setup function included in the Information Center component of the HMC. This chapter is divided into the following sections:
- Guided Setup planning and checklist
- User IDs and authority
- HMC networking setup
- HMC service setup

© Copyright IBM Corp. 2005, 2006. All rights reserved.
4.1 HMC Guided Setup The HMC Guided Setup wizard guides you through the main tasks needed to help you set up and tailor the many functions of the HMC. The wizard will launch automatically the first time the HMC is started. Using this wizard is the simplest way to configure your HMC. Before using the Guided Setup wizard, you must understand the main concepts of HMC and decide which functions are relevant to your environment.
4.1.2 Using the Guided Setup wizard This section walks you through an example of setting up an HMC via the Guided Setup wizard. Ensure that you have completed the HMC Guided Setup wizard checklist before continuing with the next section. Important: If you cancel or exit the Guided Setup wizard at any time before you click the Finish button, all your inputs will be lost. Once you have been through the Guided Setup wizard, you cannot “rerun” this function.
Figure 4-2 shows the HMC with the first panel of the Guided Setup wizard in French. Figure 4-2 HMC Guided Setup with French locale You can flip-flop between language locales at anytime, but you must reboot the HMC. If you use this function and cannot read the language, you must just remember where the change locale option is located. Note: You can change the locale to Japanese, but you cannot enter information in DBCS. In some languages, not all words are translated.
Figure 4-3 HMC Guided Setup 4. The Guided Setup Wizard Welcome page appears (Figure 4-4). Click Next to continue with the wizard. Figure 4-4 HMC Guided setup welcome page Chapter 4.
5. On the Guided Setup wizard - Change HMC Date and Time panel (Figure 4-5 on page 110), enter the correct date/time and time zone for your environment. This is typically the time zone of the server, assuming the HMC is local to the machine. For remote machines, you must decide which is the correct time zone for your environment. Figure 4-5 HMC Guided setup - Time and date setting Click Next to continue with the Guided Setup wizard.
6. The Guided Setup Wizard - Change hscroot Password panel is now displayed as shown in Figure 4-6. Enter the current hscroot password (normally this should be the default password of abc123) and then the new password you would like. The hscroot user ID is the HMC equivalent of the i5/OS QSECOFR profile; this user ID has full rights to all functions available on the HMC.
7. The Change root Password panel is now displayed as shown in Figure 4-7. The root user ID is used by the authorized service provider to perform maintenance procedures and cannot be used to directly log in to the HMC. Enter the current root password (normally this should be the default password of passw0rd, where 0 is the number zero rather than the letter o). Enter the new password you would like for the root user ID.
8. The Create additional HMC users panel is now shown (see Figure 4-8). You can now optionally create new HMC users at this stage. In our example, we decided to create a new hscoper user ID with a role of hmcoperator to allow our operations staff to access the HMC and work with partitions. See 7.3, “HMC User Management” on page 238 for further information on creating users and their roles. You can also skip this section and create users manually later on if you prefer.
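If you prefer scripting, an equivalent user can also be created later from the HMC command line once remote command execution is enabled (see Chapter 3). This is a hedged sketch; it assumes the HMC's mkhmcusr command, and the host name and user details are placeholders:

```python
# Hedged sketch: creating the same operator user from the HMC CLI
# over SSH. mkhmcusr prompts for the new user's password.
import subprocess

subprocess.run(
    ["ssh", "hscroot@hmc.example.com",
     "mkhmcusr -u hscoper -a hmcoperator -d 'Operations staff'"],
    check=True,
)
```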
10. This completes the first part of the Guided Setup wizard. The Guided Setup - The Next Steps panel is displayed (see Figure 4-10).

Figure 4-10 Guided Setup wizard - The next steps

The next section will configure the HMC network settings. You will need to have planned your network environment for HMC before continuing with these next tasks. You should use the values entered in the HMC Guided Setup wizard checklist. Click Next to continue with the Guided Setup wizard. 11.
12. The Guided Setup - Configure DNS panel is now shown (see Figure 4-12). A DNS server is a distributed database for managing host names and their IP addresses. Adding a DNS server IP address to our HMC allows us to find other hosts in our open network by their host names rather than by their IP addresses. Enter the IP address of your DNS server or servers in the DNS server address field and click Add to register the IP address.
13.Now the Guided Setup - Specify Domain Suffixes panel is shown (see Figure 4-13). Enter a domain suffix in the Domain suffix field and click Add to register your entry. You can enter multiple domain suffixes for your organization if you have them. The order that the addresses are entered will be the order in which they are searched when trying to map the host name to a fully qualified host name.
14. The Guided Setup Wizard - Configure Network Setting panel is then displayed (Figure 4-14). In our example we see two LAN adapters available (eth0 and eth1); however, you may only see one adapter in your HMC system. We will configure eth0 for a private network and then will return to this panel to configure eth1 for an open network. The private network will be used to connect to our managed systems and other HMCs. The second LAN adapter will be used to connect to our existing open network.
16.The Guided Setup Wizard - Configure eth0 panel is now shown (Figure 4-16). As previously mentioned, we are setting the first LAN adapter to be our link to our private network of HMCs and managed systems. Figure 4-16 Guided Setup wizard - Configure eth0 We select the Private network radio button and click Next to continue. 17.Now the Guided Setup Wizard - Configure eth0 panel appears (Figure 4-17). As this is the first HMC on our private network, we have to define the HMC as a DHCP server.
The HMC provides DHCP services to all clients in a private network. These clients will be our managed systems and other HMCs. You can configure the HMC to select one of several different IP address ranges to use for this DHCP service, so that the addresses provided to the managed systems do not conflict with addresses used on other networks to which the HMC is connected. We have a choice of standard nonroutable IP address ranges that will be assigned to its clients.
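The ranges offered are standard private (nonroutable) blocks. As a quick illustration, the following sketch checks example candidates with Python's ipaddress module; the specific ranges listed here are examples, not the HMC's exact list:

```python
# Illustrative check that candidate DHCP ranges are nonroutable
# (private) blocks. The candidates are examples only.
import ipaddress

for candidate in ("192.168.0.0/24", "10.0.0.0/24", "172.16.0.0/16"):
    net = ipaddress.ip_network(candidate)
    print(candidate, "private" if net.is_private else "routable")
```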
18. The Guided Setup Wizard - Configure eth0 panel is now shown (Figure 4-18). We can specify that one of our LAN adapters can act as a gateway device to our open network (if required).

Figure 4-18 Guided Setup wizard - Default gateway

In our configuration, LAN adapter eth1 will be our open network device, so we will set that card to be our default gateway device later on. In our example eth0 is the private network LAN adapter, so we can just click Next to continue.
19.The Guided Setup Wizard - Configure Network Settings panel is now displayed (Figure 4-19). This completes the network configuration of the private network interface eth0. We can now proceed with the configuration of the second network card (eth1) for our open network. Figure 4-19 Guided Setup wizard - Configure Network Settings Select the Yes radio button if it is not already flagged. The second ethernet card should be highlighted in grey (as in Figure 4-19).
21.The Guided Setup Wizard - Configure eth1 panel is now shown (Figure 4-21 on page 122). This time we select the Open network radio button and click Next to continue. Figure 4-21 Guided Setup - Configure eth1 open network 22.The Guided Setup Wizard - Configure eth1 panel is now shown (Figure 4-22). You can configure the eth1 interface to use a fixed IP address or obtain one automatically from your open network DHCP server.
Figure 4-23 Guided Setup - eth1 gateway selection In our example we enter our gateway address of 9.5.6.1 and click Next to continue. 24.The Guided Setup Wizard - Configure HMC Firewall for eth1 panel is now displayed (Figure 4-24). Usually there is a firewall that controls outside access to your company's network. As the HMC is connected to the open network, we can also restrict remote access to this device by using the HMC's built-in firewall.
25.The Guided Setup Wizard - Configure HMC firewall panel appears next (see Figure 4-25 on page 124). In the top pane (Current applications), all the applications available on the HMC are listed. In the bottom pane (Applications allowed through firewall) are all the applications available to the open network through the HMC firewall. You can remove applications completely from the firewall by selecting the relevant application in the bottom pane and clicking the Remove button.
Figure 4-26 Guided Setup - eth1 firewall by IP address In our example we enter the remote IP address 9.5.6.124 and mask 255.255.255.0 and click Add and then click OK. When we return to the HMC firewall configuration panel, we click Next to continue with the Guided Setup wizard. 26.The Guided Setup Wizard - Configure Network Settings panel is shown (Figure 4-27). If you have more network adapters available you can configure them now by selecting the relevant adapter and selecting the Yes radio button.
27.The Guided Setup - The Next Steps display is shown (Figure 4-28). This completes the network configuration section of the Guided Setup wizard. We now continue with the next part of the wizard, which enables the service and support functions within the HMC. Figure 4-28 Guided Setup - End of network configuration Click Next to continue with the HMC Guided Setup.
28.The Guided Setup Wizard - Specify Contact Information panel is presented (see Figure 4-29). This is the first of three panels that contain the contact details for your company (this information will probably be similar to the WRKCNTINF information stored in OS/400 if you have previous iSeries systems). The information entered here is used by IBM when dealing with problems reported electronically (calling home), as well as with software updates. You should enter valid contact information for your own location.
30.The last panel for the Contact Information is now shown (Figure 4-31). You should enter the location details of this HMC here. If the location address is the same as the contact address used in the previous step, then click Use the administrator mailing address. Otherwise, fill in the correct HMC location address details.
31.The Guided Setup Wizard - Configure Connectivity to Your Service Provider panel is now displayed (Figure 4-32). You can select the communications method that you wish to use to connect to IBM (call home) for service and support related functions. There are four service applications available on the HMC: – Electronic Service Agent™ - Monitors your managed systems for problems and, if enabled, reports them electronically to IBM.
32.The Agreement for Service Programs panel is now shown (see Figure 4-33). Read the agreement details carefully and click Accept or Decline. Figure 4-33 Guided Setup - Agreement for Service Programs In our example configuration we click Accept to accept the terms and conditions of the IBM Agreement for Service Programs. We then return to the previous panel. Click Next to continue with the Guided Setup Wizard. 33.
34.The Add Phone Number window is launched (see Figure 4-35). Use the drop down menus to select your Country/region and then your State/province. Figure 4-35 Guided Setup wizard - Add Phone Number For our example we select United States (of America) for our Country/region and Minnesota for our State/province. You should select the relevant values for your location. After you have selected your Country/region and State/province, a list of available IBM support service numbers is displayed.
35.We return to the Guided Setup Wizard - Configure Dial-up from the Local HMC panel (see Figure 4-37). You can add additional phone numbers by repeating the same procedure again and selecting a different number. Figure 4-37 Guided Setup wizard - Dial-up configuration This finishes our configuration for the Dial-up connection for the HMC. We click Next to continue. 36.The Guided Setup wizard - Use VPN using an Existing Internet Connection panel is displayed (see Figure 4-38).
37.The Guided Setup Wizard - Configure Connectivity using a Pass-Through System panel is shown (Figure 4-39). The HMC can use another system in your network which already has a VPN or dial-up connection to IBM service and support. This system could be an i5/OS V5R3 partition or another HMC. Figure 4-39 Guided Setup wizard - Pass-Through connectivity part 1 Click the Add button and enter the IP address or host name of your pass-through system.
38.The Guided Setup Wizard - Authorize Users for Electronic Service Agent panel is now displayed (see Figure 4-41). The information collected and sent to IBM by the HMC can be seen on the IBM Electronic Service Agent Web site: http://www.ibm.com/support/electronic To access this data on the Web, you must have a registered IBM ID and have authorized that ID through the HMC. You can register IBM IDs via the Web site: https://www.ibm.
Figure 4-42 Guided Setup Wizard - Notification of Problem Events In our example, we enter the SMTP server IP address/port and our administrator's e-mail address. We will only alert our administrator when a call-home problem event is generated. Click Next to continue with the Guided Setup wizard. 40.The Guided Setup wizard - Summary panel is displayed (Figure 4-43). You can see all the changes that the Guided Setup wizard is about to make. Figure 4-43 Guided Setup wizard - Summary panel - top
Important: At this stage nothing has actually been changed on the HMC. If you press the Cancel button, all changes made through the Guided Setup will be lost. In our example, we click the Finish button to apply all our HMC changes. 41.The Guided Setup Wizard - Status panel is displayed (Figure 4-44). As each task completes, its status is automatically updated.
If you have configured any network settings during the Guided Setup Wizard, then you will probably receive a message asking you whether you wish to reboot the HMC (see Figure 4-46). Figure 4-46 Guided Setup wizard - Reboot message In our example, we click Yes to reboot the HMC and activate our new network settings. This completes the HMC Guided Setup Wizard.
Chapter 5. Partition creation using the HMC
In this chapter we discuss the following topics:
System and partition profiles
Creating an i5/OS logical partition using the HMC
Creating additional partition profiles for an existing logical partition
Changing the default profile for a partition
5.1 System and partition profiles This section discusses the concept of system and partition profiles and how they are used. 5.1.1 System profiles A system profile is a collection of one or more partition profiles. System profiles can be used to specify which partition profiles are activated at the same time. 5.1.2 Partition profiles A partition profile represents a particular configuration for a logical partition. A partition profile contains information about the resources assigned to the partition.
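Both kinds of profile can also be listed from the HMC command line interface. The following is a sketch only, assuming the lssyscfg syntax at this HMC level; SYSTEMNAME is a placeholder for your own managed system name:

  # List the system profiles defined on a managed system
  lssyscfg -r sysprof -m SYSTEMNAME
  # List the partition profiles defined on a managed system
  lssyscfg -r prof -m SYSTEMNAME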
5.2 Creating an i5/OS logical partition through the HMC Creating a partition is a multiple-step process. Partition creation can be accomplished through either the HMC GUI (graphical user interface) or CLI (command line interface). We will focus on using the HMC GUI. Important: Typically all partition creation and management is performed through the Hardware Management Console (HMC). The CLI is an advanced option and still requires an HMC.
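For reference, the CLI counterpart of the wizard described in this section is the mksyscfg command. The following is a minimal sketch only, assuming the attribute names of the HMC CLI at this level; SYSTEMNAME and the attribute values are placeholders, and a real invocation also needs memory and processor attributes:

  # Create an i5/OS partition and an initial partition profile in one step
  mksyscfg -r lpar -m SYSTEMNAME -i "name=NEWLPAR,profile_name=default,lpar_env=os400"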
5.2.2 Starting the create partition wizard If initial setup and configuration of the HMC is required, refer to Chapter 3, “HMC overview and planning” on page 45. Attention: An HMC can manage multiple servers. Before creating a logical partition, make sure that the correct server is selected. 1. If you are not already connected to the HMC, sign in with an HMC user profile that has either System Administrator or Advanced Operator authority. The capabilities of various HMC roles are discussed in 7.
3. To start the partition creation process, right-click Partitions, and from the pop up menu shown in Figure 5-2, select:
Create → Logical Partition
Alternatively, with server management selected in the Navigation Area, the logical partition creation wizard can be accessed from the menu bar as follows:
Selected → Create → Logical Partition
Either way, once selected, the Create Logical Partition wizard starts. Figure 5-2 Invoking the create partition wizard
It may take a number of seconds before the wizard pane opens (Figure 5-3), since the HMC has a lot of information gathering to perform. Figure 5-3 Partition wizard first pane 5.2.
Figure 5-4 shows a sample partition name and partition ID for an i5/OS partition. Once the fields are filled in, click Next > to proceed to the next panel. Figure 5-4 Specifying the partition name, partition ID, and partition type
5.2.4 Workload management group If the partition will participate in a workload management group, select Yes and specify the GroupID (Figure 5-5). Figure 5-5 Workload management group Once complete, click Next > to proceed to the next panel.
5.2.5 Partition profile name Each logical partition requires at least one partition profile. The profile name can contain up to 31 characters. The profile name should be descriptive — month end processing, for example. Once complete, click Next > to proceed to the next panel (Figure 5-6). Figure 5-6 Specifying the partition profile name
5.2.6 Partition memory Figure 5-7 shows the initial partition memory panel that is displayed. Memory can be allocated in a combination of megabytes (MB) and gigabytes (GB). Megabyte allocations are restricted to multiples of the logical memory block size. Currently, the value for the logical memory block size is 16 MB. Notice that the default MB values do not automatically disappear when you add values into the GB window. These values must be removed manually.
5.2.7 Partition processors The next resource to configure for a partition profile is processing capacity. The choices are Dedicated processors Capped shared processors Uncapped shared processors Figure 5-8 shows the processor selection panel. The two main choices are Dedicated and Shared. Dedicated processors are intended for use solely by the partition to which they are assigned. Shared processors allow for fractional processors to be assigned.
For any dedicated processor partition profile, three values are required (Figure 5-9).
Desired processors: This is the requested number of processors for the partition. On profile activation, the partition receives a number of processors between the minimum and desired values, depending on what is available.
Minimum processors: This is the number of processors required for the partition. The profile will fail to activate if the minimum number of processors is not met.
Figure 5-10 shows a sample completed dedicated processor configuration. This partition profile would require at least 1 dedicated processor in order to start. Depending on whether or not processor resources are overcommitted, this partition profile will be allocated between 1 and 4 processors when activated. As configured, this profile will not allow for more than 7 processors to be in the partition.
Capped shared processor partition To use shared processors for a partition profile, select the Shared radio button as shown in Figure 5-11 and click Next >. Figure 5-11 Choosing shared processors for a partition profile There are several pieces of information required for a shared processor partition. The first three of these are: Desired processing units: This is the requested amount of processing units for the partition.
Once these are filled in, click the Advanced button to bring up the sharing mode dialog (Figure 5-12). Figure 5-12 Initial shared processor panel
In the sharing mode properties, click the Capped radio button (Figure 5-13). The desired, minimum, and maximum numbers of virtual processors need to be specified. At a minimum, use the values for desired, minimum, and maximum processor units rounded up to the next whole number. For example, for 1.25 processor units, use at least 2 for the number of virtual processors. Once complete, click OK to close the Sharing Mode Properties dialog, and then click Next > to proceed to the next panel.
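The same capped configuration can be expressed with the shared processing attributes of the HMC CLI. This is a sketch only, under the assumption that these attribute names (proc_mode, sharing_mode, desired_proc_units, desired_procs) apply at your HMC level; names in capitals are placeholders:

  # Set an existing profile to capped shared mode:
  # 1.25 desired processing units served by 2 virtual processors
  chsyscfg -r prof -m SYSTEMNAME -i "name=PROFILE,lpar_name=LPARNAME,proc_mode=shared,sharing_mode=cap,desired_proc_units=1.25,desired_procs=2"

Note how desired_procs is the desired processing units rounded up to the next whole number, as described above.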
Figure 5-14 Sharing mode properties for an uncapped shared processor partition 5.2.8 Interactive (5250 OLTP) capacity Depending on the particular system model, assigning interactive capacity for the partition profile may or may not be required. Zero interactive and enterprise edition systems do not require the interactive feature to be assigned to the profile. The 5250 OLTP screen will not display on models other than the i520. 5.2.
Once selected, add the resources to the profile by clicking either Add as Required or Add as Desired. Once complete, click Next > to proceed to the next panel. Figure 5-15 Allocating physical I/O to a partition profile Location codes Hardware may or may not be identified in the HMC by resource type and model. This information is provided by the converged Hypervisor only if the operating system has provided it. In the absence of resource types and models, location codes are used.
Logical path location codes An example of a logical path location code is as follows: U970305010ABCDE-P3-C31-T2-L23 The first portion (through the T prefix) of the logical path location code is the physical location code for the resource that communicates with the desired resource. The string that follows the T prefix identifies the particular resource. Note: It is possible for a device to have more than one logical path location code.
Creating Virtual IO Adapters The following virtual IO adapters can be created:
Virtual Ethernet
Virtual Serial
Virtual SCSI
All virtual IO adapters reside on a single virtual system bus (Figure 5-17). The maximum number of virtual adapters is a user editable field that specifies how many virtual IO adapters can connect to the virtual system bus. Clicking Next > will advance to the next panel. In order to create a virtual IO adapter, select the adapter type radio button and click the Create button.
Migrating an existing VLAN or virtual ethernet configuration requires special consideration. Refer to the Information Center article, “Convert your preexisting virtual Ethernet configuration”, at: http://publib.boulder.ibm.com/eserver/ Figure 5-18 Creating a virtual ethernet resource Virtual Serial Virtual serial (see Figure 5-19) allows for the creation of an internal point-to-point connection. This connection is between the partition and either the HMC or another partition.
To create a virtual serial server adapter, select Server for the adapter type (Figure 5-20). The connection information also needs to be specified. Connection information determines who can connect to the resource. Figure 5-20 Creating a virtual serial server resource Virtual SCSI Virtual SCSI (see Figure 5-21) allows for a partition to use storage resources that physically reside in another partition. Storage resources include disk, tape, and optical.
To create a virtual SCSI server adapter, select Server for the adapter type (see Figure 5-22). The connection information also needs to be specified. Connection information determines who can connect to the resource. For a virtual disk that contains data that only needs to be accessed in a read only fashion, allowing Any remote partition and slot can connect would be okay. For write access, specifying Only selected remote partition and slot can connect may be a better choice.
Load source The load source is used for IPLs from either the A or B side of the Licensed Internal Code. Selecting the load source IOP resource specifies what is to be used for regular IPLs and where to place Licensed Internal Code during an install. See Figure 5-23.
Alternate IPL (restart) device The Alternate IPL device (Figure 5-24) is used for D mode IPLs when Licensed Internal Code needs to be installed or restored. Figure 5-24 Selecting the Alternate IPL resource
Operation console device Selecting the operations console device resource is optional; this was previously called Electronic Customer Support (ECS). See Figure 5-25. Figure 5-25 Selecting the Operations console device Some support functions, such as RSSF (Remote Service and Support Facility), require that an ECS resource is selected. Additional information regarding RSSF can be found in the registered software knowledge base at: https://techsupport.services.ibm.com/as400.
Console The console provides a display (Figure 5-26) to interact with the partition. Certain functions, such as full system saves and dedicated service tools (DSTs), need to be initiated at or from the console. Chapter 2, “i5/OS consoles under System i5” on page 31 has more information regarding the console options that are available on System i5 hardware.
Specifying console resource If some device other than the HMC is to provide console function, select the radio button labeled No, I want to specify a console device and click Next > as shown in Figure 5-27.
As when selecting other partition resources, you are now presented with a dialog similar to the one in Figure 5-28. Figure 5-28 Selecting the console resource
Alternate console An alternate console can provide console functions if the primary console is not functioning or not available. Some functions, such as operating system install, cannot be performed from an alternate console. Selecting an alternate console for a partition profile is optional. See Figure 5-29.
5.2.12 Opticonnect If the partition profile will be using either Opticonnect or HSL Opticonnect, this can be specified by selecting the appropriate check box as shown in Figure 5-30. Figure 5-30 Specifying the opticonnect participation
5.2.13 Specifying power control partitions For a hosted guest partition, the HMC by default gets power control, the ability to power the partition on and off. Specifying a power control partition allows another partition to have the same capability. Click Add to add another partition to the power control list for the partition, and click Next > to advance to the next panel (Figure 5-31). Figure 5-31 Specifying power control partitions 5.2.
Figure 5-32 Miscellaneous partition profile settings 5.2.15 Review profile summary Before the partition is created, the profile is displayed for final review as shown in Figure 5-33. If no changes are required, click Finish to have the partition profile created. Otherwise, use the < Back button to find the desired panel and make the required changes. Figure 5-33 Partition profile is displayed for review
5.2.16 New partition profile has been created Figure 5-34 shows the partition and partition profile. This partition is brand new and requires that an operating system be installed before the partition can be functional.
5.3 Creating another profile for an existing partition Figure 5-35 shows the flow for creating a partition profile for an existing logical partition. This is similar to creating a brand new logical partition. Select the partition and right-click. From the pop up menu, select:
Create → Profile
The Create Partition Profile wizard loads and guides you through the remainder of the process. Figure 5-35 Starting the create profile process for an existing logical partition
Each partition profile needs to have a name. After that, the creation of another partition profile follows the same flow as presented earlier (Figure 5-36). Figure 5-36 Specifying a name for the partition profile Figure 5-37 shows a logical partition that has more than one partition profile. The display indicates which profile for the logical partition is the default profile. The default profile specifies which profile is automatically started when a partition is activated.
5.4 Changing the default profile for a partition Changing the default profile is a relatively straightforward operation. From the Servers and Partitions: Server Management pane, right-click the desired partition. From the pop up menu, select Change Default Profile (Figure 5-38). Figure 5-38 Changing the default profile for a partition Figure 5-39 shows a dialog box that displays the list of profiles that are associated with the selected partition.
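The default profile can also be changed from the HMC command line. A sketch, assuming the default_profile attribute is supported at this HMC level; names in capitals are placeholders:

  # Make PROFILE the default partition profile for partition LPARNAME
  chsyscfg -r lpar -m SYSTEMNAME -i "name=LPARNAME,default_profile=PROFILE"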
Chapter 6. Working with the HMC
In addition to providing an interface for creating logical partitions, the Hardware Management Console (HMC) is used to manage logical partitions once they are created.
6.1 Accessing LPAR functions Attention: An HMC can manage multiple servers. Before performing an operation on a logical partition, make sure that the correct server is selected. On the HMC, LPAR functions for a logical partition can be accessed in two main ways. Throughout this chapter, the menu will be referred to as the LPAR functions menu.
Figure 6-2 Accessing LPAR functions via the Selected menu bar item 6.2 Viewing partition properties Some of the information concerning a partition is common to the properties of both the partition and the partition profile. In other cases, specific information can only be found in one of the two places. 6.2.1 Partition properties To access the properties for a partition, access the LPAR functions menu and select Properties. Accessing the LPAR functions menu is discussed in 6.
Table 6-1 Partition states and their meaning
Off: The partition is powered off.
Power On: The partition is in the process of powering on.
On: The partition’s operating system is running.
Power Off: The partition is in the process of powering off.
Failed: The partition has encountered an error in the early IPL path.
Unit Attention: The partition encountered a run time failure. Check the reference code for the partition and take the appropriate action.
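These states can also be queried from the HMC command line, which is convenient when checking several partitions at once. A sketch, assuming the lssyscfg syntax at this HMC level; SYSTEMNAME is a placeholder:

  # List each partition on the managed system together with its current state
  lssyscfg -r lpar -m SYSTEMNAME -F name,state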
Hardware The Hardware tab shows which hardware resources are currently in use by the partition. There are two sub-tabs on this dialog. One lists the I/O resources, and the other lists both processors and memory. I/O The I/O sub-tab (Figure 6-4) shows which I/O resources are currently assigned to the partition. The hierarchical view can be expanded to display which buses within a given unit are available to a partition.
Allow shared processor utilization authority indicates if the partition has the authority to view utilization information of the entire shared processor pool. Without this authority, the partition can only obtain shared processor pool information about itself. Under Memory, the minimum, maximum, and current values are displayed.
Settings The Settings tab (Figure 6-18 on page 192) shows information about partition boot, service support, and tagged I/O. For partition IPL source and mode settings, refer to 6.3.1, “Changing IPL side and mode” on page 191. Automatically start with managed system specifies whether a partition performs an IPL when the entire managed system is IPL’d. The default partition profile is started in this case. If unchecked, the partition would need to be manually started after the managed system is IPL’d.
6.2.2 Partition profile properties This section covers the properties available for partitions through the tabs at the top of each properties pane. General As with the General tab for partition properties in Figure 6-3 on page 180, the partition profile’s General tab (Figure 6-8) displays some basic information about a partition profile. The System Name is the managed system that contains this partition and partition profile.
Figure 6-9 Partition profile Memory tab Processors The Processors tab displays information about the profile’s processor configuration. There are two distinct views, depending on processing mode: Dedicated (Figure 6-10) and Shared (Figure 6-11). Depending on the processing mode, either Total managed system processors or Total managed system processing units reflects the total processing capacity that the physical system can provide.
Figure 6-10 Partition profile Processors tab for dedicated processors The processor properties of a shared processor partition profile are somewhat different. As with dedicated processors, changes to the profile’s processing configuration can be made here. These are not dynamic LPAR (DLPAR) changes and will take effect when this profile is next restarted. For DLPAR processor changes, refer to the processor portion of 6.4, “Performing dynamic LPAR (DLPAR) functions” on page 195.
Physical I/O The Physical I/O tab (Figure 6-12) identifies what physical I/O is available on the entire system and what is assigned to the profile. Changes to the profile’s allocation of physical I/O adapters can be performed here. These are not dynamic LPAR (DLPAR) changes and will take effect only when this profile is next restarted. For DLPAR physical adapter changes, refer to the physical adapter portion of 6.4, “Performing dynamic LPAR (DLPAR) functions” on page 195.
Tagged I/O resources The Tagged I/O tab (Figure 6-13) identifies which resources are selected to perform partition critical functions. Some of these, like load source and console, are required for the profile to start. Others, such as alternate restart device, alternate console, and operations console, are only required in particular circumstances. Any of these resources can be changed by clicking Select and choosing a new resource. These changes take effect the next time the profile is restarted.
Virtual I/O resources For details on the Virtual I/O tab, see 6.4.4, “Virtual IO adapters” on page 197. Opticonnect The Opticonnect tab displays the profile’s opticonnect settings (Figure 6-14). Changing the virtual opticonnect setting takes effect on the next restart of the profile. Figure 6-14 Profile opticonnect settings Power Controlling The Power Controlling tab (Figure 6-15) shows if this partition profile has power control for a hosted guest partition.
Figure 6-15 Partition profile Power Controlling tab Settings The Settings tab (Figure 6-16) of the partition profile dialog has the following options for a partition profile: Enable Service connection monitoring Automatically start when the managed system is powered on Figure 6-16 Partition profile Settings tab 190 Logical Partitions on System i5
6.3 Starting and stopping partitions One of the more common partition tasks involves starting and powering down partitions. In this section we discuss the following tasks:
Changing the IPL side and mode for a logical partition
Manually starting a powered off partition
Restarting an operational partition
Powering down a partition
6.3.1 Changing IPL side and mode In order to change the IPL side (A, B, or D) and mode (manual or normal) for a logical partition, perform the following steps: 1.
Figure 6-18 Partition settings 4. In the Boot section of the dialog, the IPL Source and Keylock position can be changed from their respective pull down menus. 5. Click OK once the desired changes have been made. 6.3.2 Starting a powered off partition IPLing a partition involves activating a partition profile: 1. If needed, set the desired IPL side and mode as referenced in 6.3.1, “Changing IPL side and mode” on page 191. 2. From the LPAR functions menu, select Activate.
Note: Activating a second profile for an active partition is not allowed and results in an error. To switch profiles for an active partition, the first profile needs to be deactivated (powered off) before the second profile can be activated (powered on).
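Activation can also be initiated from the HMC command line. The following is a sketch, assuming the chsysstate syntax at this HMC level; names in capitals are placeholders:

  # Activate partition LPARNAME using partition profile PROFILE
  chsysstate -r lpar -m SYSTEMNAME -o on -n LPARNAME -f PROFILE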
6.3.4 Stopping (powering down) a running partition The preferred method of powering down an active partition is to issue the following command:
PWRDWNSYS OPTION(*CNTRLD) DELAY(user specified delay) RESTART(*NO)
The default delay time for a controlled power down is 3600 seconds, or one hour. Depending on the particular circumstances, this value may need to be changed. In cases where a command line is not available due to a partition loop or wait state, the partition can be brought down as follows.
4. After selecting the desired shutdown type, click OK to start the shutdown process. See Figure 6-21. Figure 6-21 Partition shutdown options Attention: Both delayed and immediate shutdown types are considered abnormal system ends, and longer IPL times may result. Damaged objects are also possible. An immediate shutdown is more likely to result in an abnormal IPL.
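For scripted environments, the HMC command line offers an equivalent of these shutdown options. This is a sketch only; treat the option names as assumptions to verify against your HMC level:

  # Delayed shutdown of partition LPARNAME from the HMC
  chsysstate -r lpar -m SYSTEMNAME -o shutdown -n LPARNAME
  # Immediate shutdown of the same partition
  chsysstate -r lpar -m SYSTEMNAME -o shutdown -n LPARNAME --immed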
6.4 Performing dynamic LPAR (DLPAR) functions
In order to add, move, and remove physical adapters using DLPAR, make the following selections:
Add: Dynamic Logical Partitioning → Physical Adapter Resources → Add
Remove: Dynamic Logical Partitioning → Physical Adapter Resources → Remove
Move: Dynamic Logical Partitioning → Physical Adapter Resources → Move
6.4.2 Processors In this section we discuss dynamic LPAR (DLPAR) operations on processors. Figure 6-23 shows how to access the processor DLPAR options.
6.4.3 Memory In this section we discuss dynamic LPAR (DLPAR) operations on memory. Figure 6-24 shows how to access the memory DLPAR options. Figure 6-24 DLPAR functions for Memory Resources In order to add, move, and remove memory resources using DLPAR, make the following selections:
Add: Dynamic Logical Partitioning → Memory Resources → Add
Remove: Dynamic Logical Partitioning → Memory Resources → Remove
Move: Dynamic Logical Partitioning → Memory Resources → Move
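The same DLPAR operations are exposed through the chhwres command on the HMC command line. A sketch, assuming the chhwres syntax at this HMC level; names in capitals are placeholders, and memory quantities are given in megabytes:

  # Dynamically add 512 MB of memory to partition LPARNAME
  chhwres -r mem -m SYSTEMNAME -o a -p LPARNAME -q 512
  # Dynamically move 512 MB of memory from LPARNAME to TARGETLPAR
  chhwres -r mem -m SYSTEMNAME -o m -p LPARNAME -t TARGETLPAR -q 512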
6.4.4 Virtual IO adapters
Figure 6-26 Virtual I/O properties for a partition profile Create Before creating a virtual IO adapter, increase the Number of virtual adapters, if required, to accommodate the additional adapters. To create a virtual IO adapter, select the desired adapter type and click Create. For additional discussion on creating virtual IO adapters, refer to the following headings under 5.2.10, “Virtual IO adapters” on page 157: Virtual Ethernet - 5.2.10, “Virtual IO adapters” on page 157 Virtual Serial - 5.2.
6.5.1 Displaying reference code information In order to display reference code information for a partition, select Properties from the LPAR functions menu. Select the Reference Code tab on the properties dialog, as shown in Figure 6-27. Figure 6-27 Displaying reference code information for a partition 6.5.2 Posting DST to the console Dedicated Service Tools (DST) can be posted to the partition console during normal runtime.
You will need to sign on to DST with the appropriate service tools user ID and password, as shown in Figure 6-28. Figure 6-28 Posting DST to partition console 6.5.3 Performing main storage dumps Attention: Perform a main storage dump under the direction of your next level of support. Incorrect use of this service tool can result in unnecessary down time and loss of debug information. A main storage dump (MSD) is the contents of main storage, that is, system memory or RAM, from a single moment in time.
Figure 6-29 Activate and Deactivate remote service functions In order to activate the remote service communication, perform the following steps: 1. Expand Service Applications from the Navigation Area on the left side of HMC window, click Service Focal Point. 2. Click Service Utilities. 3. Select desired system unit, then click the Selected pull down menu to select Operator Panel Service Functions.... 4. Select a partition, and click the Partition Functions pull down menu. 5.
To delete a partition, access the LPAR functions menu and select Delete. A confirmation dialog is displayed. Click OK to confirm the partition delete or Cancel to back out the request, as shown in Figure 6-30. Figure 6-30 Confirming the deletion of a logical partition Deleting a partition profile If a partition has more than one partition profile, rather than deleting the entire partition, only the profile that is no longer needed can be deleted.
6.5.6 Working with IOP functions In this section we discuss operational considerations and actions that can be performed on IOPs while the system is running. IOP reset (disk unit IOP reset/reload) An IOP reset is only valid when certain disk unit subsystem error codes are posted. Attention: Perform a disk unit IOP reset under the direction of your next level of support. Incorrect use of this service tool can result in unnecessary down time and loss of debug information.
Manually, you can initiate IOP control storage dump by performing the following steps: 1. Expand Service Applications from the Navigation Area on the left side of HMC window, click Service Focal Point. 2. Click Service Utilities. 3. Select desired system unit, then click the Selected pull down menu to select Operator Panel Service Functions.... 4. Select a partition, and click the Partition Functions pull down menu. 5. Select IOP control storage dump (70) - i5/OS. Figure 6-34 IOP control storage dump 6.5.
Power On I/O Domain To perform power on I/O domain, do the following steps: 1. Expand Service Applications from the Navigation Area on the left side of HMC window, click Service Focal Point. 2. Click Service Utilities. 3. Select desired system unit, then click the Selected pull down menu to select Operator Panel Service Functions.... 4. Select a partition, and click the Partition Functions pull down menu. 5. Select Concurrent Maintenance Power on Domain (69) - i5/OS.
Figure 6-36 Enabling SSH (Secure Shell) SSH client In order to connect to the HMC via SSH, an SSH client needs to be installed on the client PC. One such client is PuTTY. Setup and install of the SSH client is outside of the scope of this document. Connecting SSH client to the HMC Whichever SSH client is used, you will need to connect to port 22 on the HMC. Figure 6-37 shows connecting with the PuTTY SSH client. When connected, you will be presented with a UNIX-like signon screen as shown in Figure 6-38.
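On a UNIX or Linux workstation with OpenSSH installed, no separate client is needed. For example, assuming the default hscroot administrator user ID and a host name of hmc01 (both placeholders for your own values):

  # Open an SSH session to the HMC restricted shell (port 22)
  ssh hscroot@hmc01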
Figure 6-37 PuTTY SSH config Figure 6-38 SSH logon to HMC Example SSH command Below is a relatively simple example that lists the partition ids, names, and states of the partitions on a managed system called Default2, as shown in Example 6-1.
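Example 6-1 itself is not reproduced here. A plausible form of the command, assuming the lssyscfg syntax at this HMC level, is:

  # List partition IDs, names, and states on the managed system Default2
  lssyscfg -r lpar -m Default2 -F lpar_id,name,state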
HMC commands The HMC command line interface (CLI) is discussed in Appendix A, “HMC command list” on page 473. 6.6.1 Web SM Web SM is short for Web-based System Manager Remote Client. It allows remote access to most HMC functions from a network attached PC client. This function is very useful for remote technical support and planning. Obtaining the client In order to obtain the Web SM client, open a Web browser session to the following URL: http://hostname/remote_client.
The first piece of information required in the signon process is the hostname or IP address of the HMC (see Figure 6-40). At the moment, the HMC user and password cannot be specified. Figure 6-40 Web SM logon dialog Next, the Web-based System Management remote client and the HMC sync up via handshaking as shown in Figure 6-41.
Once the signon process is complete, a Web-based System Management remote client display is shown that is similar to the one in Figure 6-43 below. Except for security functions, the display is practically identical to the local HMC display. A sample of the local HMC display is shown earlier in this chapter in Figure 6-1 on page 178. Figure 6-43 The main Web-based System Management remote client display 6.
iSeries control panel function / Description / HMC option
20 / Machine Type/Model / Partition properties - General tab; 6.2.1, “Partition properties” on page 179
21 / Post DST to Console / Enable DST; 6.5.2, “Posting DST to the console” on page 199
22 / Force MSD / 6.5.
Figure 6-44 Licensed Internal Code Maintenance option 6.8.1 HMC Code Update Clicking HMC Code Update presents the following options:
Backup Critical Console Data
Save Upgrade Data
Install Corrective Service
Format Removable Media
Remote Restore of Critical Console Data
The option to install corrective service fixes on the HMC is similar to the same option for the Frame. It allows you to update the level of code on the HMC either from removable media or from a remote site. The option to format removable media allows you to format the diskettes with the DOS file system or format the DVD-RAM with the UDF file system. 6.9 Troubleshooting In this section we cover some possible problems with the Management Server. 6.9.
Value: Description
Error: The operating system or the hardware of the managed system is experiencing errors.
Error - Terminated: Power On, Dump in progress.
CoD Click to Accept: Power On, operational and waiting for CUoD. Click to Accept.
Powering Off: Power Off in progress.
Standby: The managed system is powered on using the Power on Standby option. It will stay in this state until a partition or system profile is activated. You can create and modify profiles while the managed system is in this state.
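The state of each managed system can also be checked from the HMC command line. A sketch, assuming the lssyscfg syntax at this HMC level:

  # List every managed system known to this HMC together with its state
  lssyscfg -r sys -F name,state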
5. If the power indicator is on, wait 5 minutes for the HMC to attempt to reestablish contact. The service processor in the managed system may be in the process of turning power on. If partitions no longer respond, the system power is off. 6. From a telnet session from another system, attempt to ping or contact active partitions on this managed system. If the partitions are active, perform the following steps: a.
– If the restore failed, reset the service processor. See “Service processor reset” on page 225. Then continue with the next step. 3. If the problem persists, contact your next level of support or your hardware service provider. Error state The Error state automatically generates a call to the service support center if the function is enabled. If the function is not enabled, contact your next level of support or your hardware service provider. You also can follow the next section to correct it. 6.9.
Table 6-4 Progress code list and their meaning
Progress codes: Refer to these topics:
4-character codes (including those that begin with a space character or a zero): AIX IPL progress codes
C1xx: Service processor progress codes
C2xx: (C2xx) Virtual service processor progress codes
C3yx, C500, C5yx, C600, C6xx: IPL status progress codes
C700: (C700) Server firmware IPL status progress codes
C900: (C900) IPL status progress codes
CAxx: (CAxx) Partition firmware progress codes
D1xx: Service processor
2. Select System Information, then select either Previous Boot Progress Indicators or Progress Indicator History. Selecting Previous Boot Progress Indicators shows the progress codes that the server displayed in the control panel during the last system boot. Progress Indicator History shows the latest progress codes that the server is currently displaying in the control panel. Using the list of progress codes The list of progress codes is in numeric order.
The third column, Failing Item, offers instructions for recovering from a hang on a progress code. Click the link in this column to view the service actions only when you experience a hang condition on a progress code. A hang condition exists when the code in the control panel display does not change for several minutes and the service processor appears to be stalled (hung). In some cases, you might notice that the server does not power down normally.
Progress Code: Description/Action (perform all actions before exchanging Failing Items)
C1009x68: Wire test IPL step in progress
C1009x70: Memory size IPL step in progress
C1009x78: Long scan initialization IPL step in progress
C1009x80: Start clocks IPL step in progress
C1009x88: SCOM initialization IPL step in progress
C1009x90: Run interface alignment procedure IPL step in progress
C1009x98: DRAM initialization IPL step in progress
C1009x9B: Random data test IPL step in progress
C1009xA0: Mem
C11120FF: Power on: completed Standby-PowerOnTransition transition file (primary)
C1122000: Power on: starting PowerOnTransition-PoweredOn transition file (primary)
C11220FF: Power on: completed PowerOnTransition-PoweredOn transition file (primary)
C1132000: Power on: starting PoweredOn-IplTransition transition file (primary)
C11320FF: Power on: completed PoweredOn-IplTransition transition file (primary)
C116C2xx: Sy
C14420FF: IPL: completed IdleTransition-Idle transition file (secondary)
C1452000: IPL: starting Ipl-StandbyVerificationTransition transition file (secondary)
C14520FF: IPL: completed Ipl-StandbyVerificationTransition transition file (secondary)
C1462000: IPL: starting StandbyVerificationTransition-Standby transition file (secondary)
C14620FF: IPL: completed StandbyVerificationTransition-Standby transition file (secondary)
C1F42000: Reset/reload: starting Reset/Ipl-TermTransition transition file (primary)
C1F420FF: Reset/reload: completed Reset/Ipl-TermTransition transition file (primary)
(D1xx) Service processor progress codes (SP dump & Platform dump)
Service processor dump status codes use the format D1yy1xxx, where yy indicates the type of data that is being dumped, and xxx is a counter that increments each time the server stores 4K of
D1151xxx: Dump all /opt/p3 except rtbl
D1161xxx: Dump pddcustomize -r command
D1171xxx: Dump registry -l command
D1181xxx: Dump all /core/core.
Table 6-7 (D1xx) Service processor status progress codes (Platform power off)
D1xx900C: Breakpoint set in CPU controls has been hit
D1xxB0FF: Request to initiate power-off program has been sent
D1xxC000: Indicates a message is ready to send to the server firmware to power off
D1xxC001: Waiting for the server firmware to acknowledge the delayed power off notification
D1xxC002: Waiting for the server firmware to send the
Figure 6-48 shows an example of resetting the SP with the ASMI. Figure 6-48 ASMI - reset SP Note: This feature is available only when the system is powered off. Reset SP with reset button Follow this procedure: 1. Activate the service processor pinhole reset switch on the system's operator panel by carefully performing these steps: a. Using an insulated paper clip, unbend the clip so that it has a straight section about two inches long. b.
4. Choose from the following options: – If there is no firmware update available, continue with the next step. – If a firmware update is available, apply it using the Service Focal Point in the HMC. – Did the update resolve the problem, and does the system now boot? Yes: This ends the procedure. No: You are here because there is no HMC attached to the system, the flash update failed, or the updated firmware did not fix the hang. Continue with the next step. 5.
Figure 6-49 shows the hardware and software that might require fixes, including the HMC, I/O adapter and device firmware, server firmware, power subsystem firmware, and operating systems. Figure 6-49 Hardware and software that might require fixes Read about each type of fix to learn more about them and to determine the best method to get fixes in your environment.
e. Click Test. f. Verify that the test completes successfully. If the test is not successful, troubleshoot your connectivity and correct the problem before proceeding with this procedure. If you prefer, you can follow the “Without Internet” path in this procedure. You will need to obtain the fix on CD-ROM. 2. Determine existing and available HMC levels. To determine the existing level of HMC machine code: a. In the Navigation Area, open the Licensed Internal Code Maintenance folder. b.
– Download fixes from a Web site to an FTP server that can accept an FTP request from your HMC. To use this method, your HMC must be connected to an open network. This method requires two steps. First, you go to a Web site from which you download the fixes to the FTP server. Second, you use the HMC interface to install the fixes from the FTP server to the HMC. Follow these steps to download the HMC machine code fixes to an FTP server: i. Go to the Fix Central Web site: http://www.ibm.
Firmware (Licensed Internal Code) fixes This topic describes the following types of firmware (Licensed Internal Code) fixes: Server firmware: Server firmware is the part of the Licensed Internal Code that enables hardware, such as the service processor. Check for available server firmware fixes regularly, and download and install the fixes if necessary.
FRU SVCPROC - replace SP The service processor is failing. After you have replaced the part, set the configuration ID for SPCN before powering up; otherwise, the machine will not IPL. You can change the processing unit identifier, also referred to as the processing unit SPCN (system power control network) ID. The processing unit SPCN ID is used by the SPCN firmware to identify the system type. It is also used to identify the primary service processor if there are two service processors in the system.
6.11 Determining the HMC serial number For some HMC or Service Processor troubleshooting situations, a Product Engineer (PE) will have to sign on to the HMC. The PE password changes daily and is not available for normal customer use. If the PE determines that a local service engineer can sign on to the HMC, the PE may request the HMC serial number. To find the HMC serial number, open a restricted shell window and run the following command: lshmc -v. Figure 6-50 is an example of the information displayed.
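Because Figure 6-50 is not reproduced here, the following sketch shows the commands as they would be entered in the restricted shell; the output varies by machine:

  # Display HMC vital product data, including the serial number
  lshmc -v
  # Display the HMC machine code version (useful when checking fix levels)
  lshmc -V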
Chapter 7. HMC security and user management
In this chapter we discuss security implementation within the HMC environment. The following topics are described:
Certificate authority
Server security
Object manager security
HMC user management
7.1 System Manager Security System Manager Security ensures that the HMC can operate securely in the client-server mode. The managed machines are servers and the managed users are clients. Servers and clients communicate over the Secure Sockets Layer (SSL) protocol, which provides server authentication, data encryption, and data integrity.
7.2 Overview and status The overview and status window displays the following information about the secure system manager server:
Whether the secure system manager server is configured
Whether the private key for this system manager server is installed
Whether this system is configured as a Certificate Authority
7.2.1 Certificate Authority (CA) Note: You cannot perform the following function using a remote client.
To download the security package so that the client/server connection, that is, the PC to HMC connection, is secure, type the following address in your Web browser: hostname/remote_client_security.html Once again, you choose whether you want the Windows-based version or the Linux-based version. 7.2.3 Object Manager Security The HMC Object Manager Security mode can be configured as either Plain Socket or Secure Sockets Layer (SSL). By default, the Plain Sockets mode is used.
Service Representative: A service representative is the person who installs or repairs the system.
Product Engineer: The product engineer will assist in support situations, but the profile cannot be used to access user management functions in the HMC.
Viewer: A viewer can view HMC information, but cannot change any configuration information.
7.3.2 Add, modify, copy, or remove user profile This section shows you how to add, modify, copy or remove a user profile with various user roles as described in 7.3.
2. Insert the new user ID, a description of the user ID, the password for the new user ID, and re-type the new password (Figure 7-4). 3. Select hmcsuperadmin from Task Roles in order to create a new user with the System Administrator role. You may select Enforce strict password rules to set a password expiration, and type the number of days until expiration, as shown in Figure 7-5. Enforce strict password rules sets the password to expire after the specified number of days.
Add a new user with Viewer role If you want to grant a person permission to sign in to the HMC and view the HMC and system configuration and status, but not to make any changes, you can create a User ID with the Viewer role, which has the capability to view only the HMC and system configuration and status. To add a new user with the Viewer role, perform the following steps: 1. Select the User menu, and click Add to add a new user. The HMC will display the Add User window as shown in Figure 7-3. 2.
The HMC Viewer is only given very limited access to functions in the HMC. Figure 7-9 shows the limited menu for the HMC Viewer. Figure 7-9 Very limited menu available for HMC Viewer user 7.3.3 Customizing user task roles and managed resource roles You can customize HMC Task Roles and Managed Resource Roles via the HMC console. You can add new Task Roles and Managed Resource Roles based on existing roles in the HMC.
To create a new managed resource role, click Edit → Add, and the Add Role window will be displayed. Fill in the name for the new managed resource role, and choose the existing resource role on which the new managed resource role objects will be based. Select which objects will be available for the new managed resource role, then click Add to add them to the new managed resource role's current objects. Click OK to create the new managed resource role.
To create a new user task role, click Edit → Add, and the Add Role window will be displayed. Fill in the name for the new task role, and choose the existing task role on which the new task role objects will be based. Select which objects will be available for the new task role, then click Add to add them to the new task role's current objects. Click OK to create the new task role. An example of creating a new task role is shown in Figure 7-13.
Chapter 8. HMC duplication and redundancy
In this chapter we introduce HMC cloning — attaching redundant HMC devices to a single i5 managed system, multiple managed systems, or a single HMC to multiple managed systems. Although the System i5 systems will function properly with the HMC disconnected, it is the only interface for managing a partitioned system and thus is a key component.
Figure 8-1 Two HMCs directly attached Figure 8-2 shows the same redundant HMCs connected via a hub. This configuration allows attachment of other devices. We recommend using a private network.
8.2 Multiple managed system configuration To save space and to centralize multiple system management control points, you can configure up to 48 managed systems on a single HMC. In Figure 8-3 and Figure 8-4, we show the current HMC management scenario of one HMC managing two servers and then one HMC managing three servers. As you increase the number of managed servers, you will definitely need to introduce hubs/switches into your private network.
Figure 8-4 shows one HMC managing three iSeries servers. Figure 8-4 HMC with three managed systems 8.3 Cloning HMC configurations System Profiles and their associated Partition Profiles are stored in NVRAM of the Service Processor (SP). When a redundant HMC is connected to a System i5 system with valid partitions, the System Profile and Partition Profile information is automatically downloaded to the redundant HMC when the HMC is powered on.
8.4 Redundant HMC configuration considerations In a redundant HMC configuration, both HMCs are fully active and accessible at all times, enabling you to perform management tasks from either HMC at any time. There is no primary or backup designation. Both HMCs can be used concurrently. You have to consider the following points: Because authorized users can be defined independently for each HMC, determine whether the users of one HMC should be authorized on the other.
Chapter 9. Migration of existing LPAR profiles to HMC
Users of logical partitioning have a new interface for managing LPAR on System i5 systems. The Work with System Partitions option available in V4R4/V4R5/V5R1/V5R2 is no longer available in V5R3 running on the new System i5 hardware. This option has been moved from DST/SST to the Hardware Management Console (HMC). The HMC is required for any System i5 that is running logical partitioning.
9.1 Migration planning This section focuses on the planning activities required to move POWER4™ LPAR configurations to System i5 HMC LPAR profiles. You must understand your current LPAR environment and have accurate documentation of how your system is configured and an inventory of the resources that are presently in use. There are several tools and tasks that can be used to document the current system environment and plan for a successful migration.
4. On the Work with Partitions screen, select Option 1, Display Partition Information. 5. Select Option 5, Display System I/O Resources. 6. Use the F6 Print Key to print a copy of the rack configuration. This printout contains all of the information usually collected from the Hardware Service Manager plus all of the partition information. 9.1.2 Get a printout of your resource allocations To document your resource allocations, perform the following steps: 1. Start SST by entering STRSST. 2.
To download the tool, visit the IBM eServer iSeries Support Web site and look for iSeries Tools. Use the following URL: http://www-912.ibm.com Note: The LVT is not a marketing configurator. It does not automatically add hardware features (except base and primary CD or DVD). It will not prevent inefficient system design so long as the design itself meets manufacturing card placement rules and minimum LPAR recommendations. 9.1.
9.1.9 Order the System i5 Order the necessary hardware and software based on the output from the LVT tool or validated work sheets/configurator output. 9.2 Preparing your system for migration to System i5 and HMC Before you start the migration, complete the following required tasks:
Review your LPAR configuration.
Clean up non-reporting resources and unsupported hardware, including attached migration towers.
Load V5R3 on all OS400 partitions.
Update the Linux kernel to 2.
9.3 Migrating Linux partitions This topic describes the options and requirements for migrating a Linux installation from an iSeries server to an ^ System i5 system. The first step in migrating a Linux installation from an iSeries server to an ^ System i5 is to upgrade to a Linux version that supports the ^ System i5 system. Follow these steps to complete the Linux upgrade: 1. On your existing iSeries server, upgrade to a new Linux version that supports the ^ System i5 systems.
9.4.1 Information gathering and pre-sales planning In the first scenario, the current system has two OS400 LPARs (P0 and P1) and consists of the 810 system unit and one 5094 tower. All of the P0 Primary partition resources are located in the 810 system unit. All of the resources for the P1 Secondary Partition are in the 5094 tower. A Primary partition no longer exists in the System i5 systems.
In the second scenario, the current system has three OS400 LPARs (P0, P1, and P2) and consists of the 825 system unit, a 5094 tower with a 5088, and a 5074. The essential resources for the Primary partition are located in the 825 system unit (including load source, disk, and console IOAs). All of the resources for Secondary Partition P1 are in the 5074 tower, and all of the resources for P2 are in the 5094. The 5088 contains some switchable resources and other Primary partition resources.
The following tasks provide the necessary information to plan the migration. After reviewing this information, you will be able to plan the migration and identify any changes required to the current system configuration as well as the configuration of the proposed system. 1. Print a copy of the current LPAR configuration. 2. Print a copy of the current resource allocation. 3.
b. Complete the System Selection window (each option is described below) and click Next (Figure 9-4): i. System Type: Select the iSeries model (810 in this scenario). ii. Primary Partition OS Level: Select the Operating System Version and Release that will be used by the Primary partition (this option will not be available in System i5 models, as a Primary partition no longer exists). iii.
c. Complete the Partition Specifications window (each option is described below) and click Finish (Figure 9-5):
i. Primary Partition Console Type: Select the console type that will be used by the Primary partition (for example, 9793 for Operations Console or 4746 for twinaxial).
ii. Shared Processor Pool: Enter the number of processing units that will be shared among partitions.
iii. Shared checkbox: Check this box if the partition will be using shared processors.
iv.
d. Select the applicable features (IOPs, IOAs, disk, towers, and so on). To add a feature, first select the IOP, IOA, Drives, or Linux tab. Then select the feature and click Add next to the desired slot or location (Figure 9-6).

Figure 9-6 Example window features and towers

e. Validate the LPAR configuration by selecting Validate. Correct any errors as they appear in red in the message window (Figure 9-7).
f. The resulting report can be viewed or saved to disk by selecting Report. For more complete information, save both the Detail and Summary versions of the report using the All option (Figure 9-8).

Figure 9-8 Example window view or save report

g. The following images contain the relevant portions of the Detail reports for the existing 810 and 825 in these scenarios (see Figure 9-9 here through Figure 9-16 on page 267).

Figure 9-9 LVT report — current system first scenario page 1
Figure 9-10 LVT report — current system first scenario page 2

Figure 9-11 LVT report — current system first scenario page 3
Figure 9-12 LVT report — current system second scenario page 1

Figure 9-13 LVT report — current system second scenario page 2
Figure 9-14 LVT report — current system second scenario page 3

Figure 9-15 LVT report — current system second scenario page 4
Figure 9-16 LVT report — current system second scenario page 5

4. Using these reports, you can identify possible ways to simplify the migration. For example:
– In the first scenario, notice that there are currently 15 FC 4318 disk units in the 810 system unit that belong to the Primary partition. The 520 system unit can accommodate a maximum of 8 disk units (4 base and 4 optional). This means that there is no room to move all of the Primary partition disks over intact.
– In the second scenario, there are also enough available disk slots and PCI slots in the 5094 to relocate the 15 disk units of the Primary partition during the migration. However, there are other considerations in this scenario. The 825 has three HSL loops and the 570 System i5 only has one. It is important to consider the HSL cabling between towers of different technologies to avoid performance degradation. For instance, the 5074 uses HSL1, whereas the 5094 and 5088 use the more recent HSL2 technology.
Figure 9-19 Detail LVT report — proposed system first scenario page 1

Figure 9-20 Detail LVT report — proposed system first scenario page 2
Former Primary Disk and Disk/Console IOA

Figure 9-21 Detail LVT report — proposed system first scenario page 3

Figure 9-22 Detail LVT report — proposed system second scenario page 1
Figure 9-23 Detail LVT report — proposed system second scenario page 2

Figure 9-24 Detail LVT report — proposed system second scenario page 3
Former Primary Disk and Disk/Console IOA

Figure 9-25 Detail LVT report — proposed system second scenario page 4

Figure 9-26 Detail LVT report — proposed system second scenario page 5

9.4.2 Post-sales customer tasks for both scenarios

At this point, the current system information has been gathered and reviewed. This information was also used to plan the migration, and the necessary hardware has been purchased. Here are the remaining post-sales tasks to be completed by the customer:
1.
3. Export the LPAR configuration. Perform the following steps:
a. Start an iSeries Navigator session and select the system that is partitioned (Figure 9-27).

Figure 9-27 iSeries Navigator panel

b. Select Configuration and Service (Figure 9-28).

Figure 9-28 iSeries Navigator panel
c. Select Logical Partitions, then right-click and select Configure Partition (Figure 9-29).

Figure 9-29 iSeries Navigator panel

d. A list of partition configurations is displayed (Figure 9-30).
e. Right-click Physical System and select Recovery. Then select Save All Configuration Data (Figure 9-31).

Figure 9-31 iSeries Navigator panel

f. Enter a PC filename, or browse for an existing file; the file should have been created prior to this step. Click OK, and the file is saved to the media of your choice. We recommend CD or diskette, since the HMC can use either medium (Figure 9-32).

Figure 9-32 iSeries Navigator Save Configuration Data panel
9.4.3 Post-sales tasks: IBM

Note: Depending on the model of System i5 system being installed, some or all of the HMC setup is done by the customer. The smaller System i5 systems are Customer Set Up (CSU). Refer to 3.1.3, “Initial setup of the HMC” on page 48 for more details.

The IBM Customer Service Representative (CSR) performs the following tasks:
1. Sets up the new hardware and connects the Hardware Management Console (HMC) to the new server:
– The setup consists of connecting the HMC to the System i5 system.
Figure 9-34 Example of HMC attached to 520 System i5 system, now with ECS connection

9.4.4 Customer migration tasks

The customer must perform the following tasks:
1. Complete the setup of the HMC:
– The new HMC includes a setup document and comes pre-loaded with the Hardware Information Center, which contains additional information on many topics related to the HMC. Review 3.1.3, “Initial setup of the HMC” on page 48 for an overview of the HMC setup.
b. Right-click on the blank screen. The box in Figure 9-35 is displayed.

Figure 9-35 HMC window

c. Select Terminals and then rshterm. A terminal session is started (Figure 9-36).

Figure 9-36 HMC window

d. From the xterminal session (Figure 9-37), you can enter the commands to start the migration process.

Figure 9-37 xTerminal session window

e. Load the diskette or CD that contains your configuration data into your drive.
f.
Figure 9-38 xTerminal session window

4. After the LPAR configurations are migrated (Figure 9-39), correct any resource reallocations resulting from P0/Primary being reassigned. Allocate new hardware resources as required. Validate the new Pn+1 partition (the former Primary) against the configuration and resource allocation documentation gathered in the earlier steps. Refer to Chapter 5, “Partition creation using the HMC” on page 139, for detailed instructions on allocating resources and creating new partitions.
9.5.1 Backing up Critical Console Data

Using your HMC, you can back up the following data:
- User-preference files
- User information
- HMC platform-configuration files
- HMC log files

The Back up Critical Console Data function saves the HMC data stored on the HMC hard disk to the DVD-RAM and is critical to support HMC operations. Back up the HMC after you have made changes to the HMC or to the information associated with partitions.
3. In the Contents area, select Back up Critical Console Data (Figure 9-42).

Figure 9-42 Back up Critical Console Data window

4. Insert formatted DVD-RAM media into the drive.
5. Select Backup to DVD in Local System to save your critical console data on the HMC DVD-RAM and click Next (Figure 9-43).

Figure 9-43 Backup dialog window

Note: This backup can take a significant amount of time to complete, perhaps 1.5 hours if the backup is over 1 GB in size.
9.5.2 Scheduling and reviewing scheduled HMC backups

You can schedule a backup to DVD to occur once, or you can set up a repeating schedule. You must provide the time and date that you want the operation to occur. If the operation is scheduled to repeat, you must select how you want the backup to repeat (daily, weekly, or monthly).

Note: Only the most recent backup image is stored on the DVD at any time.

To schedule a backup operation, perform the following steps:
1.
Chapter 10. Using the Advanced Systems Management Interface

This chapter describes the setup and use of the Advanced Systems Management Interface (ASMI). The ASMI provides a terminal interface, via a standard Web browser, to the service processor, allowing you to perform general and administrator-level service tasks.
10.1 ASMI introduction

All System i5 systems use the ASMI to communicate with the service processor. The ASMI provides much of the same function that was provided by OS/400 DST/SST in all releases before i5/OS V5R3. Most System i5 systems are also typically controlled using the Hardware Management Console (HMC) introduced in Chapter 3. Any server that is divided into a multi-partitioned environment requires the HMC to create and maintain the LPAR environment.
10.3 Initial tour of the interface

When you connect to the server using the correct secure link and IP address, you will see an initial Welcome login panel similar to the one shown in Figure 10-1. You need to sign on using the Administrator profile (or a profile with Administrator authority levels) in order to see and execute most of the functions described in the remainder of this chapter.

Figure 10-1 ASMI Welcome login panel

Enter a valid User ID and Password, and select the Language you want to use.
When you first log in to the server, you will see the panel shown in Figure 10-2. You can choose to expand one or more of the service menus, or you can expand all service menus to begin.

Figure 10-2 First ASMI panel after login

More than one user can be signed on to the ASMI at the same time. You will see one or more Current users listed on the panel for each User ID that is signed on, along with the IP address that each user is working from.
10.3.1 Power/restart control

Figure 10-3 shows the expanded Power/Restart Control menu. Using this menu, you can:
- Power the system on or off.
- Set the function to allow an auto-power restart of the system if the system has experienced a power interruption.
- Perform an immediate power off.
- Perform a system reboot.
- Set the function to allow the system to be powered on remotely through a local area network (LAN) connection.
Figure 10-4 Powering off the system

Firmware boot side for the next boot
Select the side from which the firmware will boot: permanent or temporary. Typically, firmware updates should be tested on the temporary side before being applied to the permanent side. This selection is analogous to the previous OS/400 concept of starting the system using microcode from the “A” side (with permanent PTFs applied only) or using microcode from the “B” side (with both temporary and permanent PTFs applied).
Boot to system server firmware
Select the state for the system server firmware: standby or running.

System power off policy
Select the system power off policy. The system power off policy flag is a system parameter that controls the system’s behavior when the last partition (or the only partition, in the case of a system that is not managed by an HMC) is powered off. The choices are:
1) Power off. (When the last partition is shut down, the system powers down.)
2) Stay on.
Figure 10-6 shows an example of the confirmation you will receive when you successfully power on the system.

Figure 10-6 Example: Power on confirm

Auto power restart
You can set your system to restart automatically. This function is useful when power has been restored after a power line disturbance caused the system to shut down unexpectedly. Select either Enable or Disable, as in the example shown in Figure 10-7. By default, the auto power restart value is set to Disable.
Figure 10-8 Immediate power off

System reboot
You can reboot the system quickly using the reboot function shown in Figure 10-9. The operating system is not notified before the system is rebooted.

Attention: Rebooting the system immediately shuts down all partitions. To avoid data loss and a longer IPL the next time the system or logical partitions are booted, shut down the operating system prior to performing a reboot.
10.3.2 System service aids

Figure 10-11 shows the expanded System Service Aids menu. Using this menu, you can:
- Display system error logs.
- Set the function to allow a serial port snoop.
- Initiate a system dump.
- Initiate a service processor dump.
- Initiate a partition dump.
- Set up a serial port for the call-home and call-in function.
- Configure the modem connected to the service processor’s serial ports.
- Set up the call-in and call-home policy.
- Test the call-home function.
Error/event logs
From the System Service Aids menu, you can select the option to display the system error/event logs. You can view error and event logs that are generated by various service processor firmware components. The content of these logs can be useful in solving hardware or server firmware problems. You will see a selection panel similar to the one shown in Figure 10-12.
A panel similar to Figure 10-14 will be displayed for each of the events that you selected. You can then use this information when working with your hardware service provider.

Figure 10-14 Detail of Error/Event logs

Serial port snoop
You can disable or enable serial port snoop on a serial service port. When enabled, data received on the selected serial port is examined, or snooped, as it arrives.
Figure 10-16 shows the options you can choose relating to a system dump. You can change one or more of the system dump options, or you can change one or more of the options and initiate a system dump.

Figure 10-16 Initiating a system dump

Dump policy
Select the policy to determine when system dump data is collected.
Figure 10-17 Service processor dump

Setting
Enable or disable the service processor dump function. The default value is enabled. A service processor dump captures error data after a service processor failure, or upon user request. A user request for a service processor dump is not available when this policy is set to disabled.

Save settings
Click this button to save the setting for service processor dump.
Serial port setup
You can configure the serial ports used with the call-home and call-in features with this option. You can also set the baud rate for the serial ports. This function is not available if your system is managed by an HMC. If your system is managed by an HMC, you will see a panel as shown in Figure 10-19.
Call-Home Test
You can test the call-home configurations and settings after the modem is installed and set up correctly. This function is not available if your system is managed by an HMC. If your system is managed by an HMC, you will see a panel as shown in Figure 10-22.

Figure 10-22 Call-home test not available

Reset service processor
Use this procedure only under the direction of your service provider.
Factory configuration reset
Use this procedure only under the direction of your service provider. In critical systems situations, you can restore your system to the factory default settings. Continuing will result in the loss of all system settings (such as the HMC access and ASMI passwords, time of day, network configuration, and hardware deconfiguration policies), which you will have to set again through the service processor interfaces.
Vital product data
From the System Information menu, you can select the option to display the system vital product data. This is the manufacturer’s data that defines the system, stored from the system boot prior to the one currently in progress. You will see a selection panel similar to the one shown in Figure 10-28.
A panel similar to Figure 10-30 will be displayed for each of the vital product data entries that you selected. You can then use this information when working with your hardware service provider.

Figure 10-30 Details of vital product data

Power control network trace
You can perform a system power control network (SPCN) trace and display the results. This information is gathered to provide additional debug information when working with your hardware service provider.
After several minutes, you will see a panel similar to Figure 10-31. Your service provider can make use of this data if requested.

Figure 10-31 Power control network trace

Previous boot progress indicator
You can view the progress indicator that was displayed in the control panel during the previous boot, if that boot failed. During a successful boot, the previous progress indicator is cleared, so if this option is selected after a successful boot, you will see no indicator.
Figure 10-33 shows an example of the progress indicator history selection panel. You can select one or more codes, as directed by your hardware service provider, and click the Show details button, as shown in Figure 10-34.

Figure 10-33 Progress indicator history selection

Figure 10-34 Displaying the selected entries

A display similar to Figure 10-35 will be shown for the entries that you selected. The details can be interpreted by your hardware service provider.
If you have the required authority level, you will see a panel similar to Figure 10-36.

Figure 10-36 Real-Time Progress Indicator

System configuration
Figure 10-37 shows the expanded System Configuration menu. Using this menu, you can:
- Change the system name.
- Display the processing unit identifier.
- Configure I/O enclosures.
- Change the time of day.
- Establish the firmware update policy.
System name
From the System Configuration menu, you can select the system name option to display the current system name and change it if you choose to do so. The system name is a value used to identify the system or server. It may not be blank and may not be longer than 31 characters. To change the system name, enter a new value and click the Save settings button. See Figure 10-38 for an example of the panel you would use to change the system name.
Figure 10-40 Reset the processing unit identifier

Processing unit identifier values
The power control network identifier is intended to uniquely identify each enclosure on the power control network. Typically, these identifiers are assigned automatically by firmware. In some cases, a user may wish to assign specific identifiers to specific drawers. This value is 2 hexadecimal digits. Supported processing unit identifiers are shown in Table 10-2.
Refer to Figure 10-41 to see how you can modify the following options for each enclosure that you select.

Figure 10-41 Configure I/O enclosures

Next we provide a description of these options.

Identify enclosure
Click this button to turn on the indicator on the selected enclosure. You can then visually inspect the enclosure to see that the indicator is turned on.

Turn off indicator
Click this button to turn off the indicator on the selected enclosure.
Time of day
You can display and change the system’s current date and time. This function is not available if your system is powered on. If your system is powered on, you will see a panel as shown in Figure 10-42.

Figure 10-42 Time of day not available

If your system is powered off, you will see a panel as shown in Figure 10-43, allowing you to make changes to the system date or time.

Figure 10-43 Reset date or time of day

Use the following information to help you change the date or time.
For example, if you choose the Hardware Management Console (HMC) as the source for a firmware update, the HMC must be used to perform the update. You would use the panel shown in Figure 10-44 to select your source for firmware updates.

Figure 10-44 Selecting the firmware update policy

The default setting of this policy is not to allow firmware updates via the operating system. Note that this policy only takes effect when a system is HMC managed.
Interposer Plug Count
You can track the number of times that a multiple chip module (MCM) has been replaced or re-seated on a given interposer. This interposer plug count provides the information needed to prevent field problems due to damaged or overused interposers. Whenever a service action is performed on a system that requires the replacement or re-seating of an MCM, service personnel are responsible for updating the plug count for that interposer (Figure 10-46).
I/O Adapter Enlarged Capacity
You can increase the amount of I/O adapter memory for specified PCI slots. This option controls the size of the PCI memory space allocated to each PCI slot. When enabled, selected PCI slots, including those in external I/O subsystems, receive the larger DMA and memory-mapped address space. Some PCI adapters may require this additional DMA or memory space, per the adapter specification.
Hardware deconfiguration policies
You can set various policies to deconfigure processors and memory in certain situations. Deconfiguration means that the resource is taken from a state of being available to the system to a state of being unavailable to the system. This can be automated to some degree through the use of policies.
Figure 10-53 General hardware deconfiguration policies

Deconfigure on predictive failure
Select the policy for deconfiguring on predictive failures. This applies to run-time or persistent boot-time deconfiguration of processing unit resources or functions with predictive failures, such as correctable errors over the threshold. If enabled, the particular resource or function affected by the failure will be deconfigured.
A processor is marked deconfigured under the following circumstances:
- If a processor fails the built-in self-test or power-on self-test during boot (as determined by the service processor).
- If a processor causes a machine check or check stop during run time, and the failure can be isolated specifically to that processor (as determined by the processor run-time diagnostics in the service processor firmware).
Processor deconfiguration
Refer to Figure 10-55 to see how a processor might be deconfigured. You would select whether each processor should remain configured or become deconfigured, and click the Save settings button. State changes take effect on the next platform boot.

Memory deconfiguration
Most System i5 systems will have several gigabytes (GB) of memory. Each memory bank contains two DIMMs (dual inline memory modules).
Figure 10-56 Memory deconfiguration, processing unit selection

Figure 10-57 Memory deconfiguration, memory bank selection

Memory deconfiguration
Refer to Figure 10-57 to see how a memory bank might be deconfigured. You would select whether each memory bank should remain configured or become deconfigured, and click the Save settings button. State changes take effect on the next platform boot.
Program vital product data
Figure 10-58 shows the expanded Program Vital Product Data menu, which is a sub-menu under the main System Configuration menu. Using this menu, you can:
- Display the system brand.
- Display system keywords.
- Display system enclosures.

Figure 10-58 Expanded Program Vital Product Data menu

System brand
This menu (Figure 10-59) is available only when the system is powered off.

Figure 10-59 System brand

System brand
Enter a 2-character brand type.
System keywords
This menu is available only when the system is powered off (Figure 10-60).

Figure 10-60 System keywords

System unique ID
Enter a system-unique serial number as 12 hexadecimal digits. The value should be unique to a given system anywhere in the world. A valid value is required for the machine to boot.

Storage facility system type-model
Enter a machine type and model in the form TTTT-MMM, where TTTT is the 4-character machine type and MMM is the 3-character model.
Storage facility system unique ID
Enter a system-unique serial number as 12 hexadecimal digits. The value should be unique to a given storage facility anywhere in the world. A valid value is required for the machine to boot. Additionally, for storage to be accessible online, this value must match exactly on both systems that constitute the storage facility.

Storage facility manufacturing ID
Enter a storage facility manufacturing ID in the form JJJYYYY, where JJJ is the Julian date and YYYY is the year.
Reserved
Reserved — this field should be set to blanks unless directed by Level 4 Service.

Service indicators
Figure 10-62 shows the expanded Service Indicators menu, which is a sub-menu under the main System Configuration menu. Using this menu, you can:
- Display the system attention indicator.
- Display the enclosure indicators.
- Display indicators by location code.
- Perform a lamp test.

Figure 10-62 Expanded Service Indicators menu

System attention indicator
Figure 10-63 shows the system attention indicator.
Enclosure indicators
Figure 10-64 shows the enclosure indicators.

Figure 10-64 Select enclosure indicators

Select an enclosure and continue (Figure 10-65 and Figure 10-66).

Figure 10-65 Enclosure indicators, part 1 of 2

Figure 10-66 Enclosure indicators, part 2 of 2

Each indicator offers two options: Off and Identify.
Continue
Click this button to display another page of indicators for the selected enclosure.

Save settings
Click this button to update the state of all the indicators for this enclosure.

Turn off all indicators
Click this button to turn off all the indicators for this enclosure.

Indicators by location code
This works the same as the previous section (enclosure indicators) if you already know the location code (Figure 10-67).

Figure 10-67 Indicators by location code U7879.001.
10.3.4 Network services

Figure 10-70 shows the expanded Network Services menu. Using this menu, you can:
- Display or change the Ethernet port network configurations for the service processor.
- Display or change the IP addresses that are allowed access to the service processor Ethernet ports.

Figure 10-70 Network Services menu

Network configuration
Using this option, you can display, or display and change, the system’s Ethernet network interfaces to the service processor.
Figure 10-72 Display of network interface port 1

When the system is powered off, you can see the current network settings and also make changes to the network configuration. You can select Configure this interface (for eth0, eth1, or both) and then click the Continue button. In Figure 10-73, we continue with the example of changing the configuration for Ethernet service port 1 (eth1).
Host name
Enter a new value for the hostname. The valid characters are: hyphen and period [ - . ], uppercase and lowercase alphabetics [ A - Z ] and [ a - z ], and numerics [ 0 - 9 ]. The first character must be alphabetic or numeric, and the last character must not be a hyphen or a period. If the hostname contains a period, the characters preceding the period must include an alphabetic character. This input is required for the static type of IP address.

Domain name
Enter a new value for the domain name.
Selecting Save Settings causes the network configuration changes to be made and the service processor to be rebooted. As the service processor reboots, your ASMI session is dropped, and you will have to reconnect the session to continue. When you reconnect, you will be using the new settings.

Attention: If incorrect network configuration information is entered, you may not be able to use the ASMI after the service processor reboots.
To allow access to the service processor from any IP address, enter “ALL” in the allowed list; “ALL” is accepted as a valid entry. An empty allowed list is ignored, and access is granted from any IP address.

Tip: The IP address of the browser you are currently using to connect to the ASMI is shown in the Network Access panel. In our example, Figure 10-75 shows our IP address as 9.10.136.220.

Denied IP addresses
Enter up to 16 complete or partial IP addresses to be denied.
Logical memory block size
Using this option, you can display or change the logical memory block (LMB) size used by your system. To display or change the memory block size currently in use, select the Logical Memory Block Size option from the Performance Setup menu. You will be presented with a panel similar to the one in Figure 10-78.
For System i5 systems running as a single server image, this has much less impact. You will seldom need to be concerned with the granularity of the memory, because all of the system’s memory is assigned to a single partition.

Note: All System i5 systems require that some amount of system memory be allocated for the controlling Hypervisor. Selecting a larger LMB size may have an effect on the amount of memory the system will require to be assigned to the Hypervisor.
Figure 10-79 On Demand Utilities menu

CoD order information
You can use this option to generate the system information required when you need to order additional processor or memory activation features from IBM or your business partner. When you place an order with IBM or your business partner, you will receive activation codes (keys) that must be entered into the system.

Note: This information is not available for display until the system server firmware has reached standby mode.
CoD activation keys
You will use the display shown in Figure 10-81 to enter the processor and memory activation keys provided to you by IBM or your business partner. You may have more than one key to enter.

Note: This feature is not available until the system server firmware has reached standby mode.

Figure 10-81 CoD activation

Enter the CoD activation key and click Continue. You may need to enter one or more keys.
CoD commands
There may be situations where your service provider needs to have CoD commands entered into the system firmware. The service provider will specify the command, which can then be entered on the panel shown in Figure 10-83.

Note: This feature is not available until the system server firmware has reached standby mode.

Figure 10-83 Enter CoD command (optional)

Enter a CoD command. If needed, the command is supplied by your hardware provider.
Control panel
Selecting Control Panel from the Concurrent Maintenance menu shows the display in Figure 10-85. From this display, you can click Continue to remove and replace an existing control panel, or to add a new control panel. You could, for example, remove an existing control panel that has become inoperative and replace it with a new one. This option prepares the control panel for concurrent maintenance by logically isolating it.
With the control panel now replaced, you can return and use the Install action shown in Figure 10-88 to activate the new control panel, completing the concurrent maintenance procedure.

Figure 10-88 Example: Control panel install using concurrent maintenance

IDE Device Control
An IDE device can be either a CD-ROM drive or a DVD-R/W drive. When you select IDE Device Control from the Concurrent Maintenance menu, you will see a panel similar to Figure 10-89.
To perform concurrent maintenance, identify, by location code, the failing IDE device you want to repair. Then change the state of the pair of devices, as shown in Figure 10-90, by selecting a state of Off and clicking the Save settings button.

Figure 10-90 Example: IDE device power off using concurrent maintenance

You will next see a confirmation panel, as shown in Figure 10-91.
10.3.8 Login Profile

You must use a Login Profile each time you access the ASMI menus. The Login Profile consists of a User ID and Password set to a specific authority level. Figure 10-92 shows the expanded Login Profile menu. Using this menu, you can:
- Change the password for a user.
- Display the successful or failed login attempts to the service processor.
- Select or change the default language used when accessing the ASMI Welcome panel.
Figure 10-93 Change password

User ID to change
Select the user ID of the user whose password you wish to change. The choices are general, admin, or HMC.

Current password for current user
As a security measure, the current password must be supplied. The initial factory default passwords are:
- general for the User ID general
- admin for the User ID admin
- abc123 for the HMC user ID of hscroot (may have been changed during HMC guided setup)
Figure 10-94 Show successful login attempts

Figure 10-95 Show failed login attempts

Change default language
Using the display shown in Figure 10-96, you can change the default language for ASMI users. This controls the language that is displayed on the ASMI Welcome panel prior to login, and during your ASMI session if you do not choose an alternative language at login time.
From the pull-down menu, select the default language to use for the ASMI Welcome panel and click the Save settings button. You will receive a confirmation panel similar to Figure 10-97. The change takes place within a few minutes, with no restart of the service processor firmware required. Note that a user can override the default at ASMI login time; if no override is selected at login, the default language is used for that session.
Chapter 11. OpenSSH

The licensed program 5733-SC1 contains the OpenSSH (Secure Shell), OpenSSL, and zlib open source packages ported to i5/OS using the i5/OS PASE runtime environment. The SC1 licensed program requires i5/OS V5R3 or later, and also requires that i5/OS Option 33 (i5/OS PASE - Portable Solutions Application Environment) be installed. TCP/IP connectivity applications such as telnet and ftp transmit data and passwords over the network in plain text.
11.1 Utilities available in OpenSSH

The following utilities are available in OpenSSH:
1. ssh — A secure telnet replacement that allows an i5/OS user to connect as a client to a server running the sshd daemon. An ssh client can also be used to connect to the HMC on the IBM eServer 5xx iSeries models.
2. sftp — A secure ftp replacement. As with all implementations of sftp on other platforms, sftp can only transfer data in binary format.
Figure 11-1 Installed licensed program 5722SS1-33

The installed licensed program 5733SC1 is shown in Figure 11-2.

Figure 11-2 Installed licensed program 5733SC1

11.3 Using the HMC from i5/OS with OpenSSH

Although i5/OS, with the OpenSSH licensed program installed, supports both the ssh server and client, here we use OpenSSH as a client. Basically, to work with OpenSSH in i5/OS, you need to create users on the HMC and i5/OS with the same name (Figure 11-3).
Figure 11-3 HMC Users

Click the User tab from the User Profiles window. Click Add to add the user (Figure 11-4).

Figure 11-4 HMC User Add

Fill in the details of the user you want to create in this Add User window, and select the role of the user from the Task Roles. For example, create the user fred and select the Task Role of HMC superadmin.
Click OK to continue. See Figure 11-5.

Figure 11-5 Add User Details

As shown in Figure 11-6, displaying User Profiles will now show fred in the listing.

Figure 11-6 User (fred) added

Create a user profile in i5/OS. For example, create the user called fred, as shown in Figure 11-7.

Figure 11-7 User Profile on i5/OS
Log on to i5/OS as fred and run the command call qp2term (Figure 11-8).

Figure 11-8 Qp2term

Create the directory called fred under /home. Change the ownership of the directory using the command chown fred fred. Go to the directory with cd /home/fred.

Note: The qp2term shell environment is not a true TTY device, and this can cause problems when trying to use ssh, sftp, or scp within one of these sessions. Use the -T option to not allocate a TTY when connecting.
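Taken together, the setup entered in the qp2term session looks like the following minimal sketch (ssh-keygen prompts interactively for the file location and passphrase; accept the default location so the keys land under /home/fred/.ssh):

mkdir /home/fred        # create the home directory for the new user
chown fred /home/fred   # make fred the owner of the directory
cd /home/fred
ssh-keygen -t rsa       # accept the default location; this creates
                        # /home/fred/.ssh/id_rsa and id_rsa.pub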
The following directory and files will be created under the directory /home/fred:
/home/fred/.ssh
/home/fred/.ssh/id_rsa (private key)
/home/fred/.ssh/id_rsa.pub (public key)

Go to the directory with cd /home/fred/.ssh (Figure 11-10).

Note: The write bits for both group and other are turned off for these ssh key files. Ensure that the private key has a permission of 600.

Figure 11-10 ssh key directory

Run the command cat id_rsa.pub (Figure 11-11).

Figure 11-11 SSH Key
Copy the key from the 5250 emulator screen, as shown in Figure 11-12.

Figure 11-12 SSH Key content

Establish the connection to the HMC from the qp2term shell (Figure 11-13) using the command:
ssh -T 9.5.92.92
(9.5.92.92 is the IP address of the HMC in this example.)
Follow the instructions shown in Figure 11-13 to log on to the HMC. Once you are logged on to the HMC, run the command mkauthkeys to register the public key we generated, pasting the key as shown in Figure 11-14.

Figure 11-14 Mkauthkeys

Once the key authentication is done, you can log on to the HMC without entering a user ID and password. Run the command ssh -T 9.5.92.92 to log on to the HMC.

Figure 11-15 HMC logon without password
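For reference, the exchange looks roughly like the following sketch (the key string is abbreviated here; confirm the exact options with mkauthkeys --help on your HMC release):

ssh fred@9.5.92.92                 # first logon still prompts for a password
mkauthkeys --add 'ssh-rsa AAAAB3NzaC1yc2EAAA... fred@rchas55'
exit
ssh -T 9.5.92.92                   # subsequent logons need no password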
11.4 Running DLPAR scripts from i5/OS

Once you can log on to the HMC from i5/OS, all of the HMC commands are available to use. You can use these commands to perform DLPAR functions. You can also write scripts to run specific tasks and schedule them to run from i5/OS. Scripts can be written on a desktop PC and, using Operations Navigator, dragged and dropped into the desired IFS directory (for example, /home/fred) in i5/OS.
Refer to the script shown in Figure 11-17.

Figure 11-17 systemname - script

Run the script (systemname) from the QSHELL command prompt (Figure 11-18).

Figure 11-18 systemname

You can see the output of the script, Server-9406-550-SN10F17AD, in Figure 11-18. Similarly, you can write a script to log on to the HMC and perform other specific tasks.

Note: To see the command syntax, log on to the HMC, type lssyscfg --help, and press Enter.
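The systemname script appears here only as a screen capture; modeled on Example 11-2 and assuming the same HMC address used throughout this chapter, a minimal sketch of its contents would be:

# systemname - query the name of the managed system from the HMC
PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T 9.5.92.92 lssyscfg -r sys -F name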
Example 11-2 mem-status

PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T 9.5.92.92 lshwres -m Server-9406-550-SN10F17AD -r mem --level lpar

Figure 11-19 mem-status-script

Figure 11-20 mem-status

Figure 11-20 shows the details of the partitions; the same information is explained in Table 11-1.
Move the memory from the partition (5095 RCHAS55B 4 Disk) to the partition (RCHAS55 #1 Partition) by executing the script mem-move, as shown in Figure 11-22. This script (Example 11-3) moves 1024 MB of memory from partition ID 3 (5095 RCHAS55B 4 Disk) to partition ID 4 (RCHAS55 #1 Partition).

Example 11-3 mem-move

PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
# the chhwres invocation below is reconstructed from the description above;
# the original command was truncated in this copy of the example
ssh -T 9.5.92.92 chhwres -m Server-9406-550-SN10F17AD -r mem -o m --id 3 --tid 4 -q 1024
Figure 11-21 shows the results.
11.5 Scheduling the DLPAR function from i5/OS

Scripts can be scheduled to run from i5/OS using the job scheduler. Scripts can be written on the desktop and, using Operations Navigator, dragged and dropped into the desired IFS directory (for example, /home/fred) in i5/OS.

Scheduling the memory movement
Note: The script shown in Figure 11-21 is the one scheduled here.

Figure 11-26 shows the added job schedule entry.

Figure 11-26 addjobscde-added

The history log shows the memory size changes after the scheduled operation is completed (Figure 11-27).

Figure 11-27 History log (System i5 #1 Partition)

11.5.1 Scheduling the i/o movement

In this section we discuss how to schedule i/o movement.

Adding the i/o to the partition
The following procedure shows how to schedule the i/o removal, adding, and movement.
Refer to Chapter 11, “OpenSSH” on page 341 for information on creating user IDs in the HMC and i5/OS, as well as the ssh authentication procedure.

Figure 11-28 FelixComplex System

Log on to i5/OS using the user name fred and enter the command QSH from the main menu command line to enter the QSHELL environment. Create a script as shown in Example 11-4 on the Windows desktop, and then, through Operations Navigator, drag and drop the file into the /home/fred directory.
You can write a script as shown in Example 11-5 to see the I/O details of the partitions.

Example 11-5 iodetails-script

PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
####################
ssh -T 9.5.17.228 lshwres -r io --rsubtype slot -m FelixComplex -F phys_loc,description,lpar_name,drc_index,bus_id --header

Figure 11-30 shows the script.

Figure 11-30 iodetails-script

The results are shown in Figure 11-31 on page 358 through Figure 11-36 on page 361.
QSH Command Entry
C07,Empty slot,null,2103000F,15
C08,Empty slot,null,2104000F,15
C09,Empty slot,null,2105000F,15
C11,I/O Processor,null,21010013,19
C12,PCI Ultra4 SCSI Disk Controller,null,21020013,19
C13,Empty slot,null,21030013,19
C14,Empty slot,null,21040013,19
C15,Empty slot,null,21050013,19
C01,I/O Processor,null,21010014,20
C02,PCI Ultra4 SCSI Disk Controller,null,21020014,20
C03,Empty slot,null,21030014,20
C04,Empty slot,null,21040014,20
C05,I/O Processor,null,21010015,21
C06,PCI Ultra4 SCSI Disk Co
QSH Command Entry
C07,Empty slot,null,21030012,18
C08,Empty slot,null,21040012,18
C09,Empty slot,null,21050012,18
C11,I/O Processor,null,21010016,22
C12,PCI Ultra4 SCSI Disk Controller,null,21020016,22
C13,Empty slot,null,21030016,22
C14,Empty slot,null,21040016,22
C15,Empty slot,null,21050016,22
C01,Empty slot,null,21010017,23
C02,Empty slot,null,21020017,23
C03,Empty slot,null,21030017,23
C04,Empty slot,null,21040017,23
C05,I/O Processor,null,21010018,24
C06,PCI Ultra4 SCSI Disk Controller,null,21020018,2
QSH Command Entry
C07,SCSI bus controller,SixteenProcs,2103000C,12
C08,Empty slot,SixteenProcs,2104000C,12
C09,Empty slot,SixteenProcs,2105000C,12

Figure 11-36 iodetails - continued 5

Figure 11-31 through Figure 11-36 show the output of the iodetails script. In this output, the IOP and IOA at locations C11 and C12 on bus ID 13 are not allocated to any partition (that is, lpar_name is null).
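An io-add script for these two slots, sketched in the same style as the earlier examples (the target partition ID of 4 is an assumption for illustration; substitute the ID of your own partition):

# io-add - assign the unallocated IOP (C11, DRC index 21010013) and
# IOA (C12, DRC index 21020013) to partition ID 4 (assumed for illustration)
PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T 9.5.17.228 chhwres -r io -m FelixComplex -o a --id 4 -l 21010013
ssh -T 9.5.17.228 chhwres -r io -m FelixComplex -o a --id 4 -l 21020013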
From the i5/OS main menu, run the command wrkjobscde and press Enter, then press F6 to add the entry (Figure 11-37).

Figure 11-37 wrkjobscde -io-add

Enter the job name in the Job name field (for example, IOADD). Enter the qsh command in the Command to run field, as shown in Figure 11-38, and press Enter.
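The same schedule entry can also be added in one step with the ADDJOBSCDE CL command; a sketch, assuming the script was saved as /home/fred/io-add and a one-off run at 22:00 is wanted:

ADDJOBSCDE JOB(IOADD) CMD(QSH CMD('/home/fred/io-add'))
           FRQ(*ONCE) SCDDATE(*CURRENT) SCDTIME('22:00:00')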
Figure 11-39 shows the added job schedule entry.

Figure 11-39 io-add-scheduled-entry

Note: To see the command syntax, log on to the HMC and, from the command line, type chhwres --help and press Enter.

Once the scheduled activity is completed, you can check the history log (which indicates the completion of the job), as shown in Figure 11-40.

Figure 11-40 io-add histlog
Run the script iodetails from the QSHELL to see the resource status as shown in Figure 11-41.
The removal script is shown in Figure 11-42.

Figure 11-42 io-remove -script

From the i5/OS main menu, run the command wrkjobscde and press Enter, then press F6 to add the entry, as shown in Figure 11-43.

Figure 11-43 wrkjobscde -io-remove
The scheduled removal entry is shown in Figure 11-44.

Figure 11-44 io-remove - scheduled

Figure 11-45 shows the added job schedule entry.
Once the scheduled activity is completed, you can check the history log (which shows the completion of the job), as shown in Figure 11-46.

Figure 11-46 io-remove histlog

Figure 11-47 shows the iodetails after removing the i/o.
Moving the i/o from one partition to another
To move the i/o from one partition to another, we need to know the partition IDs and the DRC index of the particular IOP or IOA. You can see the partition details in Figure 11-29 on page 357. To move the IOP (C11) and IOA (C12) from the SixteenProcs partition to the test partition, run the script shown in Example 11-8, which uses the details taken from Figure 11-29 on page 357 and Figure 11-41 on page 364.
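Example 11-8 appears here only as a capture; a sketch of the commands it would contain follows (the partition IDs 12 and 5 for SixteenProcs and test are assumptions for illustration; take the real IDs from the lssyscfg output on your HMC):

# io-move - move IOP C11 and IOA C12 from partition SixteenProcs to partition test
PATH=$PATH:/QOpenSys/usr/bin:/usr/ccs/bin:/QOpenSys/usr/bin/X11:/usr/sbin:.:/usr/bin
ssh -T 9.5.17.228 chhwres -r io -m FelixComplex -o m --id 12 --tid 5 -l 21010013
ssh -T 9.5.17.228 chhwres -r io -m FelixComplex -o m --id 12 --tid 5 -l 21020013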
Download the PuTTY utility programs (putty.exe and plink.exe) from the Internet to the folder c:\putty, as shown in Figure 11-48.

Figure 11-48 putty - Folder

From c:\putty, type plink and press Enter to see the command syntax, as shown in Figure 11-49.

Figure 11-49 Plink command syntax
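Once plink is available and a key has been registered on the HMC with mkauthkeys for your Windows user, an HMC command can be run non-interactively from the Windows command prompt. A sketch, using the user and addresses from earlier in this chapter:

c:\putty> plink -ssh fred@9.5.92.92 lssyscfg -r sys -F name
Server-9406-550-SN10F17AD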
Chapter 12. Using Work Management to influence System i5 resources

This chapter describes the new options added to OS/400 V5R3 Work Management to influence System i5 performance for specific workloads. These new options are processor and memory affinity on some multi-processor models of System i5 systems.
12.1 Main storage and processor affinity concept

In some environments and system configurations, processes and threads can achieve improved affinity for memory and processor resources. This improved level of affinity can result in improved performance. You can tune the main storage affinity level setting on your server at the process level and at the thread level.

Important: Tuning main storage affinity levels may improve performance in some environments or system configurations, or degrade it in others.
Figure 12-1 Processors and memory layout for n-way PowerPC MCMs

The memory affinity support recognizes the relationship between processors, memory, and multichip modules (MCMs) in SMP machines such as the IBM eServer System i5. The support provides improved performance to some high performance computing applications. Memory affinity is a special-purpose option for improving performance on IBM System i5 machines that contain multiple multichip modules.
12.2.1 QTHDRSCAFN (thread affinity)

This system value specifies whether secondary threads have affinity to the same group of processors and memory as the initial thread. It also specifies the degree to which the system tries to maintain the affinity between threads and the subset of system resources they are assigned. A change made to this system value takes effect immediately for all jobs that become active after the change, but only if they retrieve their affinity values from the system value.
12.2.3 ADDRTGE command — new parameters

In the following sections we provide a description of the new parameters.

Thread resources affinity (THDRSCAFN)
This specifies the affinity of threads to system resources.

Element 1: Group, single values
– *SYSVAL: When a job is started using this routing entry, the thread resources affinity value from the QTHDRSCAFN system value is used.
– *NOGROUP: Jobs using this routing entry will have affinity to a group of processors and memory.
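As an illustration of the new parameter, a routing entry that requests grouped thread affinity might be added as follows (a sketch only: the subsystem, sequence number, and compare value are hypothetical, and the second THDRSCAFN element is assumed to be the affinity level, *NORMAL or *HIGH, as described for the related system values):

ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(100) CMPVAL(*ANY)
        PGM(QSYS/QCMD) THDRSCAFN(*GROUP *HIGH)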
Chapter 13. Virtual Partition Manager

In this chapter we discuss the following topics:
- Introduction to Virtual Partition Manager for eServer System i5 systems
- Planning for Virtual Partition Manager
- Getting started with Virtual Partition Manager
- Preparing your system for Virtual Partition Manager
- Creating Linux partitions using Virtual Partition Manager
- Establishing network connectivity for Linux partitions
- Setting up i5/OS virtual I/O resources for Linux partitions
13.1 Introduction to Virtual Partition Manager for eServer System i5 systems

With the recently announced POWER5 processor-based eServer i5 systems, IBM delivers the third generation of logical partitioning for the iSeries family of servers. The new partitioning capabilities enable customers to further simplify their infrastructures. The IBM Virtualization Engine™, which provides support for logical partitioning and resource virtualization through i5/OS, is included with eServer System i5 systems.
13.2 Planning for Virtual Partition Manager

Virtual Partition Manager is enabled by enhancing the partition management tasks in Dedicated Service Tools (DST) and System Service Tools (SST) for i5/OS V5R3. This capability is enabled only for eServer i5 systems, allowing you to create up to a maximum of four Linux partitions in addition to the one i5/OS partition, which owns all of the I/O resources for the Linux partitions.
- Migration of partition configuration data to the HMC is not available. If an HMC is deployed at a future stage, you need to recreate the Linux partitions. The data stored through virtual I/O on i5/OS remains unaffected.
- Initially, the Virtual Partition Manager configuration screens are available only in English and are not translated.
- Virtual Partition Manager cannot be managed through services offerings such as the LPAR Toolkit or similar LPAR management tools provided by various IBM business partners.
13.2.4 Design and validate your partition configuration

Use the Logical Partition Validation Tool (LVT) to help you design a partitioned system. You can download a copy from the following Web address:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/index.htm?info/iphat/iphatlvt.htm

The LVT provides you with a validation report that reflects your system requirements while not exceeding logical partition recommendations.
13.3.1 Minimum configuration requirements

The following requirements apply for Linux partitions created on an eServer i5 system. Each partition requires the following components:
- Processor unit: 0.10 processing units allocated out of a shared processing pool.
- Memory: A minimum of 128 MB of memory or the region size (whichever is larger). Hypervisor memory is set aside from your total memory capacity for managing logical partitions.
Figure 13-1 Prerequisite tool selections

13.3.2 Complete initial setup of your eServer i5

Before you define Linux partitions and load a Linux distribution, you need to complete the initial server setup tasks using either the predefined or customized setup checklists. You can find the checklists at the following Web address:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/index.
13.4 Preparing your system for Virtual Partition Manager

This section provides step-by-step instructions on how to remove logical resources from i5/OS using the Virtual Partition Manager, in preparation for defining new Linux partitions. On your new eServer i5 system, by default, the i5/OS partition owns all of the processor, memory, and I/O resources. You can invoke the Virtual Partition Manager through either the Dedicated Service Tools (DST) or System Service Tools (SST) tasks.
2. Enter your user ID and password, as seen in Figure 13-3. This assumes that your Security Officer has already created a DST/SST user profile for you, with adequate privileges to allow you to perform partition creation and management tasks. For information about how to create DST user profiles, refer to the InfoCenter article on managing service tools user IDs and passwords at the following Web address:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/ic2924/index.
3. From the Start Service Tools (SST) menu, select option 5, Work with system partitions, as seen in Figure 13-4.
An informational message appears, as shown in Figure 13-5. This message appears when you enter the option to Work with System Partitions for the first time, or when you clear all partition configuration data.

Note: If another session is currently using the Virtual Partition Manager, an error dialog appears indicating that the tool is already in use.

Figure 13-5 Initial informational display
4. Press Enter at the message. The Logical Partition management tasks appear, as seen in Figure 13-6.

Figure 13-6 Work with System Partitions

5. Select option 3, Work with partition configuration, as shown in Figure 13-6. The objective of the next few steps is to remove processing and memory resources from the i5/OS partition so that we can create Linux partitions.
6. From the Work with Partition Configuration menu, select option 2, Change partition configuration, for your i5/OS instance, as shown in Figure 13-7.

Figure 13-7 Work with Partition Configuration
With a new system, one i5/OS partition will be defined, and the Change Partition Configuration display will show the defaults, as in Figure 13-8. This is where we remove resources from your i5/OS instance so that additional Linux partitions can be created.
7. Make several changes here, based on the resources you want to set for the i5/OS partition. You need to assign the CPU and memory allocations according to the planning you completed with the Logical Partition Validation Tool (LVT). We examine all of the changes step by step, as highlighted in Figure 13-9.
Shared Processor Pool Units: Specifies the total number of processing units that will be available to the partition after the resources are removed. In this example, the i5/OS partition is left with 100 processing units (that is, a full processor, with units expressed in hundredths) after the CPU resources are removed.

Minimum / Maximum Shared Processor Pool Units: A minimum of 0.10 processing units is required for every full processor that may be utilized by the given partition.
Size of Partition Memory: Linux partitions require a minimum of 128 megabytes. In this example, the value indicates the amount of main storage that remains with the i5/OS partition. Make the new value a multiple of the LMB size set during your initial setup using ASMI. For example, you cannot set a value of 6700; it produces an error message like the one shown in Figure 13-10.
Enable Workload Manager: The default value for Virtual Partition Manager is set to 2 = No, meaning that the partition is not allowed to use a future workload management tool to automatically adjust resource assignments within the partition, such as the IBM Enterprise Workload Manager.

Virtual Ethernet Identifiers: A value of 1 indicates that you are enabling one of the virtual Ethernet communications ports for inter-partition communications between Linux or i5/OS partitions.
9. Press Enter again on the Confirm Changed Partition screen, which completes the changes required on the i5/OS partition (Figure 13-12).

Figure 13-12 Partition Change Successful

Notice that the changes made to i5/OS resources require an IPL of the system, as indicated by the “<” in Figure 13-12. There is no need to perform this task at present; you can perform the IPL once you have defined all of the Linux partitions. Also notice that changes to the memory allocation of the i5/OS partition are not immediate.
10.
11. Figure 13-13 shows the available CPU and memory resources for creating new partitions. In the next section, we use these resources to define new Linux partitions.

Figure 13-13 Available Resources for Creating New Partitions

12. You are now ready to create Linux partitions using the Virtual Partition Manager and the resources that have been removed from the i5/OS partition.
When you exceed the allocation of five virtual I/O slots, the partition hypervisor sends a command to allocate more virtual I/O slots; at that point, it simply acquires everything it will ever need. Therefore, once you go beyond the eight available virtual I/O slots, the system sets itself up to use all of the available virtual I/O slots and enables them during the next IPL.
2. Assign the values for creating the new Linux partition per your Logical Partition Validation Tool (LVT) output, as shown in Figure 13-15. A brief explanation of each of the values is also provided here:
– Partition Identifier and Name: Enter the partition name for your Linux partition. You can also change the partition identifier if you choose to. In this example, the default (the next partition identifier number) given by the system is selected.
– Minimum / Maximum Shared Processor Pool Units: A minimum of 0.10 processing units is required for every full processor that may be utilized by the given partition. Assign the values appropriately, based on the range across which you want your partitions to utilize unused processing cycles.
– Uncapped processing: You have the option of making your Linux partition shared capped or shared uncapped. See the shared processor section in the IBM Information Center for more information about capped and uncapped processors.
3. Once these values are set, you will get a confirmation display, as shown in Figure 13-16. Press Enter, and you will be returned to the Work with System Partitions display. You can repeat the above steps to define another Linux partition, if necessary.
4. Once you have defined all of your partitions, you can view them using option 3 from Work with System Partitions, as shown in Figure 13-17. In this example, Linux4 was defined as a capped processor partition.

Figure 13-17 View of All New Partitions Defined
5. You can either update the partitions to change any resource configurations, or delete them and recreate them. Keep in mind that you can only change or delete one partition at a time. If you want to start all over again and clean up all of the configuration information, you can use the option to Clear Partition Configuration Data, as discussed in “Recover configuration data” on page 403.
6.
Recover configuration data
If for some reason you want to restart from the very beginning by deleting all of the partitions, you can do so by taking option 5 from the Work with System Partitions display, and then selecting option 7, Clear configuration data, as shown in Figure 13-19.

Figure 13-19 Clear configuration data

Take care when using this option, as it completely removes all Linux partition configurations.
Migration considerations when moving to HMC The following steps outline some of the planning considerations for migrating from the Virtual Partition Manager to HMC-managed Linux partitions. Note that you cannot save and restore the partition configuration data; you must re-create the partition definitions in their entirety. However, you do not need to re-create the data saved in the Linux partitions through the Network Server Storage Space.
There are a number of steps that you must complete for a Proxy ARP configuration, which include the following actions:
- Define a virtual LAN segment that places all of the Linux partitions and the i5/OS partition on the same virtual LAN. Complete this through the LPAR definition (discussed earlier).
- Create an Ethernet Line Descriptor for the virtual LAN adapter defined for the i5/OS partition.
- Create a TCP/IP interface for the virtual LAN adapter defined for the i5/OS partition.
2. Use the CRTLINETH command to create the Ethernet line descriptor. See Figure 13-21. Figure 13-21 Create Ethernet Line Description – Line Description: Enter the name for the Ethernet line description. For virtual Ethernets it is common practice to start the name with ‘VRT’ and include the virtual Ethernet number in the name. As an example, if you are creating the line descriptor for the first virtual Ethernet, a common name to use would be ‘VRTETH01’. A sample command follows.
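As a hedged illustration, the command might look like the following; the resource name CMN05 is only an example, and you should substitute the communications resource reported for your own virtual adapter (WRKHDWRSC TYPE(*CMN) lists them):

CRTLINETH LIND(VRTETH01) RSRCNAME(CMN05) TEXT('Virtual Ethernet 1')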
3. Once the Ethernet line descriptor is created, it needs to be varied on. You can accomplish this from the Work with Configuration Status (WRKCFGSTS) display, as seen in Figure 13-22. Figure 13-22 Work with Configuration Status 4. Create the TCP/IP interface for the i5/OS network adapter for the virtual LAN. To create the TCP/IP interface, type the command ADDTCPIFC (Add TCP/IP Interface) as seen in Figure 13-23.
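If you prefer to vary the line on directly rather than through the Work with Configuration Status display, the Vary Configuration command can be used; the line name is the one created in the previous step:

VRYCFG CFGOBJ(VRTETH01) CFGTYPE(*LIN) STATUS(*ON)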
Figure 13-23 Add TCP/IP interface – Internet Address: Type the address of the network interface for the i5/OS partition on the virtual LAN. Note: This is the address that Linux partitions use as their gateway (or route) to the external network. – Line description: The line description is the name of the Ethernet line descriptor (defined earlier) for the network adapter on the virtual LAN. – Subnet Mask: The subnet mask defines the size of the network to which the interface is being attached.
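For illustration only (the addresses here are examples, not recommendations), an interface for the i5/OS side of the virtual LAN might be added as follows:

ADDTCPIFC INTNETADR('10.1.1.1') LIND(VRTETH01) SUBNETMASK('255.255.255.0')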
Figure 13-24 Start TCP/IP interface 6. Proxy ARP requires that TCP/IP packets flow between two network interfaces (the virtual interface and the real/physical interface). This requires that “IP datagram forwarding” be enabled. Enter the command CHGTCPA (Change TCP/IP Attributes), and change the value of IP datagram forwarding to *YES as seen in Figure 13-25. Figure 13-25 Change TCP/IP attributes Note: After you enable datagram forwarding, ping the address of the i5/OS TCP/IP interface on the virtual LAN.
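For reference, the attribute change made in step 6 is a one-line command; IPDTGFWD is the IP datagram forwarding parameter:

CHGTCPA IPDTGFWD(*YES)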
13.7 Setting up i5/OS virtual I/O resources for Linux partitions There are three components you need to create in i5/OS to support Linux partitions with hosted resources. This section provides instructions for defining the Network Server Descriptor, creating the Network Server Storage Space, and linking the two; a sample command follows the parameter descriptions in each case. 13.7.1 Network Server Descriptor The Network Server Descriptor defines a number of parameters for the Linux environment, including the startup location as well as startup parameters.
– Network server description: This is the user-defined name for the Network Server. – Resource name: The Resource name indicates the Virtual SCSI server adapter that provides virtual I/O resources (virtual disk [NWSSTG], virtual CD/DVD, virtual tape) to the Linux partition that has the corresponding Virtual SCSI client adapter. *AUTO indicates that the system determines the resource name of the first (and in this case the only) Virtual SCSI server adapter for the partition.
– IPL source: The IPL source indicates where to look for the initial boot file. A *NWSSTG setting indicates that the initial boot file is in the bootable disk partition of the first disk linked to the Network Server Descriptor. A *STMF setting indicates that the initial boot file is a stream file located in the IFS; in that case, the path given on the IPL stream file parameter is used. Note: The installation of Linux is typically performed with IPL Source set to *STMF.
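As a minimal sketch, a guest Network Server Descriptor might be created as follows; the NWSD name LINUX1 and the partition name are hypothetical and must match your own partition definition. For installation, you would typically use IPLSRC(*STMF) with the IPLSTMF parameter pointing to the install kernel instead:

CRTNWSD NWSD(LINUX1) RSRCNAME(*AUTO) TYPE(*GUEST) PARTITION(LINUX1) IPLSRC(*NWSSTG)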
Use the following steps to create the Network Server Storage Space: 1. Type the Create Network Server Storage Space command, CRTNWSSTG, which creates the Network Server Storage Space. Prompting the command displays the screen shown in Figure 13-28. Figure 13-28 Create Server Storage Space – Network server storage space: A user-defined name given to the network server storage space. – Size: The size field indicates the size (in megabytes) of the virtual disk.
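As an example only, a 4 GB virtual disk for Linux might be created with the following command; the name is a placeholder, and you should size the disk from your LVT output:

CRTNWSSTG NWSSTG(LINUXDSK1) NWSSIZE(4096) FORMAT(*OPEN) TEXT('Linux virtual disk')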
3. Associate the Network Server Storage Space with the Network Server by linking the storage space to the network server. Type the Add Server Storage Link command, ADDNWSSTGL, as seen in Figure 13-29. Figure 13-29 Add Server Storage Link – Network server storage space: The name of the Network Server Storage Space to be linked. – Network server description: The name of the Network Server to which the storage space is linked.
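Continuing the hypothetical names used above, the link command would be:

ADDNWSSTGL NWSSTG(LINUXDSK1) NWSD(LINUX1)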
Virtual console access Access to the Linux console is provided through the hosting i5/OS partition via a TCP/IP-based application. Access to the console is limited to DST user IDs that have been granted “remote panel key authority”. This section provides the instructions for defining the DST user and accessing the virtual console. Use the following steps to create the Dedicated Service Tools (DST) user with the correct authorities: 1. DST users are created through System Service Tools.
4. Press Enter. The Create Service Tools User ID screen appears, as shown in Figure 13-31. Figure 13-31 Create Service Tools ID – Password: Type the password for the user ID being created. This is the password used to access the virtual console. – Set password to expire: Type 2 to indicate that the password should not be set to expire. 5. Press Enter to complete the DST user definition. 6. After you create the DST user, modify the authorities for the user to include the remote panel key authority.
7. On the Work with Service Tools User IDs display, select option 7 (Change Privileges) for the user just created, as seen in Figure 13-32. Figure 13-32 Change Service Tools User Privileges – Partition remote panel key: This is the authority that needs to be granted for virtual console access. Note: In addition to the Partition remote panel key authority, the user ID also requires “System partitions—operations” and “System partitions—administration” authority. 8.
2. A list of Linux partitions is provided. Type the number that corresponds to the partition for which you want to access the console. 3. When prompted for the OS/400 service tools user id, type the DST user that was created for virtual console access. 4. When prompted for the OS/400 service tools password, type the password defined for the DST user. 5. After the Virtual Console is accessed, the Network Server can be varied on.
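For reference, the virtual console application listens on TCP port 2301 of the hosting i5/OS partition, so any standard Telnet client can reach it; the host name here is an example:

telnet rchas10 2301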
13.8 Virtual media management This section covers virtual media management. 13.8.1 Linux native backup with virtual tape The OS/400 tape drive can be used by Linux for Linux-based save/restore of files and directories in a hosted partition. Linux-oriented backup has the same attributes as i5/OS-oriented backup at the file and directory level. The only difference is that the backup files are not saved to files in the NFS directory, but directly on tape.
We receive the output as shown in Figure 13-36.

rchas10d:~ # mt -f /dev/st1 status
drive type = Generic SCSI-2 tape
drive status = 805306880
sense key error = 0
residue count = 0
file number = 0
block number = 0
Tape block size 512 bytes. Density code 0x30 (unknown).
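As a sketch of a Linux-side save and restore using the virtual tape device shown above (assuming /dev/st1 maps to the virtual tape, as in our example):

# Save the /home directory tree to the virtual tape
tar -cvf /dev/st1 /home
# Restore it from the virtual tape
tar -xvf /dev/st1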
Chapter 14. Firmware maintenance This chapter describes the various options available for maintaining both HMC and managed system firmware levels. We show you, through examples, some of the main firmware update options. We discuss the different methods of updating the HMC to a new software level as well as installing individual fix packs. We also cover the backup of the HMC to help with the recovery process in the event of a disaster.
14.1 HMC firmware maintenance The HMC software level has to be maintained just like the i5/OS operating system and managed system firmware (SP). HMC firmware is packaged as a full Recovery CD set or as a Corrective Service pack/fix image. The HMC recovery CDs are bootable images and can be used to perform a complete recovery of the HMC (scratch install) or an update to an existing HMC version. The HMC update packages are available on CDs or as downloadable zip files.
14.1.1 How to determine the HMC installed software level There are various ways to display the current HMC software level depending on whether you are using the true HMC console, the Websm client, a restricted shell terminal, or an ssh client. HMC console From the true HMC console, select Help → About Hardware Management Console from the HMC desktop toolbar. A panel similar to the one shown in Figure 14-1 is displayed. This screen shows the installed HMC version, release, and build.
Figure 14-2 Websm HMC software level HMC ssh client/restricted shell terminal You can start an ssh client to the HMC (see “Scheduling the DLPAR function from Windows” on page 368) or a restricted shell terminal (see “Initial tour of the desktop” on page 56). By using the lshmc -V command, we can see the installed HMC software level. See Figure 14-3.
Figure 14-4 shows the HMC support Web site for HMC version 4 release 4. Figure 14-4 HMC support Web site Obtaining the HMC recovery CDs From Figure 14-4 you can see that the HMC recovery CDs are not available to download and can only be ordered from IBM and sent via post.
Important: You must NOT unzip the HMC update files when burning them to external media. The HMC will itself unpack these files during the install process. You can also use the external IBM FTP server to download the HMC_Update_VxRyM0_n.zip files to one of your company’s servers (such as the i5/OS IFS). The external FTP server site is: ftp://techsupport.services.ibm.
14.1.4 HMC backup of critical console data We recommend taking a backup of the HMC’s Critical Console Data (CCD) before proceeding with the HMC upgrade process. This backup saves all the HMC user data, user preferences, any partition profile data backed up on the HMC disk, and various log files. Important: The backup CCD is not a complete save of the HMC; it contains only user data and any system updates/fixes applied since the last install or upgrade from the HMC recovery CDs.
Important: If you need to recover your HMC CCD from an FTP server or NFS during an HMC scratch install, you will need to reconfigure the HMC network settings before you are able to connect to the relevant remote system. Saving to DVD-RAM eliminates this step. Back up Critical Console Data to DVD on local system There is a DVD drive supplied with the HMC (eServer BACKUP DVD) which can be used to back up the HMC CCD. The only DVD format compatible for writing is DVD-RAM.
1. In the HMC Navigation area, click Licensed Internal Code Maintenance. 2. Then click HMC Code Update. 3. In the right-hand window, click Back up Critical Console Data. 4. Select the Send back up critical data to remote site radio button and click the Next button. 5. Enter the host name/IP address of the i5/OS system, along with a valid i5/OS user id and password. You may also enter a useful text description for the backup in the window provided.
14.1.5 Updating the HMC software The media you use to update the HMC depends on the method chosen to upgrade it. We show you two ways of upgrading the HMC: the first method uses the HMC update packages from an FTP server; the second uses the HMC recovery CDs. Important: The examples in this section are based on the HMC upgrade to V4R4M0 and may change with future upgrades.
The next steps show how to update the HMC from an i5/OS partition. These steps can be performed from the physical HMC or through the Websm client: 1. In the HMC Navigation area, click Licensed Internal Code Maintenance. 2. Then click HMC Code Update. 3. In the right-hand window click Install Corrective Service. 4. The Install Corrective Service panel appears (Figure 14-9). You need to select the second radio button (Download the corrective service file from a remote system) and fill in the supplied fields.
When you have completed all the fields with the correct information, click OK to continue. 5. The HMC working panel appears (Figure 14-10), showing the status of the install process. The HMC update data is inflated and then installed on the HMC. How long the install process takes will depend on the size of the HMC update zip file. In our example, the install of the first HMC package took around 20 minutes. Figure 14-10 Install of HMC update via FTP server 6.
Updating the HMC software level from an HMC Recovery CD set If you have received the HMC Recovery CD set, then you can upgrade your HMC release using these CDs. When we update the HMC with the Recovery CD set, we are in fact replacing all the HMC data on the HMC’s disk. To ensure that we keep all of our user data, such as partition/profile data, user profiles, user preferences, network configuration, and so on, we must perform the Save Upgrade Data task on the HMC immediately before the HMC update.
e. A confirmation window appears (Figure 14-13) before the save upgrade data is saved to the HMC disk. Figure 14-13 Confirm Save Upgrade Data window Click Continue to proceed with the save. f. When the save is complete, an information window opens; see Figure 14-14. Figure 14-14 Save of Upgrade Data completed 2. Next, shut down and power off the HMC. 3. Insert the first HMC recovery CD into the HMC DVD drive and power on the HMC console. 4.
5. A second HMC Hard Disk Upgrade screen is displayed (Figure 14-16). This screen explains the upgrade process and states that the Save Upgrade Data task must have been performed before continuing. Important: You must NOT continue with the update process if you have not completed the HMC Save Upgrade Data task. If you upgrade the HMC without this save, all the HMC configuration and partition/profile data will be lost.
7. When the HMC has finished installing from the first CD, you are prompted to insert the second CD (Figure 14-18). Figure 14-18 HMC upgrade - insert CD 2 8. Remove the first recovery CD from the HMC DVD drive and insert the second recovery CD. Press any key to continue the HMC upgrade process. The HMC will reboot and then start installing from the second CD. 9. When the HMC has finished installing from the second CD, you are prompted to install the third HMC recovery CD (Figure 14-19).
10.When the HMC has finished installing from the third CD, you are prompted to install the fourth HMC recovery CD (Figure 14-20). Figure 14-20 HMC upgrade - insert CD 4 Remove the third recovery CD from the HMC DVD drive and insert the fourth recovery CD. Type 1 and press Enter to continue with the HMC upgrade process. 11.When the HMC has finished installing from the fourth CD you are prompted to either Restore the Critical Console Data from DVD or finish the HMC installation (Figure 14-21).
14.1.6 Installing an individual HMC fix In our example, as our HMC is not connected to the Internet, we have already downloaded the relevant HMC fix file (MH00222.zip) to our i5/OS partition and have stored it in the /home/qsecofr directory in the IFS (Figure 14-22).

Work with Object Links
Directory . . . . :  /home/qsecofr
Type options, press Enter.
  2=Edit  3=Copy  4=Remove  5=Display  11=Change current directory ...
Opt   Object link
      MH00222.zip
Figure 14-23 HMC fix install screen When you have completed all the fields with the correct information, click OK to continue. 5. The HMC working panel appears, showing the status of the install process (Figure 14-24). The HMC fix data is inflated and then installed on the HMC. Figure 14-24 HMC fix install working screen How long the install process takes will depend on the size of the HMC fix zip file. In our example, the install of the MH00222.zip package took around 5 minutes. 6.
7. This completes the HMC code installation. 8. To verify the new HMC software level, refer to section 14.1.1, “How to determine the HMC installed software level” on page 423. 14.2 Licensed internal code updates This section looks at the various methods used to manage and update an i5 managed system’s firmware. 14.2.1 Firmware overview Firmware refers to the underlying software running on an i5 system independently of any type of operating system (i5/OS, Linux, or AIX).
Starting with GA5 firmware, updates will be available in one of the following formats, depending on the changes contained in the firmware update: Concurrent install and activate: Fixes can be applied without interrupting running partitions or restarting the managed system. Concurrent install with deferred disruptive activate: Fixes can be applied in a delayed fashion and activated the next time the managed system is restarted.
When you shut down the i5/OS service partition, the b-side firmware is copied to the t-side on the SP. If the b-side and t-side are in sync, then the a-side is copied to the p-side on the SP. Important: Applying MHxxxxx PTFs to a non-service i5/OS partition will have no impact on the firmware update process, as only the defined service partition has the authority to update the SP with any firmware updates. The managed system must be restarted to activate any new firmware changes.
Important: If you change the update policy to allow firmware updates from the operating system, firmware updates from the HMC are not allowed unless the system is powered off. When the managed system is powered off, firmware updates can be performed from the HMC regardless of the setting of this policy. However, care should be taken when updating firmware from both the HMC and the operating system.
Important: For security reasons, we recommend that the admin user ID password be changed from the default supplied password. 7. The main ASM screen is presented. In the navigation area, click System Configuration and select Firmware Update Policy (Figure 14-29). Figure 14-29 Firmware update policy screen If you wish to change the firmware update policy, select the appropriate source from the drop-down selection list and click Save settings to complete the operation.
4. The Managed System Server Property Dialog window is shown (Figure 14-30). You can select any i5/OS partition to be the service partition, although it must be in an inactive state to be added or removed. Also, only one i5/OS partition can be the service partition at any given time. Figure 14-30 Set i5/OS service partition 5. Select the new service partition from the drop-down menu and click OK.
14.2.3 Displaying the current firmware levels The installed firmware levels can be seen through both the HMC and i5/OS partitions. This section shows how you can use both methods to view the managed system firmware levels. Using the HMC to display the current firmware levels Use the following steps to display the managed system firmware levels: 1. In the HMC Navigation area, click Licensed Internal Code Maintenance. 2. Then click Licensed Internal Code Updates. 3.
Using the i5/OS service partition to display firmware levels You can use the following steps from an i5/OS 5250 screen to display the current installed firmware levels: 1. Enter the STRSST command on an i5/OS command line and enter a valid user ID and password. 2. Select option 1, Start a service tool, and press Enter. 3. Then select option 4, Display/Alter/Dump, and press Enter. 4. Next take option 1, Display/Alter storage, and press Enter. 5.
14.2.4 Updating firmware through the HMC (out-of-band) This section shows how to update the i5 firmware via the HMC. Important: For all models except 59x model servers, we recommend that you install HMC fixes before you upgrade to a new server firmware release. For 59x model servers, you must install HMC fixes before you upgrade to a new server or power subsystem firmware release.
You should select Server Firmware: Update Policy Set to HMC from the drop-down topic window and click Go. The next screen shown is the iSeries Recommended Fixes - Server Firmware: Update Policy Set to HMC Web page (Figure 14-35). Figure 14-35 Server firmware - HMC update policy set to HMC There are numerous ways of obtaining the i5 firmware, which are explained in detail on this Web page.
Installing i5 out-of-band firmware updates In this section we show how to update the i5 firmware through the HMC, using a firmware update CD. Figure 14-36 Licensed Internal Code Updates main menu screen The method used to install a firmware update depends on the release level of firmware currently installed on your system and the release level you intend to install. The release level of the new firmware can be determined from the prefix of the new firmware level’s filename.
6. In the Change Licensed Internal Code window, select Start Change Licensed Internal Code wizard, and click OK to continue. 7. In the Specify LIC Repository window, select the repository location from which you want to download/install the server firmware fixes, and click OK. The following options are available: IBM service Web site: If your HMC has a VPN/modem connection to the Internet you can use this option to download the latest firmware fixes directly to the HMC.
If you wish to change the type of firmware installation, click the Advanced Options button. The Managed System and Power Licensed Internal Code (LIC) Concurrency window is presented (Figure 14-38). Figure 14-38 Licensed Internal Code Concurrency screen The available options for the firmware installation are shown. We decided to leave the installation as Concurrent.
10. The Hardware Management Console License Agreement panel is then shown (Figure 14-40). Figure 14-40 HMC License Agreement screen - update release You should read the license agreement before clicking the Accept button. 11. The Change Licensed Internal Code Wizard Confirmation screen appears (Figure 14-41). This screen shows all the managed systems that will be updated and the type of update. You can use the View Levels button to see the level of firmware to be installed.
12. The Change Licensed Internal Code Wizard Progress window appears (Figure 14-42). When you install server firmware updates on the t-side, the existing contents of the t-side are permanently installed on the p-side first. Figure 14-42 Change LIC Wizard - Starting Change LIC Wizard - status window In our example, the new firmware was installed after 20 minutes. To activate this new level of code, a complete restart of the managed system is required.
In our example, we select DVD drive, as our firmware release upgrade is on optical media, and click OK to continue. 7. The Hardware Management Console License Agreement panel is then shown (Figure 14-43). Figure 14-43 HMC License Agreement screen - upgrade release You should read the license agreement before clicking the Accept button. 8. The Upgrade LIC - Confirm the Action window appears (Figure 14-44).
In our example, we see a current EC number of 01SF225 and a new EC number of 01SF230. We also see that the following message is displayed on the HMC window: Quiesce any applications currently running on your operating systems for the systems listed below. This message means that you will need to manually shut down all logical partitions on this managed system before continuing. If you do not power down these partitions, they will be shut down abnormally during the firmware release upgrade.
14.2.5 Updating firmware through an i5/OS service partition (in-band) This section shows how to update the i5 firmware via an i5/OS service partition. The examples contained in this section are based on firmware levels available at the time of writing this redbook and may change with future releases. Important: We recommend that you install HMC fixes before you upgrade to a new server firmware release.
The next screen shown is the iSeries Recommended Fixes - Server Firmware: Update Policy Set to Operating System Web page (Figure 14-47). Figure 14-47 i5 recommended fixes Web site MHxxxxx - in-band There are numerous ways of obtaining the i5 firmware. Normally the firmware is ordered by using a marker PTF MHxxxxx. This marker PTF may have several co-requisite PTFs which make up the firmware package.
Figure 14-48 shows our current firmware levels before we install and apply our marker PTF. See “Using the i5/OS service partition to display firmware levels” on page 447 to view your own system firmware levels. We can see that our system firmware update policy is set to operating system, as indicated by the OS MANAGED keyword shown in the Display Formatted Data output in the figure.
3. We use the i5/OS DSPPTF command to see the status of the applied PTFs (see Figure 14-49). There is a new PTF status indicator for the firmware PTFs. Notice in the status field that our firmware PTFs are set as ‘Not applied - IPL’. This new status means that we must perform a system IPL (shut down all partitions and restart the managed system) to activate the new firmware PTFs. (Figure 14-49 shows the Display PTF Status screen.)
6. When the service partition is active, check the PTF status with the i5/OS DSPPTF command, and check the firmware levels again from the DST/SST environment. Figure 14-50 shows our firmware levels after the system IPL; the Display Formatted Data output now reports LS Flash Sync Enabled.
Chapter 15. HMC Access Password Reset Using Advanced System Management Interface (ASMI) The HMC Access password is a managed system password used to authenticate the HMC. It is one of three managed system passwords set when the system is first installed. Using the Advanced System Management Interface (ASMI), you can reset the HMC Access password if the password is lost. You can access ASMI via a Web browser, an ASCII console, or the HMC. In this section, we access ASMI via the HMC or a Web browser only.
15.1 Accessing the ASMI using the HMC To access the Advanced System Management Interface using the Hardware Management Console (HMC), complete the following steps: 1. Ensure that the HMC is set up and configured properly. 2. In the HMC console navigation area, expand the managed system you want to work with. 3. Expand Service Applications and select Service Focal Point. 4. In the content area, select Service Utilities. 5. From the Service Utilities window, select the managed system you want to work with. 6.
When your Web browser is connected to ASMI, it displays the ASMI main page, as shown in Figure 15-1. Sign in to ASMI as user admin and enter the password for user admin. The default password for user admin is admin. Figure 15-1 ASMI main page
After you have logged into ASMI, the Web browser will display the ASMI main menu as shown in Figure 15-2.
To change the HMC Access password after you have logged into ASMI, perform the following steps: 1. Select the Login Profile menu and expand it. It displays four sub-menus: Change Password, Retrieve Login Audits, Change Default Language, and Change Installed Language. 2. Select the Change Password menu (Figure 15-3). Figure 15-3 Select Change Password menu from Login Profile
3. Select the user ID of the user whose password you wish to change. To change the HMC Access password, select the HMC user ID from the drop-down menu (Figure 15-4).
4. Enter the current admin password, enter the new password for the HMC Access user, and re-enter the new password, as shown in Figure 15-5. Figure 15-5 Enter password for admin and HMC
5. If the admin password is not entered correctly, ASMI reports the failure and the HMC Access password is not changed, as shown in Figure 15-6.
6. Click Continue to change the HMC Access password. ASMI notifies you if the password has been changed successfully (Figure 15-7). Figure 15-7 Change password completed After resetting the HMC Access password, you can use it to authenticate the HMC to the managed system. If you have lost your administrator password, you cannot reset the HMC Access password until you know your current administrator password. There are two methods to reset the administrator password: 1.
Appendix A. HMC command list This appendix contains the following topics:
- HMC CLI introduction
- HMC CLI commands listed by task
- HMC CLI commands listed by name
- HMC CLI commands listed by category
- HMC CLI command usage
- HMC CLI command attributes
HMC CLI introduction The primary intention of the HMC CLI (Command Line Interface) is the creation of scripts to automate the management of partition profiles. For example, a script could move processing resources into a partition for nightly batch processing and move those resources back before the start of daily operations in the morning. HMC command naming convention HMC commands are named following the UNIX naming convention for commands. In particular: mk is used for create/make actions, ch is used for change actions, ls is used for list actions, and rm is used for remove/delete actions.
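As a sketch of such a script (the managed system and partition names here are hypothetical), two dedicated processors could be moved into a batch partition at night and returned in the morning:

# Move two processors into the batch partition for the nightly run
chhwres -m Server-570 -r proc -o m -p PROD -t BATCH --procs 2
# ... nightly batch processing ...
# Move them back before daily operations resume
chhwres -m Server-570 -r proc -o m -p BATCH -t PROD --procs 2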
Task / HMC CLI command:
- Create system profile: mksyscfg
- Delete LPAR: rmsyscfg
- Delete LPAR profile: rmsyscfg
- Delete system profile: rmsyscfg
- Determine DRC indexes for physical I/O slots: lshwres
- Determine memory region size: lshwres
- Fast power off the managed system: chsysstate
- Get LPAR state: lssyscfg
- Hard partition reset: chsysstate
- List all partitions in a managed system: lssyscfg
- List all systems managed by the HMC: lssyscfg
- List I/O resources for a managed system: lshwres
- List LPAR profile properties: lssyscfg
HMC CLI commands by name Table A-3 lists the HMC commands by name.
Command / Description / Associated tasks:
- rmsyscfg: Remove system configuration. Associated tasks: Delete LPAR, Delete LPAR profile, Delete system profile.
HMC CLI commands by category The following section discusses how to perform various functions using the HMC CLI. The functions are broken down into categories: managed system, DLPAR, and so on. Working with the managed system Here we list the commands for working with the managed system. Powering on the managed system Use the chsysstate command to power on the managed system. A sample invocation is shown below.
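For example, with the managed system name left as a placeholder:

chsysstate -m <managed system name> -r sys -o on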
Listing all systems managed by the HMC Use the lssyscfg command to list system configuration and managed system MTMS information.
The attributes used include: name, desired_proc_units, min_proc_units, lpar_id, hsl_opti_pool_id, load_source_slot, min_mem, min_interactive, virtual_scsi_adapters, max_mem, desired_interactive, uncap_weight, proc_type, max_interactive, virtual_eth_adapters, and max_virtual_slots. Tip: Instead of entering configuration information on the command line with the -i flag, the information can instead be placed in a file, and the filename specified with the -f flag. Command attributes are discussed in “HMC CLI command attributes” on page 494.
Note: Instead of entering configuration information on the command line with the -i flag, the information can instead be placed in a file, and the filename specified with the -f flag. Command attributes are discussed in “HMC CLI command attributes” on page 494. Activating a partition Use the chsysstate command to activate a partition.
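For example, with placeholders for the names involved, a partition is activated with a specific profile as follows:

chsysstate -m <managed system name> -r lpar -o on -n <partition name> -f <profile name>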
removed, or moved must be specified with the -q flag. This quantity is in megabytes, and must be a multiple of the memory region size for the managed system. Determining memory region size To see what the memory region size is for the managed system, enter this command: lshwres -r mem -m <managed system name> --level sys -F mem_region_size The value returned is the memory region size in megabytes.
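Putting these pieces together (names are placeholders, and the quantity must be a multiple of your memory region size), 512 MB could be moved between two partitions with:

chhwres -m <managed system name> -r mem -o m -p <source partition> -t <target partition> -q 512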
Processors Processing resources can be dynamically added to a partition, removed from a partition, or moved from one partition to another. These processing resources depend on the type of processors used by the partitions: For partitions using dedicated processors, processing resources are dedicated processors. For partitions using shared processors, processing resources include virtual processors and processing units.
Processing resources can also be moved between partitions using dedicated processors and partitions using shared processors. To move processing resources from a partition using dedicated processors to a partition using shared processors, specify the quantity of processors using the --procs flag. This quantity is converted to processing units (by multiplying the quantity by 100) by the HMC for the target partition.
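As an illustration with placeholder names, the following moves one dedicated processor to a partition using shared processors; the HMC converts it to 100 processing units (1.00 processors) for the target:

chhwres -m <managed system name> -r proc -o m -p <dedicated partition> -t <shared partition> --procs 1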
Listing LPAR profile properties Use the lssyscfg command to list a partition profile. Type the following command: lssyscfg -r prof -m <managed system name> --filter "lpar_names=<partition name>,profile_names=<profile names>" Use the --filter parameter to specify the partition for which partition profiles are to be listed, and to specify which profile names to list. While the filter can only specify a single partition, it can specify multiple profile names for that partition.
Working with system profiles This section describes commands for working with system profiles. Creating a system profile Use the mksyscfg command to create a system profile. In the following example, the user is making a system profile named sysprof1, with partition profile prof1 for partition lpar1 and partition profile prof1 for partition lpar2.
Required attributes for a system profile: name, lpar_names | lpar_ids, profile_names. Listing hardware resources The lshwres command, which lists the hardware resources of a managed system, can be used to display I/O, virtual I/O, processor, and memory resources.
Virtual serial servers with open connections:
lshwres -m <managed system name> -r virtualio --rsubtype serial --level openserial
Virtual SCSI adapters:
lshwres -m <managed system name> -r virtualio --rsubtype scsi --level lpar
Partition-level virtual slot information:
lshwres -m <managed system name> -r virtualio --rsubtype slot --level lpar
Virtual slot information:
lshwres -m <managed system name> -r virtualio --rsubtype slot --level slot
Listing memory resources Use the following commands to list memory information.
    slot - I/O slot
    taggedio - tagged I/O
    eth - virtual ethernet
    scsi - virtual SCSI
    serial - virtual serial
-m <managed system name> - the managed system's name
-o - the operation to perform:
    a - add resources
    r - remove resources
    m - move resources
    s - set attributes
-p <partition name> - the user-defined name of the partition to add resources to, to move or remove resources from, or to set attributes for
--id <partition ID> - the ID of the partition to add resources to, to move or remove resources from, or to set attributes for
-r - the type of system resource(s) to be changed:
    sys - managed system
    lpar - partition
    prof - partition profile
    sysprof - system profile
-m <managed system name> - the managed system's name
-f <configuration file> - the name of the file containing the configuration data for this command; the format is:
    attr_name1=value,attr_name2=value,... or "attr_name1=value1,value2,...",...
-i "<configuration data>" - the configuration data for this command; the format is:
    "attr_name1=value,attr_name2=value,..."
--help - prints this help
-f <profile name> - the name of the profile to use when activating a partition; this parameter is only valid for -r lpar
--test - validate the system profile; this parameter is only valid for -r sysprof
--continue - continue on error when activating a system profile; this parameter is only valid for -r sysprof
--help - prints this help
List hardware resources (lshwres) This command lists the hardware resources of a managed system (Example A-4).
-r io --rsubtype iopool [--filter pools,lpar_ids | lpar_names]
-r io --rsubtype taggedio [--filter lpar_ids | lpar_names]
-r mem --level sys
-r mem --level lpar [--filter lpar_ids | lpar_names]
-r proc --level sys
-r proc --level lpar [--filter lpar_ids | lpar_names]
-r proc --level sharedpool
-r virtualio --rsubtype eth --level sys
-r virtualio --rsubtype eth --level lpar [--filter slots,vlans,lpar_ids | lpar_names]
-r virtualio --rsubty
Create (make) system configuration (mksyscfg) This command creates partitions, partition profiles, or system profiles (Example A-6). Example: A-6 Command usage for mksyscfg Usage: mksyscfg -r lpar | prof | sysprof -m <managed system name> -f <configuration file> | -i "<configuration data>" [--help] Creates partitions, partition profiles, or system profiles.
-r sysprof required: name, lpar_ids | lpar_names, profile_names Remove system configuration (rmsyscfg) This command removes a partition, a partition profile, or a system profile (Example A-7). Example: A-7 Command usage for rmsyscfg Usage: rmsyscfg -r lpar | prof | sysprof -m <managed system name> [-n
HMC CLI command attributes Table A-7 lists the command attributes that are available, along with the commands in which they are valid, and provides a description of each attribute. Table A-7 HMC CLI command attributes Attribute / Used in command / Description: activated_profile lssyscfg User-defined name of the profile that was used when the partition was activated. addl_vlan_ids chhwres and lshwres List of additional VLAN IDs assigned to the virtual ethernet adapter.
cod_capable lssyscfg Indicates whether the managed system supports Capacity on Demand (CoD). Possible values are 0 (no) or 1 (yes). config lshwres Virtual slot configuration state. Possible values are empty, ethernet, SCSI, serial, or SMC. config_proc_units lshwres Total number of processing units assigned to the shared processor pool. configurable_sys_mem lshwres Total amount, in megabytes, of configurable memory on the managed system.
curr_mem lshwres Current amount of memory, in megabytes, which is owned by the partition. curr_mem_region_size lshwres The current memory region size in megabytes. curr_min_interactive lshwres A percentage. This attribute is only valid for OS/400. curr_min_mem lshwres Minimum amount of memory, in megabytes, that the partition will support when running.
desired_proc_units chsyscfg, lssyscfg, and mksyscfg Desired number of processing units for the partition. This attribute is only valid when the processing mode is shared. device_attr chhwres and lshwres Indicates whether the virtual SCSI or serial device is a client or server device. Valid values are client or server. drc_name lshwres The DRC name of the I/O slot. dump_type lsdump Type of hardware dump.
io_slots chsyscfg, lssyscfg, and mksyscfg List of I/O slots for the partition. Each item in this list has the format: phys_loc/slot_io_pool_id/is_required Note that the attribute names are not present in the list; just their values are present. For example, U47070041076RX5L1-P2-C3/1/2/1 specifies an I/O slot with a physical location code of U47070041076RX5L1-P2-C3; it is assigned to I/O pool 2, and it is a required slot.
lpar_io_pool_ids chsyscfg, lshwres, and mksyscfg List of IDs of the I/O pools in which the partition is participating. A valid I/O pool ID is a number between 0 and the maximum number of I/O pools supported on the managed system (max_io_pools) - 1. A value of none, which indicates that the partition is not participating in any I/O pools, is also valid. lpar_keylock lssyscfg Partition keylock position. Possible values are norm (normal) or manual (manual).
max_shared_pools lshwres Maximum number of shared processing pools which are supported on the managed system. max_virtual_slots chsyscfg, lssyscfg, and mksyscfg Maximum number of virtual slots for the partition. Valid input values are 2 - 65535. The default value is 4. max_vlans_per_port lshwres Maximum number of supported VLAN IDs per virtual ethernet port. mem_region_size chhwres The memory region size, in megabytes, for the managed system.
os400_capable lssyscfg Indicates whether the managed system supports OS/400 partitions. Possible values are 0 (no) or 1 (yes). parent_slot lshwres Complete physical location code of the parent slot.
A percentage. This attribute is only valid for OS/400. pend_min_interactive lshwres pend_min_mem lshwres pend_min_procs lshwres pend_min_proc_units lshwres pend_procs lshwres pend_proc_type lshwres pend_proc_units lshwres pend_shared_procs lshwres pend_sharing_mode lshwres pend_sys_keylock lssyscfg pend_total_avail_proc_units lshwres pend_uncap_weight lshwres phys_loc lshwres Complete physical location code of the slot.
remote_lpar_id chhwres and lshwres For client adapters, this specifies the ID of the partition which has the hosting (server) virtual serial/SCSI adapter for this adapter. For server adapters, this specifies the ID of the partition which has the only client virtual serial/SCSI adapter allowed to connect to this adapter. A value of any indicates that any client virtual serial/SCSI adapter should be allowed to connect to this adapter.
service_lpar_id chsyscfg and lssyscfg For chsyscfg, this specifies the ID of the partition to be given service authority immediately. For lssyscfg, this shows the ID of the partition that currently has service authority. service_lpar_name chsyscfg and lssyscfg For chsyscfg, this specifies the name of the partition to be given service authority immediately. For lssyscfg, this shows the name of the partition that currently has service authority.
state lshwres and lssyscfg status lshwres supports_hmc chhwres and lshwres sys_ipl_attr lssyscfg sys_ipl_major_type lssyscfg sys_ipl_minor_type lssyscfg time lssyscfg total_cycles lshwres total_proc_units lshwres type_model lssyscfg uncap_weight chhwres, chsyscfg, lssyscfg, and mksyscfg unit_id lshwres unit_model lshwres unit_serial_num lshwres utilized_cycles lshwres virtual_eth_adapters chsyscfg, lssyscfg, and mksyscfg List of virtual ethernet adapters.
virtual_scsi_adapters chsyscfg, lssyscfg, and mksyscfg List of virtual SCSI adapters. Each item in this list has the format: slot_num/device_attr/remote_lpar_id/remote_lpar_name/remote_slot_num/is_required Note that the attribute names are not present in the list; only their values are present. If an attribute is optional and is not to be included, then no value would be specified for that attribute.
Glossary
CCIN: Custom Card Identification Number
CoD: Capacity on Demand
CSU: Customer Set Up
DHCP: Dynamic Host Configuration Protocol
DNS: Domain Name Server
FRU: Field Replaceable Unit
HMC: Hardware Management Console
HSL: High Speed Link
MTMS: Machine Type Machine Serial
PTF: Program Temporary Fix
SMA: Switch Mode Adapter
VPD: Vital Product Data
Index Numerics 5250 client 99 5250 console 46 5250 Emulator 57 5250 OLTP 155 5250 virtual terminal 99 570 node 52 7310-C03 47 7310-CR2 47 7316 48 7316-TF2 48 A Abnormal IPL 87 Additional ethernet LAN 50 Administrator mailing address 128 Advanced Operator 142, 238 Advanced System Manager 55 AIX boot mode 64 AIX error log ID 101 AIX partition 2 Alternate Console 168 Alternate IPL 140 ASM interface 73 Autodetection 117 Automatic allocation 119 Automatic reboot function 96 Automatically boot Partition 170 Auto
Dedicated processors 149, 185 Dedicated Service Tools 199 Default gateway address 98 Default gateway device 120 Default partition profile 140 Default profile 174 Delayed partition shut down 87 Deleting a partition 201 Deleting a partition profile 202 Deleting a user 238 Desired 3 Desired memory 148 Desired processing units 152 Desired processors 150 Desktop HMC 47 DHCP 53, 98 DHCP client 98 DHCP server 50, 55, 98, 118 Dial prefix values 130 Dial-up from the local HMC 129 Digital certificate 41 Direct Operat
I I/O resources 3, 155 I/O slot view 66 i5/OS hang 86 i5/OS partition 2 IBM Service 3 IBM Service and Support 54 ibm5250 57 Identify LED processing 104 Immediate partition shut down 87 Immediate reboot 193 iNav 94 Inbound connectivity settings 104 Initial tour of the HMC desktop 56 Initialize profile data 73 Initializing 62, 213 Install hardware 100 Install/Add/Remove/hardware 103 Installing the HMC 50 Interactive capacity 155 Inventory scout profile configuration 100 Inventory Scout Service 212 Inventory s
N Native IO support 5 Net menu 57 Network configuration 114 No connection 62, 214 Nonroutable IP address ranges 119 Non-volatile random access memory 2 Not configured (not-bootable) 180 Number of connected minutes 104 Number of disconnected minutes 104 Number of minutes between outages 104 NVRAM 2, 248 O Object Manager Security 238 Object manager security 235 OEM display 48 Open Terminal Window 75 Opera Browser 57 Operating 62, 213 Operation console device 164 Operations Console 2, 32 Operator 238 Opticonn
Redbooks Web site Contact us xiii Redundant HMC configurations 245 Remote access 123 Remote management 205 Remote service 99 Remote Service and Support Facility 164, 200 Remote support 104 Remote Support Facility 104, 129 Remote Support Information panel 128 Remote support request 104 Remote Technical Support 100 Remove managed system connection 71 Remove profile data 73 Repair serviceable event 100–102 Replace parts 100, 103 Required 3 Reset HMC connection 71 Resource allocation printout 253 Resource confi
U Unassigned cards/slots 66 Uncapped partition 185 Uncapped shared processor 149 Uncapped shared processor partition 154 Update managed system password 73 USB ports 47 User Administrator 238 Uses of partition profiles 140 Using the HMC as the console 165 V Validated partition profile 3 View console events 98 View Guided Setup Wizard Log 136 Viewer 238 Viewing user information 238 Virtual Adapter 75 Virtual Adapters 94 Virtual console 46 Virtual Devices 182 Virtual Ethernet 158 Virtual ethernet 92 Virtual e
Back cover This IBM Redbook gives a broad understanding of the new System i5 architecture as it applies to logically partitioned System i5 systems.