Redbooks Paper

Hardware Management Console (HMC) Case Configuration Study for LPAR Management

Dino Quintero
Sven Meissner
Andrei Socoliuc

This IBM® Redpaper provides Hardware Management Console (HMC) configuration considerations and describes case studies about how to use the HMC in a production environment. This document does not describe how to install the HMC or how to set up LPARs. We assume that you are familiar with the HMC.
Topics covered include automation and high availability considerations for HMCs.

Introduction and overview

The Hardware Management Console (HMC) is a dedicated workstation that allows you to configure and manage partitions. A graphical user interface (GUI) is provided for performing maintenance operations.
Table 1 Types of HMCs

  Type                    Supported managed systems   HMC code version
  7315-CR3 (rack mount)   POWER4 or POWER5 (1)        HMC 3.x, HMC 4.x, or HMC 5.x
  7315-C04 (desktop)      POWER4 or POWER5 (1)        HMC 3.x, HMC 4.x, or HMC 5.x
  7310-CR3 (rack mount)   POWER5                      HMC 4.x or HMC 5.x
  7310-C04 (desktop)      POWER5                      HMC 4.x or HMC 5.x

  (1) Licensed Internal Code (FC0961) is needed to upgrade these HMCs to manage POWER5 systems.

A single HMC cannot be used to manage a mixed environment of POWER4 and POWER5 systems. The HMC 3.x code supports only POWER4 managed systems.
The maximum number of HMCs supported by a single POWER5 managed system is two. The number of LPARs managed by a single HMC has increased from the earlier versions of the HMC to the currently supported release, as shown in Table 3.

Table 3 HMC history

  HMC code   No. of HMCs   No. of servers   No. of LPARs   Other information
  4.1.x      1             4                40             iSeries only
  4.2.0      2             16               64             p5 520, 550, 570
  4.2.1      2             32               160            OpenPower 720
  4.3.1      2             32               254            p5 590, 595
  4.4.0      2             32               254            p5 575, HMC 7310-CR3/C04
  4.5.x
menus. However, not all POWER5 servers support this mechanism of allocation; currently, the p575, p590, and p595 servers support only DHCP.

Note: Either eth0 or eth1 can be a DHCP server on the HMC.

HMC to partitions: The HMC requires a TCP/IP connection to communicate with the partitions for functions such as dynamic LPAR and Service Focal Point.

Service Agent (SA) connections: SA is the application running on the HMC that reports hardware failures to the IBM support center.
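A quick way to verify the HMC-to-partition connection used for dynamic LPAR is to query the RMC subsystem from the partition. The following is a minimal sketch, assuming the RSCT filesets shipped with AIX 5L are installed on the partition:

# On the AIX partition: list the management server (HMC) known to RMC.
# An empty list usually means that dynamic LPAR operations will not work.
lsrsrc IBM.ManagementServer Hostname ManagerType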
multi-threading (SMT). SMT is a feature supported only by AIX 5.3 and by Linux at an appropriate maintenance level.

Multiple operating system support: Logical partitioning allows a single server to run multiple operating system images concurrently. On a POWER5 system, the following operating systems can be installed: AIX 5L™ Version 5.2 ML4 or later, SUSE Linux Enterprise Server 9 Service Pack 2, Red Hat Enterprise Linux ES 4 QU1, and i5/OS.
To calculate your desired and maximum memory values accurately, we recommend that you use the LPAR Validation Tool (LVT). The tool is available at:
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm

Figure 1 shows an example of how you can use the LPAR Validation Tool to verify a memory configuration. In Figure 1, there are four partitions (P1..P4) defined on a p595 system with a total amount of 32 GB of memory.
The memory allocated to the hypervisor is 1792 MB. When we change the maximum memory parameter of partition P3 from 4096 MB to 32768 MB, the memory allocated to the hypervisor increases to 2004 MB, as shown in Figure 2.

Figure 2 Memory used by hypervisor

Figure 3 is another example of using the LVT to verify an invalid memory configuration. Note that the total amount of allocated memory is 30 GB, but the maximum limits of the partitions require a larger amount of hypervisor memory.
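On an installed system, the memory currently reserved by the hypervisor (firmware) can also be checked from the HMC command line. This is a sketch only; the managed system name p550_itso1 is the one used in the scenarios later in this paper, and the exact attribute names in the output vary between HMC releases:

# Show system-level memory attributes, including the memory reserved by firmware
lshwres -r mem -m p550_itso1 --level sys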
Micro-partitioning

With POWER5 systems, increased flexibility is provided for allocating CPU resources by using micro-partitioning features. The following parameters can be set up on the HMC:
– Dedicated or shared mode, which allows a partition to be allocated either full CPUs or partial processing units. The minimum CPU allocation unit for a partition is 0.1.
– Minimum, desired, and maximum limits for the number of CPUs allocated to a dedicated partition.
Note: Take into consideration that changes in the profile do not take effect until you power off and reactivate your partition. Rebooting the operating system is not sufficient.

Capacity on Demand

Capacity on Demand (CoD) for POWER5 systems offers multiple options, including:

Permanent Capacity on Demand:
– Provides system upgrades by activating processors and/or memory.
– No special contracts and no monitoring are required.
– The purchase agreement is fulfilled using activation keys.
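The CoD resources available on a managed system can be checked from the HMC command line with lscod. The following is a sketch; the flag values should be verified with lscod --help on your HMC release, and p550_itso1 is the managed system name used later in this paper:

# On/Off CoD capacity information for processors and memory
lscod -m p550_itso1 -t cap -r proc -c onoff
lscod -m p550_itso1 -t cap -r mem -c onoff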
HMC sample scenarios

The following examples illustrate POWER5 advanced features.

Examples of using capped/uncapped, weight, dynamic LPAR, and CoD features

Our case study describes different ways to take advantage of the micro-partitioning features and CoD, assuming a failover/fallback scenario based on two independent servers. The scenario does not address a particular clustering mechanism used between the two nodes. We describe the operations by using both the WebSM GUI and the command line interface.
Figure 4 Initial configuration: the first p550 (2 CPUs, 8 GB) hosts nils (production, 2 dedicated CPUs, 7 GB); the second p550 (4 CPUs, 8 GB) hosts julia (standby, 0.2 shared CPU, 1024 MB), oli (production, 1 dedicated CPU, 5120 MB), and nicole_vio (0.8 shared CPU, 1024 MB). Nodes nils and julia form a cluster, and both servers are managed by HMC 1 and HMC 2.

Table 5 shows our configuration in detail. Our test system has only one 4-pack DASD available; therefore, we installed a VIO server to have sufficient disks available for our partitions.
Table 6 Memory allocation

  Partition name   Min (MB)   Desired (MB)   Max (MB)
  nicole_vio       512        1024           2048
  oli              1024       5120           8192
  julia            512        1024           8192

Enabling ssh access to the HMC

By default, the ssh server on the HMC is not enabled. The following steps configure ssh access for node julia on the HMC. The procedure allows node julia to run HMC commands without providing a password.

Enable remote command execution on the HMC.
HMC Configuration. In the right panel, select Customize Network Setting, click the LAN Adapters tab, choose the interface used for remote access, and click Details. In the new window, select the Firewall tab and check that the ssh port is allowed access (see Figure 6).

Figure 6 Firewall settings for the eth1 interface

Install the ssh client on the AIX node. The packages can be found on the AIX 5L Bonus Pack CD. To get the latest release packages, access the following URL: http://sourceforge.
openssh.msg.en_US  3.8.0.5302  C  F  Open Secure Shell Messages

Log in with the user account used for remote access to the HMC. Generate the ssh keys using the ssh-keygen command. In Example 2, we used the root user account and specified the RSA algorithm for encryption. The security keys are saved in the /.ssh directory.

Example 2 ssh-keygen output

root@julia/>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.
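To let node julia run HMC commands without a password, the public key generated above still has to be registered with the hscroot user on the HMC. The following is a minimal sketch; it assumes the key pair was created in /.ssh as shown above, uses the HMC host name hmctot184 from our environment, and the mkauthkeys syntax should be verified on your HMC release:

# Register the public key with hscroot on the HMC
# (you are prompted for the hscroot password one last time)
KEY=$(cat /.ssh/id_rsa.pub)
ssh hscroot@hmctot184 "mkauthkeys --add '$KEY'"
# Afterwards, a test command should run without a password prompt:
ssh hscroot@hmctot184 lshmc -V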
Now, we force node nils to fail and prepare to start the takeover scenario (see Figure 7).

Figure 7 Takeover scenario: node nils (production, 2 dedicated CPUs, 7 GB) on the first p550 (2 CPUs, 8 GB) fails, and node julia on the second p550 (4 CPUs, 8 GB) takes over production with 2 shared CPUs and 7 GB, alongside oli (production, 1 dedicated CPU, 5120 MB) and nicole_vio (VIO server, 0.8 shared CPU, 1024 MB).
Figure 8 Activating the On/Off CoD

Activating On/Off CoD using the command line interface. Example 4 shows how node julia activates 2 CPUs and 8 GB of RAM for 3 days by running the chcod command on the HMC via ssh.

Example 4 Activating CoD using the command line interface

CPU:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -r proc -q 2 -d 3 -m p550_itso1 -o a -c onoff"

Memory:
root@julia/.
Note: If you use Reserve CoD instead of On/Off CoD to temporarily activate processors, you can assign the CPUs to shared partitions only.

In order for node julia to operate with the same resources as node nils had, we have to add 1.8 processing units and 6 GB of memory to this node.

Allocation of processor units:
– Using the graphical user interface: In the Server and Partition panel on the HMC, right-click partition julia and select Dynamic Logical Partitioning → Processor Resources → Add.
Example 5 Perform the CPU addition from the command line

root@julia/>lsdev -Cc processor
proc0 Available 00-00 Processor
root@julia/>ssh hscroot@hmctot184 lshwres -r proc -m p550_itso1 --level \
> lpar --filter "lpar_names=julia" -F lpar_name:curr_proc_units:curr_procs \
> --header
lpar_name:curr_proc_units:curr_procs
julia:0.2:1
root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o a -p julia \
> -r proc --procunits 1.8
Figure 10 Add memory to a partition

– Using the command line: Example 6 shows how to allocate 6 GB of memory to partition julia.
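As an illustrative sketch of what such a command looks like (not the literal Example 6; it assumes the same HMC host, managed system, and partition names as the previous examples, with the amount given in megabytes):

root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o a -p julia \
> -r mem -q 6144
# 6144 MB = 6 GB; -o a adds the resource dynamically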
When node nils is back and ready to reacquire the applications running on node julia, we reduce the memory and CPU to their initial values and turn off CoD. In order for node julia to operate with its initial resources, we have to remove 1.8 processing units and 6 GB of memory from this partition.

1. Perform dynamic LPAR operations to decrease the CPU units and memory capacity of the target partition.
– Using the command line interface.

Note: When allocating memory to a partition or moving it between partitions, you can increase the time-out limit of the operation to prevent a failure response before the operation completes. Use the Advanced tab of the dynamic LPAR memory menu (see Figure 10 on page 20) to increase the time-out limit.

Example 7 shows how to deallocate 6 GB of memory from node julia via the command line.
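A sketch of such a removal command (not the literal Example 7; it carries over the same host and partition names, and the -w option, which sets the operation time-out in minutes, should be verified with chhwres --help on your HMC release):

root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o r -p julia \
> -r mem -q 6144 -w 15
# -o r removes 6144 MB (6 GB); -w 15 raises the time-out to 15 minutes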
Figure 12 Perform the deallocation of the CPU units

– Using the command line interface to remove 1.8 processing units from node julia is shown in Example 8.

Example 8 Deallocating the CPU

root@julia/>lsdev -Cc processor
proc0 Available 00-00 Processor
proc2 Available 00-02 Processor
root@julia/>ssh hscroot@hmctot184 lshwres -r proc -m p550_itso1 --level \
> lpar --filter "lpar_names=julia" -F lpar_name:curr_proc_units:curr_procs \
> --header
lpar_name:curr_proc_units:curr_procs
julia:2.0:2
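The removal itself follows the same pattern as the addition in Example 5. The following is a sketch, not the literal continuation of Example 8:

root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o r -p julia \
> -r proc --procunits 1.8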
2. Deactivating the On/Off CoD for CPU and memory. For an example of the graphical interface, refer to the menu presented in Figure 8 on page 17 and the section “Activating On/Off CoD using the command line interface” on page 17. Example 9 shows how to use the command line interface to deactivate the processor and memory CoD resources.
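A sketch of such a deactivation follows (not the literal Example 9; the -o d operation mirrors the activation in Example 4 and should be verified with chcod --help on your HMC release):

CPU:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -r proc -m p550_itso1 -o d -c onoff"

Memory:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -r mem -m p550_itso1 -o d -c onoff"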
Figure 13 Toggle the Capped/Uncapped option

You also have to consider the number of virtual processors in order to be able to use all of the CPUs in the shared processor pool. In our example, after the CoD operation, we have 3.0 available processing units in the shared processor pool and 1 dedicated processor allocated to node oli. The partition nicole_vio uses 0.8 processing units and is capped. Partition julia uses 0.2 units and 1 virtual processor, and can use 1 physical CPU.
Example of using two uncapped partitions and the weight

For the example of two uncapped partitions using the same shared processor pool, we use the configuration described in Table 7.

Table 7 CPU allocation table

  Partition name   CPU (Min/Des/Max)   Virtual processors (Min/Des/Max)   Dedicated/Shared   Capped/Uncapped   Weight
  nicole_vio       1/1/1               N/A                                Dedicated          N/A               N/A
  oli              0.1/1.0/2.0         1/4/4                              Shared             Uncapped          128
  julia            0.1/1.0/2.0         1/4/4                              Shared             Uncapped          128
(per-logical-CPU statistics from the topas -L output)

We changed the weight for partition oli to the maximum value of 255, while partition julia is set to 128. The operation can be performed dynamically. To access the GUI menus, from the Server Management menu of the HMC, right-click the partition name and select Dynamic Logical Partitioning → Processor Resources → Add (as shown in Figure 14).
(per-logical-CPU statistics from the topas -L output)

In Example 13 and Example 14, the physc parameter has different values for the two nodes.

Example 14 Output of topas -L on node julia

Interval: 7   Logical Partition: julia   Tue Mar 31 17:49:57 1970
Psize: 3   Shared SMT OFF   Online Memory: 512.0
Ent: 1.
Node oli has an increased processing load during the workday (7 AM to 7 PM) and is idle most of the time outside this interval. Partition julia has an increased processing load from 10 PM to 5 AM and is idle the rest of the time. Since both partitions are uncapped, we only need to reallocate a portion of the memory to partition julia during the idle period of partition oli. This example shows how to implement the dynamic LPAR operations for memory by using the HMC scheduler.
Figure 16 Selecting the scheduled operation

3. Next, in the Date and Time tab, select the time for the beginning of the operation and a time window in which the operation can be started, as shown in Figure 17.

Figure 17 Selecting the starting window of the scheduled operation

4. Click the Repeat tab and select the days of the week for running the scheduler. We selected each day of the week for an infinite period of time, as shown in Figure 18 on page 31.
Figure 18 Selecting the days of the week for the schedule

5. Click the Options tab and specify the details of the dynamic LPAR operation, as shown in Figure 19.

Figure 19 Specifying the details of the dynamic LPAR operation

Click the Save button to activate the scheduler.

Note: By default, the time-out period for the dynamic LPAR operation is 5 minutes. In our test case, the memory reallocation was performed for 2 GB of RAM.
6. Repeat steps 1 through 5 to create the reverse operation, specifying julia as the target partition for the scheduled operation and 06:00:00 AM as the start window of the scheduler.

7. After setting up both operations, their status can be checked in the Customize Scheduled Operations window for each of the nodes, as shown in Figure 20.

Figure 20 Current scheduled operations for node oli
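The same reallocation can also be driven from one of the partitions instead of the HMC scheduler, by scripting the move over ssh and running it from cron. The following is a sketch under the assumption that the passwordless ssh access configured earlier is in place and that the chhwres move operation (-o m) is available on your HMC release:

# crontab entries for root on node julia (hypothetical schedule):
# at 22:00 move 2 GB from oli to julia, at 06:00 move it back
0 22 * * * ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r mem -o m -p oli -t julia -q 2048"
0 6 * * * ssh hscroot@hmctot184 "chhwres -m p550_itso1 -r mem -o m -p julia -t oli -q 2048"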
Comparing profile values with current settings

If you perform a dynamic LPAR operation and want to make the change permanent, you have to update the appropriate profile. Otherwise, after the next shutdown and power-on of the LPAR, the partition will revert to its old properties, which might not be desired. The script in Example 15 compares the minimum, desired, and maximum CPU and memory values of the profiles with the current settings. You can use it to monitor these settings.
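As an illustration of the approach (a minimal sketch rather than the full Example 15; the HMC names hmc1 and hmc2 and the hscroot user are assumptions carried over from our environment, and attribute names can vary slightly between HMC releases):

#!/bin/ksh
# Sketch: list profile values next to current values for every managed system
# known to each HMC (assumes the passwordless ssh access configured earlier).
for HMC in hmc1 hmc2; do
  for SYS in $(ssh hscroot@$HMC "lssyscfg -r sys -F name"); do
    echo "### $HMC / $SYS : profile values"
    ssh hscroot@$HMC "lssyscfg -r prof -m $SYS \
      -F lpar_name:name:min_mem:desired_mem:max_mem:min_procs:desired_procs:max_procs"
    echo "### $HMC / $SYS : current values"
    ssh hscroot@$HMC "lshwres -r mem -m $SYS --level lpar \
      -F lpar_name:curr_min_mem:curr_mem:curr_max_mem"
    ssh hscroot@$HMC "lshwres -r proc -m $SYS --level lpar \
      -F lpar_name:curr_min_procs:curr_procs:curr_max_procs"
  done
done
# A complete script would join the two listings per partition and flag any
# differences between the prof= and curr= values, as in the sample output below.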
Here is a sample output from the script shown in Example 15 on page 33.

Example 16 Monitoring sample script output

(For each HMC, managed system, and partition, the output lists the profile values (prof=) next to the current values (curr=) of min_mem, des_mem, max_mem, min_procs, des_procs, and max_procs; the excerpt covers partitions green2 and green3 on managed system cec-green attached to hmc2.)

In Example 16 on page 34, you can see that the LPAR blue6 has 2 GB m
Working with two HMCs eases the planning of HMC downtime for software maintenance, because no downtime is needed: while the HMC code update is performed on one HMC, the other one continues to manage the environment. This allows one HMC to run at the new fix level while the other HMC continues to run the previous one. You should take care to move both HMCs to the same level to provide an identical user interface.
Note: Either eth0 or eth1 can be a DHCP server on the HMC.

The managed system will be automatically visible on the HMCs. This is our recommended way to provide high availability with HMCs, and it is supported by all POWER5 systems. Two HMCs on the same network using static IP addresses are shown in Figure 23.
A new system is shipped with default IP addresses. You can change these IP addresses by connecting your laptop to either the T1 or T2 port of the CEC. Assign your laptop's interface an IP address in the same network as the respective network adapter of the CEC: for T1 this is network 192.168.2.0/24, and for T2 it is 192.168.3.0/24. Do not use an IP address that the CEC already has assigned.
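From a Linux or AIX laptop, this could look like the following sketch (192.168.2.10 is just an arbitrary free address; on a Windows laptop you would set the equivalent static address in the adapter properties):

# Laptop connected to the T1 port: use any free address in 192.168.2.0/24
ifconfig eth0 192.168.2.10 netmask 255.255.255.0 up
# Then point a web browser at the service processor's HTTPS address on that
# network to reach the ASMI login (see the ASMI Setup Guide for the defaults).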
For more detailed information, refer to “Access to the ASMI menu” on page 40. On HMC1, the managed system becomes automatically visible. On HMC2, the managed system must be added manually. To add a managed system, select the Server Management bar and choose Add Managed System(s), as shown in Figure 25.
Appendix

The following sections contain additional information to be considered when dealing with HMCs.
Figure 26 Accessing the ASMI menu using WebSM

For further information related to accessing the ASMI menus, refer to the ASMI Setup Guide at:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf

Configuring a secure connection for WebSM

The following example describes how to set up a secure WebSM connection for a Windows client and a cluster of two HMCs.

Note: Before configuring the WebSM client, ensure that your name resolution works properly.
Access the secure WebSM download page and run the InstallShield program for your platform:
http://<HMC hostname>/remote_client_security.html

Verify the WebSM installation by starting the WebSM client program and connecting to the HMC. The next steps describe how to configure the secure connection to the WebSM server. The following steps need to be performed from the HMC console, because the Security Management panel is not available via WebSM:

Choose one of the HMCs as the Certificate Authority.
For our example, we perform the following actions:
– Enter an organization name: ITSO.
– Verify that the certificate expiration date is set to a future date.
– Click the OK button; a password is requested at the end of the process. The password is used each time you perform operations on the Certificate Authority server.

The next step is to generate the authentication keys for the WebSM clients and servers:
– Private keys will be installed on the HMCs.
At this menu:
– Add both HMCs to the list of servers (the current HMC should already be listed): hmctot184.itso.ibm.com, hmctot182.itso.ibm.com
– Enter the organization name: ITSO.
– Verify that the certificate expiration date is set to a future date.

Install the previously generated private key on the current HMC. Select System Manager Security → Server Security → Install the private key ring file for this server. Then select the directory /var/websm/security/tmp as the input device, as shown in Figure 29.
Figure 30 Copying the private key ring file to removable media

Tip: To transfer the security keys from the HMC, you can use the floppy drive or a flash memory device. Plug the device into the USB port before running the copy procedure, and it will show up in the menu as shown in Figure 30.

Copy the private key from the removable media to the second HMC. Insert the removable media into the second HMC. From the HMC menu, select System Manager Security → Server Security.
Figure 31 Installing the private key ring file for the second HMC

Copy the public key ring file to removable media for installing the key file on the client PC. Select System Manager Security → Certificate Authority, and in the right panel, select Copy this Certificate Authority Public Key Ring File to removable media. A dialog panel is displayed (see Figure 32 on page 47).
Figure 32 Save the public key ring file to removable media

You will be provided with a second window to specify the format of the file to be saved. Depending on the platform of the WebSM client, you can select either:
– HMC or AIX client: A tar archive is created on the selected media.
– PC client: A regular file is created on the selected media. This option requires formatted media.

Note: Two files containing the public key ring are saved on the media: SM.pubkr and smpubkr.zip.
Figure 33 Select the security option for the authentication

Select one of the two options:
– Always use a secure connection: Only an SSL connection is allowed.
– Allow the user to choose secure or unsecure connections: A checkbox is displayed when the WebSM client connects to the HMC, allowing you to choose a secure (SSL) or an unsecure connection.

Verify the status on the HMC to ensure that it is configured and the private key ring is installed, as shown in Figure 34.
Next, go to each of your remote clients and copy the public key ring file into the “codebase” directory under WebSM. When you log in via WebSM, you are informed whether the SSL connection is available. Verify the Enable secure communication checkbox shown in Figure 35.

Figure 35 WebSM logon panel

Enabling NTP on the HMC

The pSeries and iSeries Hardware Management Console (HMC) supports the Network Time Protocol (NTP), which allows an administrator to synchronize time across several systems.
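NTP can be configured from the HMC restricted shell with the chhmc command. This is a sketch only; 192.168.100.5 is a placeholder for your NTP server, and the exact options should be checked with chhmc --help on your HMC release:

# Add an NTP server and enable the NTP client on the HMC
chhmc -c xntp -s add -a 192.168.100.5
chhmc -c xntp -s enable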
Attention: Before updating the microcode of the system, we recommend that you carefully read the installation notes of the version you plan to install. For further information, refer to the microcode download page for eServer pSeries systems at:
http://techsupport.services.ibm.com/server/mdownload

The following procedure is an example of a microcode update for a p550 system attached to the HMC.
Figure 36 Licensed Internal Code Updates menus on the HMC

Note: In our example, we choose to upgrade to a new release. When updating the firmware level within the same release, choose Change Licensed Internal Code for the same release.

2. Select the target system (see Figure 37) and click OK.
3. We downloaded the microcode image to an FTP server, so we specify FTP Site as the LIC repository (Figure 38).

Figure 38 Specify the microcode location

4. In the details window, enter the IP address of the FTP server, the user name and password for access, and the location of the microcode image (see Figure 39). After connecting to the FTP server, a license acceptance window is displayed. Confirm the license agreement and continue with the next step.
5. You are provided with a new window that displays the current and the target release of the firmware (see Figure 40). Click OK to start the upgrade process.

Figure 40 Upgrade information

The update process might take 20 to 30 minutes. When the update operation ends, the status Completed is displayed in the status window, as shown in Figure 41.

Figure 41 Update microcode completed

Referenced Web sites

Latest HMC code updates:
http://techsupport.services.ibm.
Dual HMC cabling on the IBM 9119-595 and 9119-590 Servers:
http://www.redbooks.ibm.com/abstracts/tips0537.html?Open

ASMI Setup Guide:
http://publib.boulder.ibm.com/infocenter/eserver/v1r2s/en_US/info/iphby/iphby.pdf
The team that wrote this Redpaper

This Redpaper was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.

Dino Quintero is a Consulting IT Specialist at the ITSO in Poughkeepsie, New York. Before joining the ITSO, he worked as a Performance Analyst for the Enterprise Systems Group and as a Disaster Recovery Architect for IBM Global Services. His areas of expertise include disaster recovery and pSeries clustering solutions.
Yvonne Lyon, International Technical Support Organization, Austin Center
Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
This document was created or updated on February 23, 2006.

Send us your comments in one of the following ways:
– Use the online Contact us review redbook form found at:
  ibm.com/redbooks
– Send your comments in an e-mail to:
  redbook@us.ibm.com
– Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. JN9B Building 905, 11501 Burnet Road
  Austin, Texas 78758-3493 U.S.A.