Configuring and Migrating Memory on vPars
A Technical White Paper

Contents
• Introduction
• Memory Categories
• Interleaved and Cell Local Memory
• ILM and CLM Guidelines
......
Introduction

Virtual Partitions (vPars) is a feature on HP-UX 11i-based systems that allows a system administrator to divide the hardware resources of a single L-class, N-class, or cell-based hard partition into one or more logical partitions. This is accomplished through a software layer called the vPars monitor, which resides between the operating system kernel and the firmware. The vPars monitor controls the ownership of the processor, memory, and I/O resources on the system.
Memory Categories

The vPars A.01.xx, A.02.xx, and A.03.xx release streams support partitioning on HP-UX 11i v1 systems, which run only on uniform memory systems (interleaved memory on cellular systems). The vPars A.04.xx release stream supports partitioning on HP-UX 11i v2 systems, which support memory-locality-based optimizations on non-uniform memory access (NUMA) systems. The vPars A.05.xx release supports partitioning on HP-UX 11i v3 systems, which allow online addition and deletion of memory.
4. Create each partition using vparcreate and assign the required amount of ILM and CLM to the partition using the memory options of the vparcreate and vparmodify commands.

# vparcreate -p <vpar_name> -a mem::<size> -a cell:<cellID>:mem::<size> ...
or
# vparmodify -p <vpar_name> -a mem::<size> -a cell:<cellID>:mem::<size> ...

5. If needed, assign CPUs so that the partition has CPUs and memory from the same locality, using the vparcreate and vparmodify commands.

# vparcreate -p <vpar_name> -a cell:<cellID>:cpu::<count> ...
Note that in the above examples, we have merely specified a memory size without any particular cell information. Hence the memory is allocated from the ILM present on the system.
as well. Assigning processors from one cell and cell local memory from another cell can lead to suboptimal performance because of the increased distance between the processors and their memory. The HP-UX kernel optimizes and performs better when it has processors, memory, and, if possible, I/O from the same locality. For more information on processor configuration in a vPars environment, refer to the CPU Configuration Guidelines for vPars white paper [3].

Base and Floating Memory

Starting with the A.05.
On the other hand, changing the amount of memory that is base and floating does not require a system or monitor reboot. Any available memory in the vPars monitor can be used as either base or floating when it is assigned to a partition. When the partition boots, or when the memory is added online, the kernel receives the specified amount as base or floating. Once the memory is removed from the partition and becomes available again, it can be used as either base or floating when it is assigned to other partitions.
To add 1 GB as base and 512 MB as floating CLM from cell 0 to vpar1, and 512 MB as base and 512 MB as floating CLM from cell 1 to vpar2, we would do the following:

# vparmodify -p vpar1 -a cell:0:mem::1024 -a cell:0:mem::512:f
# vparmodify -p vpar2 -a cell:1:mem::512 -a cell:1:mem::512:f

Assuming both vpar1 and vpar2 are live, to remove 512 MB of floating ILM from vpar2 and add it to vpar1 as base ILM, we would do the following:

# vparmodify -p vpar2 -d mem::512:f
# vparmodify -p vpar1 -a mem::512

To remov
a system with all base memory might perform better compared with another system with the same amount of memory but divided between base and floating memory.

• Some kernel sub-systems and applications do their allocations based on the memory discovered at boot time. These subsystems or applications might size their caches based on the amount of base memory available to the kernel at boot time and might not scale those caches when more base memory is later added online.
Memory Granules – Containers for Physical Memory

The granule (also known as a segment) is the unit in which the user can assign memory resources to, or remove them from, a partition. In the A.01.xx, A.02.xx, and A.03.xx vPars releases, the vPars software fixes the granule size at 64 MB and gives the system administrator no flexibility to change it. The vPars A.04.xx and A.05.
3. Create the first partition and the vPars database, and specify the ILM and CLM granule sizes using the -g option. On HP Integrity servers, specify the y attribute to update the firmware NVRAM with the granule size.

# vparcreate -p <vpar_name> -D <db_name> -g ilm:<size>[:y] -g clm:<size>[:y] ...

4. When ready to move to a new granule size, shut down the system and boot the monitor with the new database.

To illustrate further, let us consider the same setup discussed in previous chapters.
• On HP Integrity servers, if the system contains a large number of granules, the boot time of partitions running the prior HP-UX 11i v2 release might increase. Hence, if the partition is running HP-UX 11i v2 and boot speed is a strong requirement, choose a large granule size. A large number of granules does not affect the boot time of partitions running HP-UX 11i v3 and might not affect the boot time of partitions running future HP-UX 11i v2 releases.
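Whatever granule size is chosen, memory is assigned to partitions in whole granules, so a requested amount corresponds to a whole number of granules. The shell sketch below shows the rounding arithmetic under the assumption that a request is rounded up to the next granule boundary; the sizes are invented for illustration and this is not vPars code:

```shell
# Simplified model of granule rounding (invented sizes, not vPars code).
granule=256      # granule size in MB (assumed)
request=1000     # amount requested, e.g. via a hypothetical -a mem::1000

# Round the request up to the next whole-granule boundary.
granules_needed=$(( (request + granule - 1) / granule ))
assigned=$(( granules_needed * granule ))
echo "request=${request} MB -> ${granules_needed} granules = ${assigned} MB"
```

With a 256 MB granule, a 1000 MB request occupies four granules, i.e. 1024 MB; a larger granule size wastes more memory per request but yields fewer granules for the firmware and monitor to track.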
Memory Resource Assignment

In a vPars system, by default, a partition does not get any memory when it is created. Hence, the memory required for a partition must be explicitly added. There are two parts to a memory assignment: the amount, and the actual address ranges that make up that amount.
Memory Allocation and Binding – Live Database

Once booted, the vPars monitor maintains an in-memory copy of the database file. Any resource modification to a partition in this live database is validated against the available resources. Hence, when ILM or CLM is added, the vPars monitor checks whether the requested amount is available. If it is not, the system administrator learns as soon as the vPars command returns that less memory than requested was allocated.
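The availability check can be pictured with a small shell sketch. The numbers are invented and the logic is a deliberately simplified stand-in for the monitor's internal validation, not actual vPars code:

```shell
# Simplified illustration of the live-database availability check
# (invented numbers; not the vPars monitor's actual logic).
available=1024    # MB of ILM currently free in the live database
requested=1536    # MB asked for, e.g. via vparmodify -a mem::1536

if [ "$requested" -gt "$available" ]; then
  echo "only ${available} MB of ${requested} MB could be allocated"
else
  echo "allocated ${requested} MB"
fi
```

The point is simply that the decision is made against the live database at command time, which is why the administrator finds out about a shortfall as soon as the command returns.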
to delete to optimize the deletion time. Hence, the granules selected for deletion need not be in any specific order. The following are some of the side effects of memory granule binding:

1. As is evident from the example, because memory binding is dynamic, the ILM and CLM address ranges associated with a partition can change across reboots, deletions, and additions. Hence, the system administrator should not rely on a partition getting the same memory address ranges.

2.
User Specified Ranges

The previous chapter described how the system administrator can specify the amount, type, and locality of the memory required for the partition and let the vPars monitor choose the ranges that become part of the partition. Instead of letting the vPars monitor pick the ranges, the system administrator can explicitly specify one or more address ranges, known as user specified ranges, within which all or a portion of the requested memory must reside.
the user has already explicitly bound 512 MB of address range as base and 512 MB of address range as floating.

3. When the partition is live, adding or deleting a user specified range increases or decreases the amount of memory that the partition owns. In the example above, if vpar1 is live, at the end of the operation vpar1 will own 1.5 GB of base and 1.5 GB of floating ILM.

4.
Memory Migration Management

This chapter describes the tools available to the system administrator to manage and monitor memory migration before, during, and after the operation:

• vparstatus command output (-v and -A options).
• GlancePlus performance monitor (gpm -rpt MemoryReport).
• EVM(5) event management progress logs.
• Cancel operation in the vparmodify command (-C option).
2. Find the amount of available memory in each locality using the vparstatus -A output. The amount selected from each locality to add should be less than or equal to this available amount.

Monitoring the Status of an Online Operation

The vparstatus -v output shows the status of the last initiated CPU or memory migration operation. In the vparstatus output, this appears under the section called "OL* Details".
At each step, the appropriate commands are executed to look at the memory usage and monitor the progress of the operation. Only the relevant output from each command is shown.

Experimental Setup

The setup used for this experiment is a system with 12 GB of ILM and three partitions: vpar1 with 2 GB of base memory and 1 GB of floating memory, vpar2 with 2 GB of base memory and 1 GB of floating memory, and vpar3 with 3 GB of base memory and the remaining memory assigned as floating memory.
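vpar3's floating share follows directly from the totals above (ignoring, for simplicity, any memory consumed by the monitor itself, which the Memory Accounting chapter covers):

```shell
# Floating memory left for vpar3 in the experimental setup.
total=$(( 12 * 1024 ))        # 12 GB of ILM, in MB
vpar1=$(( 2048 + 1024 ))      # vpar1 base + floating
vpar2=$(( 2048 + 1024 ))      # vpar2 base + floating
vpar3_base=3072               # vpar3 base
echo "vpar3 floating: $(( total - vpar1 - vpar2 - vpar3_base )) MB"
```

So vpar3 receives roughly 3 GB of floating memory in this configuration.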
Memory Usage on vpar2

The vparstatus output below shows the portion of vpar2's total memory that is floating. The GlancePlus Memory Report confirms that the partition has just 13 MB of free memory and that the kernel is writing the contents of pages to disk (Paged Out is 12760 KB) to free up memory for applications.

vpar2# vparstatus -v -p vpar2
[Memory Details]
......
ILM Total (MB): 3072 (Floating 1024)
......
Operation: Memory Deletion
Status: PENDING

vpar1# vparstatus -v -p vpar1
[Memory Details]
......
ILM Total (MB): 2048 (Floating 0)
......
[OL* Details]
Sequence ID: 1
Operation: Memory Deletion
Status: PASS

vpar1# gpm -rpt MemoryReport &
vpar1# evmget | evmshow
OLAD: The olad infrastructure is locked and ready to accept parameters for the operation. No other olad operations may be initiated on this nPartition until the operation is complete. The sequence number for this operation is 1.
vpar1# vparstatus -A
[Available ILM (Base /Range)]: (bytes) (MB)
0x150000000/1024
[Available ILM (MB)]: 1024
......

Addition of the Freed Memory to vpar2

1 GB of the available memory is added to vpar2 as floating memory using vparmodify. The vparstatus output shows the PASS state of the memory addition operation with sequence identifier 1. As described earlier, the sequence ID is unique within each partition, not across partitions.
OLAD: The olad infrastructure is locked and ready to accept parameters for the operation. No other olad operations may be initiated on this nPartition until the operation is complete. The sequence number for this operation is 1.
OLAD: The olad infrastructure has set the parameters for the operation. The sequence number is 1 and the parameters are "vpar memory add operation".
OLAD: The olad infrastructure has started the requested operation with sequence number 1.
......
Memory Accounting

On any given system, not all physical memory is available for application use. In a non-vPars system, the firmware takes some memory for its code and data structures before it hands the remaining memory over to the OS kernel, and the OS kernel in turn uses some memory for its own code and data structures. The memory taken by the kernel depends on the amount of memory and the other resources (such as processors) in the system.
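The accounting can be sketched as a simple subtraction. All figures below are invented for illustration; the actual amounts taken by firmware and kernel vary by platform and configuration:

```shell
# Simplified memory accounting (invented figures, for illustration only).
assigned=1024     # MB of physical memory given to the partition
firmware=4        # MB taken by firmware before the kernel boots (assumed)
kernel=5          # MB taken by the kernel for its code and data (assumed)
echo "visible to applications: $(( assigned - firmware - kernel )) MB"
```

The gap between assigned and visible memory is the "difference" the following sections examine.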
[Memory Details]
ILM, user-assigned [Base /Range]: 0x40000000/512 (bytes) (MB)
ILM, monitor-assigned [Base /Range]: 0x20000000/512 (bytes) (MB)
ILM Total (MB): 1024 (Floating 0)
CLM, monitor-assigned [CellID Base /Range]: 0 0x70080000000/1024 (bytes) (MB)
CLM (CellID MB): 0 1024 (Floating 0)

Name: vpar2
State: Down
[Memory Details]
ILM, user-assigned [Base /Range]: 0x4080000000/512 (bytes) (MB)
ILM, monitor-assigned [Base /Range]: (bytes) (MB)
ILM Total (MB): 1024 (Floating 0)
CLM, user-assigned [CellID Bas
difference is 9 MB. This difference can be larger on some systems or platforms, for some of the following reasons:

• When the amount of memory each cell contributes to interleaving is not uniform, some memory is lost during interleaving. For example, on a 3-cell system where one cell contributes 0.5 GB, a second cell contributes 1 GB, and the third cell contributes 2 GB, some amount of memory will not be available after interleaving.
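The uneven-contribution case can be illustrated with a deliberately rough model in which only the amount that every cell can match is interleaved. This is not the firmware's actual interleaving algorithm, which is more involved; the sketch only shows why uneven contributions leave memory out of the interleave:

```shell
# Rough model of interleaving with uneven cell contributions
# (illustrative only; not the firmware's actual algorithm).
c0=512; c1=1024; c2=2048      # per-cell contributions in MB

# Interleave only what every cell can match: the minimum contribution.
min=$c0
[ "$c1" -lt "$min" ] && min=$c1
[ "$c2" -lt "$min" ] && min=$c2

interleaved=$(( min * 3 ))
leftover=$(( c0 + c1 + c2 - interleaved ))
echo "${interleaved} MB interleaved, ${leftover} MB not interleaved"
```

Under this model, the 0.5 GB / 1 GB / 2 GB configuration interleaves only 3 x 0.5 GB, leaving 2 GB outside the interleave.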
References

1. Chapter 12 in HP-UX 11i Version 2 September 2004 Release Notes: HP 9000 Servers, HP Integrity Servers, and HP Workstations, http://docs.hp.com/en/5990-8153/index.html.
2. "ccNUMA Overview" white paper, http://docs.hp.com/en/4913/ccNUMA_White_Paper.pdf.
3. "CPU Configuration Guidelines for vPars" white paper, http://docs.hp.com/en/8767/cpu_config.pdf.
4. See http://docs.hp.com/en/hpux11iv3.