IBM i Virtualization and Open Storage (read-me first)
Mike Schambureck and Keith Zblewski (schambur@us.ibm.com and zblewski@us.ibm.com)
IBM Lab Services – IBM Power Systems, Rochester, MN
March 2013
This “read-me first” paper provides detailed instructions on using IBM i 6.1/7.1 virtualization and on connecting open storage to IBM i. It covers the prerequisites, supported hardware and software, planning considerations, installation, and post-install tasks such as backups.
1 IBM i virtualization solutions
IBM i 6.1 introduced three significant virtualization capabilities that allow faster deployment of IBM i workloads within a larger heterogeneous IT environment. This section introduces and differentiates these new technologies.
Note: The October 2012 IBM i announcements stated that on POWER7+ models, native IBM i I/O is supported with IBM i 7.1 only; there is no native I/O support for IBM i 6.1 on these models. IBM i 6.1 can still run as a client partition, with its I/O provided through either an IBM i 7.1 host partition or VIOS.
2 IBM i hosting IBM i 2.1 IBM i hosting IBM i concepts The capability of an IBM i partition to host another IBM i partition involves hardware and virtualization components. The hardware components are the storage, optical and network adapters and devices physically assigned to the host IBM i LPAR. The virtualization components are the system firmware and IBM i operating system objects necessary to virtualize the physical I/O resources to client partitions.
partition is supported for IBM i in this environment, by assigning both types of adapters to the partition in the HMC or SDMC.
2.1.2 Storage virtualization
2.1.2.1 Disk virtualization
To virtualize integrated disk (SCSI, SAS or SSD) or LUNs from a SAN system to an IBM i client partition or virtual server, both HMC/SDMC and IBM i objects must be created.
Figure 3: Example of what the client partition sees as storage. Storage spaces for an IBM i client partition do not have to match physical disk sizes; they can be created from 160 MB to 1 TB in size, as long as there is available storage in the host. The 160 MB minimum size is a requirement from the storage management Licensed Internal Code (LIC) on the client partition. For an IBM i client partition, up to 16 NWSSTGs can be linked to a single NWSD, and therefore, to a single VSCSI connection.
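To illustrate the host-side setup, here is a minimal sketch of the IBM i CL commands involved; the object names and the 50 GB size are placeholders, and the NWSD (CLIENT1 here) is assumed to have been created already with CRTNWSD and TYPE(*GUEST):
CRTNWSSTG NWSSTG(CLIENT1D1) NWSSIZE(51200) FORMAT(*OPEN) TEXT('Disk 1 for IBM i client')
ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(CLIENT1)
VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)
Varying on the NWSD makes the linked storage spaces available to the client partition over the VSCSI connection.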
2.1.2.3 Optical virtualization Any optical drive supported in the host IBM i LPAR can be virtualized to an IBM i client LPAR. An existing VSCSI connection can be used, or a new connection can be created explicitly for optical I/O traffic. By default, if a VSCSI connection exists between host and client, all physical and virtual OPTxx optical drives in the host will be available to the client, where they will also be recognized as OPTxx devices.
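On the client side, a quick sketch of listing and varying on one of the virtualized optical devices follows; OPT01 is an example device name, so use whatever name auto-configuration assigned on your system:
WRKCFGSTS CFGTYPE(*DEV) CFGD(OPT*)
VRYCFG CFGOBJ(OPT01) CFGTYPE(*DEV) STATUS(*ON)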
Storage adapters (FC, SAS, SCSI): Yes. Must be supported by IBM i 6.1 or higher and supported on a POWER6/POWER7-based IBM Power server.
Storage devices and subsystems: Yes. Must be supported by IBM i 6.1 or higher and supported on a POWER6/POWER7-based IBM Power server.
Network adapters: Yes. Must be supported by IBM i 6.1 or higher and supported on a POWER6/POWER7-based IBM Power server.
Optical devices: Yes. Must be supported by IBM i 6.1 or higher and supported on a POWER6/POWER7-based IBM Power server.
Tape devices: Yes*. Must be supported by IBM i 6.1 or higher and supported on a POWER6/POWER7-based IBM Power server.
is created, it occupies that amount of physical storage in the host IBM i LPAR, even if the disk capacity is only 50% utilized in the client LPAR. The number of NWSSTG objects also matters to the client IBM i partition, because more storage spaces allow more concurrent disk I/O. It is recommended to closely match the total size of the storage spaces for each client partition to its initial disk requirements.
The hosting IBM i partition will experience increased memory paging as a result of its hosting role. Monitor and adjust the memory pools using the tips in the section “General Performance Information, Tips, and Techniques” of the Performance Capabilities Reference manual at: (http://www.ibm.com/systems/i/solutions/perfmgmt/resource.html)
- A logical port on an HEA
- A virtual Ethernet adapter
Note that both physical and virtual I/O resources can be assigned to an IBM i virtual client partition. If a physical network adapter was not assigned to the IBM i client partition when it was first created, refer to the topic Managing physical I/O devices and slots dynamically using the HMC in the Power Systems Logical Partitioning Guide at: (http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf)
2.5.4 Configuring Electronic Customer Support (ECS) over LAN
A supported WAN adapter can be assigned to the IBM i client partition for ECS. Alternatively, ECS over LAN can be configured. Refer to the topic Setting up a connection to IBM in the Information Center at: http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp?topic=/rzaji/rzaji_setup.htm.
2.5.7 Operational considerations Remember that the IBM i host partition’s state may affect the IBM i client partition. For instance, when the hosting partition is restarted, the client partition loses contact with its disk. This results in a system reference code sequence of A6xx0255 shown for the client partition. The client VSCSI adapter loses its virtual path to the server when the server is rebooted and any I/O that was in flight is returned to the device drivers as "aborted".
3 IBM i using open storage configurations
3.1 Storage Area Networks 101
There are some basic SAN concepts that need to be understood by a person coming from an integrated disk background:
- The disk drives installed in a SAN are not directly seen by IBM i. They are typically grouped into RAID sets of drives called arrays, extent pools or storage pools. RAID 0, 1, 5, 6 and 10 are generally supported.
- The disks are RAIDed in the SAN and should not* be protected on IBM i.
As a client of VIOS, with VIOS facilitating the access to the storage using N-Port ID virtualization (NPIV). In this case VIOS does not see the LUNs; they are mapped to a virtual WWPN that the IBM i partition owns. More on that later.
The table below applies to both of the connection methods: IBM i direct attachment and IBM i attachment through VIOS. This paper does not attempt to list the full device support of VIOS, nor of any other clients of VIOS, such as AIX and Linux.
Another source of information is the IBM i POWER External Storage Support Matrix Summary at http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4563
To verify the supported combinations of adapters, servers and SANs, use the System Storage Interoperation Center (SSIC) at: http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
3.4 Software and firmware: Direct attached
Software or firmware type (minimums):
- IBM i 6.1 with LIC 6.1.1 or higher
- IBM i 6.1 without LIC 6.1.1
- IBM i 5.
4 IBM i using direct attach storage
The capability to use open storage directly attached to IBM i has been around for a while, but only on SANs that can support 520-byte sectors, such as the DS8000. In more recent releases of IBM i, the capability to handle 512-byte sectors has expanded the list of SANs that can be directly attached; the DS5300 and DS5100 are on this list. The following sections address a high-level view of the configuration steps involved.
5 IBM i hosted by VIOS 5.1 Virtual SCSI and Storage virtualization concepts The IBM i storage portfolio now includes integrated SCSI, SAS or Solid State disk (SSD), as well as FC-attached storage subsystems that support direct attachment to IBM i, as discussed in the prior section. The capability to use open storage through VIOS extends the IBM i storage portfolio to include other 512-byte-per-sector storage subsystems.
The hardware and virtualization components for attaching open storage to IBM i illustrated in Figure 4 also apply to using DS5000, XIV and other subsystems supported for this solution, as listed in the “IBM i using open storage supported configurations” section. Three management interfaces are available for virtualizing I/O resources through VIOS to IBM i: the Hardware Management Console (HMC), the Systems Director Management Console (SDMC) and the Integrated Virtualization Manager (IVM).
Figure 5: Using NPIV for IBM i From the storage subsystem’s perspective, the LUNs, volume group and host connection are created as though the IBM i LPAR is directly connected to the storage through the SAN fabric. While VIOS still plays a role with NPIV, that role is much more of a passthrough one when compared to VSCSI. Note that an 8Gb or 16Gb FC adapter is required; however, the FC switches do not need to be at that speed.
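For reference, a minimal sketch of the VIOS CLI steps that pair a virtual FC server adapter with an NPIV-capable physical port follows; vfchost0 and fcs0 are example device names from an assumed configuration:
lsnports
vfcmap -vadapter vfchost0 -fcp fcs0
lsmap -all -npiv
lsnports lists the NPIV-capable physical FC ports, vfcmap maps the virtual FC server adapter to one of them, and lsmap -all -npiv verifies the mapping and shows whether the client’s virtual WWPNs have logged in to the fabric.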
In an IBM i client partition, a VSCSI client adapter is recognized as a type 290A DCxx storage controller device. Figure 6 depicts the VSCSI client adapter, as well as several open storage LUNs and an optical drive virtualized by VIOS: Figure 6: VSCSI client adapter and open storage LUNs and an optical drive as seen in IBM i. In VIOS, a VSCSI server adapter is recognized as a vhostX device: Figure 7: Example of VSCSI adapter seen as vhost0 by VIOS.
LUNs available in VIOS can then be linked to the new vhostX device through vtscsiX devices, making them available to IBM i. Prior to May 2009, creating vtscsiX devices and thus virtualizing open storage LUNs to IBM i was necessarily a task performed only on the IVM/VIOS command line when the server was HMCmanaged. When the server is IVM-managed, assignment of virtual resources is performed using the IVM browser-based interface for the first 16 devices for a client partition.
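When working on the VIOS command line, a brief sketch of virtualizing a LUN to the client looks like the following; hdisk4 and vhost0 are example names that must be matched to your own lsdev and lsmap output:
lsdev -type disk
lsmap -vadapter vhost0
mkvdev -vdev hdisk4 -vadapter vhost0
The mkvdev command creates the vtscsiX virtual target device that ties the hdisk to the VSCSI server adapter; running lsmap -vadapter vhost0 again should then show the LUN as a backing device for the IBM i client.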
Log into VIOS with padmin or another administrator user ID. VIOS always has partition ID 1 when IVM is used, and by default carries the serial number of the blade as a name.
Specify the next available Server Adapter slot number (determined above) and select “Only selected client partition can connect”. Do NOT use “Any partition can connect”; that option will not work.
Click the Virtual Adapters tab and then click Actions -> Create Virtual Adapter -> SCSI Adapter…
Specify the next available (determined above) Client Adapter slot number and select the VIOS from the Server partition list.
Specify the next available (determined above) Server Adapter slot number and click OK to create the client Virtual SCSI adapter.
As soon as they are available in VIOS, open storage LUNs and optical devices can be assigned to IBM i using one of the following options:
One use of the command line is to give the LUN mappings more descriptive names so they are easier to manage. The make virtual device (mkvdev) command has a -dev parameter that can be used to assign such a name, as shown in the sketch below. Refer to the VIOS command reference for more details at: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/iphcg/iphcg.pdf
If the QAUTOCFG system value in IBM i is set to 1 (which is the default), the new virtual resources will become available in IBM i immediately.
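As a sketch, assigning a descriptive name while mapping a LUN could look like this (hdisk5, vhost0 and ibmi_ld2 are placeholders):
mkvdev -vdev hdisk5 -vadapter vhost0 -dev ibmi_ld2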
Select the cd#/virtual optical device and assign it to a partition.
5.4.5 Assigning a physical or virtual Optical/DVD/CD drive using IVM
IVM has an interface to move the optical drive under the View/Modify Storage task, on the Optical/CD tab. Select the cd#/virtual optical device and assign it to a partition.
5.5 Network virtualization
Using virtual Ethernet adapters for partition-to-partition communication within a Power server is an existing Power server capability.
If this VE adapter is for VIOS and it will be used for a shared Ethernet adapter (SEA) (described in the following sections), select the Use this adapter for Ethernet bridging option. Click OK to create the adapter. This is a dynamic add to the partition, so go to Configuration -> Save Current Configuration to save the change to the partition profile. Additionally, link aggregation is supported for failover and load balancing across multiple Ethernet ports.
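As a sketch only, a link aggregation device can be created on the VIOS CLI before it is used as the physical side of an SEA; ent0 and ent1 are assumed to be the physical ports being aggregated, and 802.3ad mode is just one possible setting:
mkvdev -lnagg ent0,ent1 -attr mode=8023ad
The command returns a new entX device, which can then be specified as the physical adapter when the SEA is created.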
5.5.5 Shared Ethernet Adapters (SEAs)
In VIOS, the same type of Ethernet device, entX, is used for logical Host Ethernet ports and for physical and virtual Ethernet adapters: Figure 8: Example list of Ethernet ports in VIOS.
Inter-partition communication is facilitated via virtual Ethernet adapters using the same PVIDs. VIOS provides client partitions with access to external networks by bridging a physical Ethernet adapter and one or more virtual Ethernet adapters.
5.5.5.3 Configuring SEAs using the VIOS CLI The following document describes how to set up auto mode using the command line interface, but you can substitute sharing for the ha_mode parameter value. Refer to the following steps at: https://www-304.ibm.com/support/docview.
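For illustration, a minimal sketch of creating an SEA with load sharing from the VIOS CLI follows; the adapter names, PVID and control channel are assumptions that must match your own configuration:
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent3
Here ent0 is the physical (or link aggregation) adapter, ent2 is the bridging virtual Ethernet adapter with PVID 1, and ent3 is the control channel the two VIOS partitions use to coordinate failover and load sharing.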
cfgdev If you are using IVM, click Hardware Inventory and then click Configure devices to run the cfgdev command. After detecting the device, VIOS will automatically configure it and make it available for use. It can then be virtualized to the IBM i client partition using the HMC, as described in the previous section. Note that if the SAN configuration is performed before the VIOS partition boots, this step is not necessary, as VIOS will recognize all available devices at boot time.
6.2 Performance
When creating an open storage LUN configuration for IBM i as a client of VIOS, it is crucial to plan for both capacity and performance. Because LUNs are virtualized for IBM i by VIOS instead of being directly connected, it may seem that the virtualization layer must add significant performance overhead. However, internal IBM performance tests clearly show that the VIOS layer adds a negligible amount of overhead to each I/O operation.
MPIO solution that provides redundant paths to a single set of LUNs. There are two MPIO scenarios possible with VIOS that remove the requirement for two sets of LUNs:
- A single VIOS partition using two FC adapters to connect to the same set of LUNs
- Two VIOS partitions providing redundant paths to the same set of LUNs on a single open storage subsystem
chdev -dev hdiskX -attr algorithm=round_robin
or
chdev -dev hdiskX -attr algorithm=fail_over
These commands must be repeated for each hdisk.
6.3.2.3 Redundant VIOS LPARs with client-side MPIO (VSCSI)
Beginning with IBM i 6.1 with Licensed Internal Code (LIC) 6.1.1, the IBM i VSCSI client driver supports MPIO through two or more VIOS partitions to a single set of LUNs (up to a maximum of eight VIOS partitions).
Note: For DS5000, DS4000 and DS3000 storage subsystems with dual controllers, a connection must be made to both controllers to allow an active and a failover path. When the volumes are created on these systems, the host OS type should be set to DEFAULT or AIX (not AIX ADT/AVT, or failover/failback oscillations might occur). For all storage systems, it is recommended that the fabric configuration use separate dedicated zones and FC cables for each connection.
Note: The order of the parameters can be changed, as shown, to facilitate repeating the command and only having to alter the hdisk number. Note: Some low-end SANs might not handle the larger number of concurrent commands as well, which can adversely affect performance. For redundant VIOS servers, each server needs to be able to access the same hdisks, so another attribute needs to be set on each hdisk: reserve_policy=no_reserve. Separate the attribute assignments with a space on the command, as in the sketch below.
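A sketch of the combined form follows; the queue_depth value of 32 is only an example that should come from your own sizing, while reserve_policy=no_reserve is the attribute needed for dual-VIOS access:
chdev -attr queue_depth=32 reserve_policy=no_reserve -dev hdisk3
chdev -attr queue_depth=32 reserve_policy=no_reserve -dev hdisk4
If an hdisk is already in use, the -perm flag can be added so the change is written to the device database and takes effect at the next restart of VIOS.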
7 Attaching open storage to IBM i through VIOS
As described in the “Virtual SCSI and Storage virtualization concepts” section under “IBM i hosted by VIOS”, IBM i joins the VIOS virtualization environment, allowing it to use open storage.
host can access. These zone sets are combined under a zone configuration for saving and enabling/activating. See the switch manuals for the specifics. To assist in the SAN zoning of the virtual WWPNs associated with NPIV configurations, there are management console commands to list and to log in/log out the virtual WWPNs. The login option makes them appear on the FC switches to enable easier configuration. See the lsnportlogin and chnportlogin commands in the HMC command reference located here: http://publib.
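As an example of those commands on the HMC command line, the following sketch lists and then logs in the virtual WWPNs; <managed-system> and <partition-name> are placeholders for your own system and IBM i partition names:
lsnportlogin -m <managed-system> --filter "lpar_names=<partition-name>"
chnportlogin -o login -m <managed-system> -p <partition-name>
The login operation causes the partition’s virtual WWPNs to log in to the fabric so they can be seen and zoned on the FC switches before the IBM i partition is ever activated.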
This is because AIX partitions can also use this interface and AIX supports up to 256 hdisks per adapter (though they seldom map that many). If you map more than 16, the additional hdisks will not be seen by IBM i. Choose an unassigned hdisk to map to the adapter and click Assign. Repeat this process for each hdisk. As the mapping completes, the IBM i client partition should be listed as the new hdisk owner. Close the window when done.
You may have to check the port type defined on the tape media library for the fibre channel port associated with the tape device:
- Log into the tape library interface.
- Go to Configure Library.
- Select Drives.
- Set the port type to N-Port.
Accordingly, DS8000 LUNs might be created as “IBM i protected” or “IBM i unprotected” and will correctly report as such to Storage Management in IBM i.
8 IBM i tape options
An IBM i client partition with a VIOS host can use a mix of virtual and physical I/O resources. If a single physical tape device is supported in the VIOS hosting partition, it can be virtualized to the client partition(s) by using a VSCSI adapter pair between VIOS and the client IBM i. This is configured through whichever management console is being used. See the following sections for details.
Use telnet or PuTTY to connect to the VIOS partition.
Sign in using padmin as the user ID.
Enter cfgdev to check for new devices.
Enter lsdev | grep rmt to view the tape devices and ensure that they are in Available state.
Enter lsdev | grep vhost and note the last vhost listed there. You need to associate this device with a VSCSI adapter pair; use the SDMC interface to create the adapter pair, as described in the earlier section on creating Virtual SCSI adapters.
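Once the VSCSI adapter pair exists and the corresponding vhost device is identified, a minimal sketch of mapping the physical tape drive from the VIOS CLI follows; rmt0 and vhost1 are example device names:
mkvdev -vdev rmt0 -vadapter vhost1
lsmap -vadapter vhost1
The tape drive should then be reported as a TAPxx device in the IBM i client once auto-configuration detects it.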
9 IBM i installation and configuration
Configuring an IBM i client partition as a client of VIOS is the same as configuring it as a client of an IBM i 6.1 host partition. Refer to the Creating an IBM i logical partition that uses IBM i virtual I/O resources using the HMC topic in the Logical Partitioning Guide at: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf. The guide does not address VIOS hosting IBM i, but the VSCSI and virtual Ethernet adapter concepts are the same.
Figure 10: Sample communications resources.
If the IBM i client partition is to use a virtual Ethernet adapter to communicate with an external network, additional configuration must be done in the VIOS hosting partition. An SEA must be created in VIOS to bridge the internal virtual LAN (VLAN) to the external LAN. Use the HMC/SDMC and the instructions in the Network virtualization section to perform the SEA configuration.
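To illustrate the client-side configuration, a hedged sketch of the CL commands follows; the CMN03 resource name, the line description name and the IP addresses are placeholders that must be replaced with values from your environment:
CRTLINETH LIND(ETHLIN1) RSRCNAME(CMN03)
VRYCFG CFGOBJ(ETHLIN1) CFGTYPE(*LIN) STATUS(*ON)
ADDTCPIFC INTNETADR('192.0.2.10') LIND(ETHLIN1) SUBNETMASK('255.255.255.0')
ADDTCPRTE RTEDEST(*DFTROUTE) NEXTHOP('192.0.2.1')
STRTCPIFC INTNETADR('192.0.2.10')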
In IVM, click View/Modify Partitions. Select the IBM i partition. From the More Tasks list, select Operator panel service functions. Select the function you wish to perform and click OK.
If the system is SDMC-managed, follow these steps: On the SDMC welcome page, select the host you are working with. Right-click the host and click Service and Support -> Control Panel Functions -> (20) Type, Model, Feature. The function is limited to displaying the hardware information listed.
10 DS5000 direct attachment to IBM i
10.1 Overview
IBM i has been able to connect to 512-byte-per-sector open storage through VIOS since early 2008, and the solution is implemented in many production environments. In October 2009, IBM further simplified open storage use for IBM i by announcing direct FC attachment to the DS5100 and DS5300 subsystems. With the new enhancement, the IBM i LPAR owns one or more FC adapters, which are connected to the SAN fabric.
As of the publication date, only the following software or firmware is supported for IBM i directly attached to DS5100 and DS5300 systems. This support is also reflected in the “IBM i using open storage supported configurations” section.
- IBM i 6.1 with LIC 6.1.1 or higher
- Controller firmware 07.60.28.00 or higher
- DS Storage Manager 10.60.x5.17 or higher
- IBM i Host Attachment Kit, FC #7735 (required)
- Storage Partitioning (strongly recommended)
results for IBM i directly attached to a DS5300 system are in Chapter 5.1.2 of the Performance Capabilities Reference manual at: http://www-03.ibm.com/systems/resources/systems_power_software_i_perfmgmt_pcrm_apr2011.pdf.
10.4 Sizing and configuration
There are two main sources of sizing information when planning a configuration involving IBM i directly accessing DS5100 or DS5300 systems: the Disk Magic sizing tool and the Performance Capabilities Reference manual mentioned in the previous chapter.
11 Redundant VIOS Virtual Servers hosting IBM i For a production IBM i virtual server hosted by VIOS, you want redundant VIOS virtual servers in case of failures or scheduled maintenance on the hosting VIOS virtual server. IVM cannot be used since it is actually running in VIOS and can only manage that virtual server. For a Power server or a POWER blade, either an HMC or an SDMC can be used. You need to consider the following steps for the configuration of this environment.
12 Copy Services and IBM i
12.1 DS4000 and DS5000
IBM has conducted some basic functional testing of DS4000 and DS5000 Copy Services with IBM i as a client of VIOS. In this section, you will find information on the scenarios tested and the resulting statements of support for using DS4000 and DS5000 Copy Services with IBM i.
12.1.1 FlashCopy and VolumeCopy
PowerHA® for IBM i. The components of the solution – DS4000 or DS5000 FlashCopy/VolumeCopy, VIOS and IBM i – must be managed separately and require the corresponding skill set. Also note that support for this solution will be provided by multiple IBM support organizations and not solely by the IBM i Support Center. Support statements: DS4000 and DS5000 FlashCopy and VolumeCopy are supported by IBM i as a client of VIOS on both IBM Power servers and IBM Power blades.
Figure 12: the test environment used for ERM. 12.1.3 ERM support statements The use of DS4000 and DS5000 Enhanced Remote Mirroring with IBM i as a client of VIOS is supported as outlined in this section. Note that to implement and use this solution, multiple manual steps on the DS4000 or DS5000 storage subsystem, in VIOS and in IBM i are required. Currently, no toolkit that automates this solution exists and it is not part of IBM PowerHA for IBM i.
Support statements for ERM:
- DS4000 and DS5000 ERM is supported by IBM i as a client of VIOS on both IBM Power servers and IBM Power blades.
- Synchronous ERM (DS4000 and DS5000 Metro Mirror) is supported.
- Asynchronous ERM with Write Consistency Groups (DS4000 and DS5000 Global Mirror) is supported.
- Asynchronous ERM (DS4000 and DS5000 Global Copy) is not supported.
- Full-system ERM (Metro Mirror and Global Mirror) for a planned switchover (IBM i production LPAR is powered off) is supported.
Figure 13: The test environment used for FlashCopy.
12.2.1.2 FlashCopy statements
The use of SVC FlashCopy with IBM i as a client of VIOS is supported as outlined below. Please note that to implement and use this solution, multiple manual steps in SVC, in VIOS and in IBM i are required. Currently, no toolkit that automates this solution exists and it is not part of IBM PowerHA for IBM i.
12.2.2 Metro and Global Mirror 12.2.2.1 Test scenario Figure 14 shows the test environment used for Metro and Global Mirror. Figure 14: The test environment used for Metro Mirror and Global Mirror. 12.2.2.2 Metro Mirror and Global Mirror support statements The use of SVC Metro Mirror and Global Mirror with IBM i as a client of VIOS is supported as outlined in the following section. Note that to implement and use this solution, multiple manual steps in SVC, in VIOS and in IBM i are required.
Support statements for SVC Metro and Global Mirror:
- Both SVC Metro Mirror and Global Mirror are supported by IBM i as a client of VIOS on both IBM Power servers and IBM Power blades.
- Only full-system replication is supported. Replication of IASPs with Metro Mirror or Global Mirror is not supported.
- Both Metro Mirror and Global Mirror are supported for a planned switchover (IBM i production partition is powered off).
12.4 DS5000 Direct attach Copy Services
IBM has conducted some basic functional testing of DS5100 and DS5300 Copy Services when directly attached to IBM i. This section summarizes the resulting support statements. Note that using Copy Services when directly attaching DS5100 and DS5300 storage systems involves manual steps in both the DS Storage Manager GUI and IBM i. PowerHA for IBM i does not support DS5100 and DS5300 Copy Services when directly attached to IBM i (or through VIOS).
13 Appendix
13.1 Additional resources
These websites provide useful references to supplement the information contained in this paper.
13.2 IBM i
- IBM i on a Power Blade Read-me First: http://www.ibm.com/systems/power/hardware/blades/ibmi.html
- IBM STG Lab Services: http://www.ibm.com/systems/services/labservices/contact.html
- Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf
- IBM i installation: http://publib.boulder.ibm.
- IBM Midrange System Storage Hardware Guide (Redbook): http://www.redbooks.ibm.com/abstracts/sg247676.html?Open
- IBM System Storage DS3500: Introduction and Implementation Guide: http://www.redbooks.ibm.com/redpieces/pdfs/sg247914.pdf
13.4 VIOS
- PowerVM Editions Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/arecu/arecukickoff.htm
- Advanced POWER Virtualization on IBM System p5: Introduction and Configuration (Redbook): http://www.redbooks.ibm.
14 Trademarks and disclaimers This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Refer to your local IBM business contact for information on the IBM offerings available in your area. Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources.
Cooling, Chiphopper, Chipkill, Cloudscape, DataPower, DB2 OLAP Server, DB2 Universal Database, DFDSM, DFSORT, DS4000, DS6000, DS8000, e-business (logo), e-business on demand, EnergyScale, Enterprise Workload Manager, eServer, Express Middleware, Express Portfolio, Express Servers, Express Servers and Storage, General Purpose File System, GigaProcessor, GPFS, HACMP, HACMP/6000, IBM Systems Director Active Energy Manager, IBM TotalStorage Proven, IBMLink, IMS, Intelligent Miner, iSeries, Micro-Partitioning, N