6.2 Performance
When creating an open storage LUN configuration for IBM i as a client of VIOS, it is crucial to
plan for both capacity and performance. Because LUNs are virtualized for IBM i by VIOS instead
of being directly connected, it may seem that the virtualization layer necessarily adds significant
performance overhead. However, internal IBM performance tests clearly show that the VIOS
layer adds a negligible amount of overhead to each I/O operation. Instead, the tests demonstrate
that when IBM i uses open storage LUNs virtualized by VIOS, performance is determined almost
entirely by the physical and logical configuration of the storage subsystem.
The IBM Rochester, MN, performance team has run a significant number of tests with IBM i as a
client of VIOS using open storage. The resulting recommendations on configuring both the open
storage and VIOS are available in the latest Performance Capabilities Reference manual (PCRM)
at: http://www.ibm.com/systems/i/solutions/perfmgmt/resource.html. Chapter 6 focuses on
virtualized storage for IBM i. In most cases, an existing IBM i partition using physical storage is
migrated to open storage LUNs virtualized by VIOS. The recommended approach is to start with
the partition's original physical disk configuration, and then create a similar configuration with the
physical drives in the open storage subsystem on which the LUNs are created, following the
suggestions in the PCRM. A basic rule of thumb is to create at least 6-8 LUNs for an IBM i
partition, as illustrated in the sketch that follows.
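As a rough illustration of this rule of thumb, the following Python sketch divides a partition's
planned capacity evenly across a chosen LUN count. The capacity figure and function name are
hypothetical examples for illustration only, not output from an IBM sizing tool:

  # Hypothetical sizing sketch: spread a partition's planned capacity (GB)
  # evenly across a LUN count within the recommended range of at least 6-8.
  def size_luns(total_gb: float, lun_count: int = 8) -> list[float]:
      """Return a list of equal LUN sizes (GB) for the given total capacity."""
      if lun_count < 6:
          raise ValueError("Rule of thumb: use at least 6-8 LUNs per IBM i partition")
      per_lun = total_gb / lun_count
      return [round(per_lun, 1)] * lun_count

  # Example: a partition migrating from 560 GB of physical disk
  print(size_luns(560))   # 8 LUNs of 70.0 GB each

Detailed sizing should still follow the PCRM recommendations and, where appropriate, a Disk
Magic analysis as described next.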
The commonly used SAN disk sizing tool Disk Magic can also be used to model the projected
IBM i performance of different physical and logical drive configurations on supported subsystems.
You can work with IBM Techline or your IBM Business Partner for a Disk Magic analysis. The
latest version of Disk Magic includes support for multiple open storage subsystems and IBM i as
a virtual client of VIOS.
6.3 Dual hosting and multi-path I/O (MPIO)
An IBM i client partition in this environment has a dependency on VIOS: if the VIOS partition fails,
IBM i on the client will lose contact with the virtualized open storage LUNs. The LUNs would also
become unavailable if VIOS is brought down for scheduled maintenance or a release upgrade. To
remove this dependency, two or more VIOS partitions can be used to simultaneously provide
virtual storage to one or more IBM i client partitions.
6.3.1 Dual VIOS LPARs with IBM i mirroring
Prior to the availability of redundant VIOS LPARs with client-side MPIO for IBM i, the only
method to achieve VIOS redundancy was to use mirroring within IBM i. This configuration uses
the same concepts as the single-VIOS configuration described in the "Virtual SCSI and storage
virtualization concepts" section. In addition, at least one more VSCSI client adapter exists in the
client LPAR, connected to a VSCSI server adapter in the second VIOS
on the same Power server. A second set of LUNs of the same number and size is created on the
same or a different open storage subsystem, and connected to the second VIOS. The host-side
configuration of the second VIOS mimics that of the first host, with the same number of LUNs
(hdisks), vtscsiX and vhostX devices. As a result, the client partition recognizes a second set of
virtual disks of the same number and size. To achieve redundancy, adapter-level mirroring in IBM
i is used between the two sets of virtualized LUNs from the two hosts. Thus, if a VIOS partition
fails or is taken down for maintenance, mirroring will be suspended, but the IBM i client will
continue to operate. When the inactive VIOS is either recovered or restarted, mirroring can be
resumed in IBM i.
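Because adapter-level mirroring requires the two sets of virtual disks to match in number and
size, a simple consistency check before configuring mirroring can catch setup mistakes. The
following is a minimal Python sketch of such a check; the hdisk names, sizes, and the helper
function are hypothetical and do not come from an actual system or IBM utility:

  # Hypothetical consistency check before starting IBM i mirroring:
  # both VIOS hosts should present the same number of virtual disks,
  # and the sets of disk sizes (GB) should match pairwise.
  def mirror_sets_match(vios1_luns: dict[str, int], vios2_luns: dict[str, int]) -> bool:
      """Return True if both VIOS present equally many LUNs with matching sizes."""
      if len(vios1_luns) != len(vios2_luns):
          return False
      return sorted(vios1_luns.values()) == sorted(vios2_luns.values())

  vios1 = {"hdisk4": 70, "hdisk5": 70, "hdisk6": 70}
  vios2 = {"hdisk2": 70, "hdisk3": 70, "hdisk7": 70}
  print(mirror_sets_match(vios1, vios2))  # True: the two sets can be mirrored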
6.3.2 Path redundancy to a single set of LUNs
Note that the dual-VIOS solution above provides a level of redundancy by attaching two separate
sets of open storage LUNs to the same IBM i client through separate VIOS partitions. It is not an