HP-UX 11i v3 Mass Storage I/O Performance Improvements
Introduction
HP-UX 11i v3 introduces a new mass storage subsystem that provides improvements in manageability,
availability, scalability, and performance. This paper discusses the corresponding mass storage I/O
performance improvements and related capabilities, including:
• Native multi-pathing, which automatically takes advantage of multiple hardware paths to increase
I/O throughput
• Boot and scan improvements, which decrease boot and scan times
• Crash dump performance improvements, through the parallelization of the dump process
• Improved performance tracking, reporting tools, and statistics
• New and more flexible performance related tunables
• Logical Volume Manager (LVM) performance improvements
• Async Disk driver performance improvements
These performance improvements are built into HP-UX 11i v3; they do not require the purchase or
installation of add-on products. This white paper discusses each of these capabilities in turn.
Native multi-pathing
Multi-pathing is the ability to use multiple paths to a LUN to provide the following benefits:
• Availability: transparent recovery from path failures via failover to an alternate path.
• Performance: increased I/O performance via load-balancing of I/O requests across available
paths.
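The availability benefit above can be illustrated with a minimal sketch of transparent path failover. This is not HP-UX code; the `issue_io` helper and the exception-based error signaling are assumptions made purely for illustration.

```python
def issue_io(paths, do_io):
    """Try the LUN's paths in order; fail over transparently when a path errors.

    `paths` is a list of available hardware paths to one LUN, and `do_io`
    is a callable that performs the I/O on a given path (hypothetical).
    """
    for path in paths:
        try:
            return do_io(path)
        except IOError:
            continue  # this path failed; fall through to an alternate path
    raise IOError("all paths to the LUN have failed")

# Usage: the first path is down, so the request silently moves to the second.
def flaky_io(path):
    if path == "path_A":
        raise IOError("link down")
    return f"completed on {path}"

result = issue_io(["path_A", "path_B"], flaky_io)
```

The caller never sees the failure on `path_A`; the request simply completes on the alternate path, which is what makes the recovery transparent.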
HP-UX 11i v3 provides native multi-pathing built into the mass storage stack. Because it is built into
the OS, native multi-pathing has additional manageability benefits, such as transparently handling
changes in the SAN without the need for reconfiguration. For additional information, see the Native
Multi-Pathing white paper. This paper discusses the performance benefits of HP-UX 11i v3 native
multi-pathing.
If there are multiple hardware paths from the host to a LUN (for example, through multiple HBA ports
or multiple target ports), native multi-pathing transparently distributes I/O requests across all
available paths to the LUN, using a choice of load-balancing policies. A load-balancing policy
determines how a path is chosen for each I/O request; the policies include the following:
• Round-robin policy — A path is selected in a round-robin fashion from the list of available paths.
This is the default policy for random access devices (for example, disks).
• Least-command-load policy — The path with the fewest outstanding I/O requests is selected.
• Cell-local round-robin policy — A path belonging to the cell (in cell-based servers) on which the I/O
request was initiated is selected.
• Path lock down policy — A single specified path is selected for all I/O requests to the LUN. This is
primarily used for sequential access devices (for example, tapes and autochangers).
• Preferred path policy — A user-specified preferred path is selected. This is similar to path lock
down except that it also provides automatic failover to an alternate path when the preferred path
fails.
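The two policies used for random-access devices can be sketched as follows. This is an illustrative model only, not HP-UX source: the `Path` class, the `outstanding` counter, and the path names are assumptions made for the example.

```python
from itertools import cycle

class Path:
    """A hypothetical stand-in for one hardware path to a LUN."""
    def __init__(self, name):
        self.name = name
        self.outstanding = 0  # I/O requests currently in flight on this path

def round_robin(paths):
    """Yield paths in strict rotation (the default for random-access devices)."""
    rotation = cycle(paths)
    while True:
        yield next(rotation)

def least_command_load(paths):
    """Select the path with the fewest outstanding I/O requests."""
    return min(paths, key=lambda p: p.outstanding)

# Two illustrative paths to the same LUN (names are made up).
paths = [Path("path_A"), Path("path_B")]

# Round-robin: four consecutive requests alternate between the paths.
rr = round_robin(paths)
order = [next(rr).name for _ in range(4)]

# Least-command-load: with 3 requests pending on path_A and 1 on path_B,
# the next request goes to the less loaded path_B.
paths[0].outstanding = 3
paths[1].outstanding = 1
chosen = least_command_load(paths)
```

The contrast shows why the policies suit different workloads: round-robin spreads requests evenly regardless of how long each takes, while least-command-load adapts when one path is slower and requests accumulate on it.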