Fibre Channel SAN Configuration Guide
ESX 4.0
ESXi 4.0
vCenter Server 4.0

This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs.
You can find the most up-to-date technical documentation on the VMware Web site at http://www.vmware.com/support/. The VMware Web site also provides the latest product updates.

If you have comments about this documentation, submit your feedback to docfeedback@vmware.com.

Copyright © 2009, 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents

Updated Information
About This Book
1 Overview of VMware ESX/ESXi
   Introduction to ESX/ESXi
   Understanding Virtualization
   Interacting with ESX/ESXi Systems
2 Using ESX/ESXi with Fibre Channel SAN
   Storage Area Network Concepts
   Overview of Using ESX/ESXi with a SAN
   Understanding VMFS Datastores
   Making LUN Decisions
   Specifics of Using SAN Storage with ESX/ESXi
   How Virtual Machines Access Data on a SAN
   Understanding Multipathing and Failover
   Choosing Virtual Machine Locations
5 Using Boot from SAN with ESX Systems
   Setting Up the Emulex FC HBA for Boot from SAN
6 Managing ESX/ESXi Systems That Use SAN Storage
   Viewing Storage Adapter Information
   Viewing Storage Device Information
   Viewing Datastore Information
   Resolving Display Issues
   N-Port ID Virtualization
   Path Scanning and Claiming
   Path Management and Manual, or Static, Load Balancing
   Path Failover
   Set Device Driver Options for SCSI Controllers
   Sharing Diagnostic Partitions
   Disable Automatic Host Registration
Updated Information

This Fibre Channel SAN Configuration Guide is updated with each release of the product or when necessary. This table provides the update history of the Fibre Channel SAN Configuration Guide.

Revision EN-000109-05 – "HP StorageWorks XP," on page 40 and Appendix A, "Multipathing Checklist," on page 77 have been changed to include host mode parameters required for HP StorageWorks XP arrays.
About This Book

The Fibre Channel SAN Configuration Guide explains how to use VMware® ESX™ and VMware ESXi systems with a Fibre Channel storage area network (SAN). The manual discusses conceptual background, installation requirements, and management information in the following main topics:

- Understanding ESX/ESXi – Introduces ESX/ESXi systems for SAN administrators.
Technical Support and Education Resources

The following technical support resources are available to you. To access the current version of this book and other books, go to http://www.vmware.com/support/pubs.

Online and Telephone Support – To use online support to submit technical support requests, view your product and contract information, and register your products, go to http://www.vmware.com/support.
1 Overview of VMware ESX/ESXi

You can use ESX/ESXi in conjunction with a Fibre Channel storage area network (SAN), a specialized high-speed network that uses the Fibre Channel (FC) protocol to transmit data between your computer systems and high-performance storage subsystems. Using ESX/ESXi with a SAN provides extra storage for consolidation, improves reliability, and helps with disaster recovery.

To use ESX/ESXi effectively with a SAN, you must have a working knowledge of ESX/ESXi systems and SAN concepts.
The virtualization layer schedules the virtual machine operating systems and, if you are running an ESX host, the service console. The virtualization layer manages how the operating systems access physical resources. The VMkernel must have its own drivers to provide access to the physical devices. VMkernel drivers are modified Linux drivers, even though the VMkernel is not a Linux variant.
CPU, Memory, and Network Virtualization

A VMware virtual machine provides complete hardware virtualization. The guest operating system and applications running on a virtual machine can never determine directly which physical resources they are accessing (such as which physical CPU they are running on in a multiprocessor system, or which physical memory is mapped to their pages). The following virtualization processes occur.
Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides in a VMware Virtual Machine File System (VMFS) datastore, an NFS-based datastore, or on a raw disk. From the standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI controller.
Raw Device Mapping

A raw device mapping (RDM) is a special file in a VMFS volume that acts as a proxy for a raw device, such as a SAN LUN. With the RDM, the SAN LUN can be directly and entirely allocated to a virtual machine. The RDM provides some of the advantages of a virtual disk in the VMFS file system, while keeping some advantages of direct access to physical devices.
2 Using ESX/ESXi with Fibre Channel SAN

When you set up ESX/ESXi hosts to use FC SAN array storage, special considerations are necessary. This section provides introductory information about how to use ESX/ESXi with a SAN array.
Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts. Usually, LUN masking is performed at the SP or server level.

Ports

In the context of this document, a port is the connection from a device into the SAN. Each node in the SAN (a host, storage device, or fabric component) has one or more ports that connect it to the SAN.
Overview of Using ESX/ESXi with a SAN

Using ESX/ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESX/ESXi with a SAN also supports centralized management and failover and load balancing technologies.

The following are benefits of using ESX/ESXi with a SAN:

- You can store data redundantly and configure multiple paths to your storage, eliminating a single point of failure.
Disaster recovery – Having all data stored on a SAN facilitates the remote storage of data backups. You can restart virtual machines on remote ESX/ESXi hosts for recovery if one site is compromised.

Simplified array migrations and storage upgrades – When you purchase new storage systems or arrays, use Storage VMotion to perform live automated migration of virtual machine disk files from existing storage to their new destination.
Figure 2-1. Sharing a VMFS Datastore Across ESX/ESXi Hosts (the figure shows hosts ESX/ESXi A, B, and C, each running a virtual machine whose virtual disk files reside on a shared VMFS volume)

Because virtual machines share a common VMFS datastore, it might be difficult to characterize peak-access periods or to optimize performance. You must plan virtual machine storage access for peak periods, but different applications might have different peak-access periods.
You might want more, smaller LUNs for the following reasons:

- Less wasted storage space.
- Different applications might need different RAID characteristics.
- More flexibility, as the multipathing policy and disk shares are set per LUN.
- Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN.
- Better performance because there is less contention for a single volume.
Use Disk Shares to Prioritize Virtual Machines

If multiple virtual machines access the same VMFS datastore (and therefore the same LUN), use disk shares to prioritize the disk accesses from the virtual machines. Disk shares distinguish high-priority from low-priority virtual machines.

Procedure
1 Start a vSphere Client and connect to vCenter Server.
2 Select the virtual machine in the inventory panel and click Edit virtual machine settings from the menu.
If you decide to run the SAN management software on a virtual machine, you gain the benefits of running a virtual machine, including failover using VMotion and VMware HA. Because of the additional level of indirection, however, the management software might not be able to detect the SAN. This problem can be resolved by using an RDM.

NOTE Whether a virtual machine can run management software successfully depends on the particular storage system.
Host-Based Failover with Fibre Channel

To support multipathing, your host typically has two or more HBAs available. This configuration supplements the SAN multipathing configuration, which generally provides one or more switches in the SAN fabric and one or more storage processors on the storage array device itself.

In Figure 2-2, multiple physical paths connect each server with the storage device.
- Handles I/O queueing to the physical storage HBAs.
- Handles physical path discovery and removal.
- Provides logical device and physical path I/O statistics.

As Figure 2-3 illustrates, multiple third-party MPPs can run in parallel with the VMware NMP. The third-party MPPs can replace the behavior of the NMP and take complete control of the path failover and the load-balancing operations for specified storage devices.
After the NMP determines which SATP to call for a specific storage device and associates the SATP with the physical paths for that storage device, the SATP implements the tasks that include the following:

- Monitors the health of each physical path.
- Reports changes in the state of each physical path.
- Performs array-specific actions necessary for storage failover. For example, for active/passive devices, it can activate passive paths.
Choosing Virtual Machine Locations

Storage location is an important factor when you want to optimize the performance of your virtual machines. There is always a trade-off between expensive storage that offers high performance and high availability and storage with lower cost and lower performance.

Storage can be divided into different tiers depending on a number of factors:

High tier – Offers high performance and high availability.
Using Cluster Services

Server clustering is a method of tying two or more servers together by using a high-speed network connection so that the group of servers functions as a single, logical server. If one of the servers fails, the other servers in the cluster continue operating, picking up the operations that the failed server was performing.

VMware tests Microsoft Cluster Service in conjunction with ESX/ESXi systems, but other cluster solutions might also work.
Using VMotion to Migrate Virtual Machines

VMotion allows administrators to manually migrate virtual machines to different hosts. Administrators can migrate a running virtual machine to a different physical server connected to the same SAN without service interruption.
3 Requirements and Installation

When you use ESX/ESXi systems with SAN storage, specific hardware and system requirements exist.

This chapter includes the following topics:

- "General ESX/ESXi SAN Requirements," on page 29
- "ESX Boot from SAN Requirements," on page 31
- "Installation and Setup Steps," on page 31

General ESX/ESXi SAN Requirements

In preparation for configuring your SAN and setting up your ESX/ESXi system to use SAN storage, review the requirements and recommendations.
Setting LUN Allocations

This topic provides some general information about how to allocate LUNs when your ESX/ESXi system works in conjunction with a SAN.

When you set LUN allocations, note the following points:

Storage provisioning – To ensure that the ESX/ESXi system recognizes the LUNs at startup time, provision all LUNs to the appropriate HBAs before you connect the SAN to the ESX/ESXi system. VMware recommends that you provision all LUNs to all ESX/ESXi HBAs at the same time.
ESX Boot from SAN Requirements

When you have SAN storage configured with your ESX system, you can place the ESX boot image on one of the LUNs on the SAN. This configuration must meet specific criteria. To enable your ESX system to boot from a SAN, your environment must meet the requirements listed in Table 3-1.

Table 3-1. Boot from SAN Requirements

ESX system requirements – ESX 3.x or later is recommended. When you use the ESX 3.
4 Setting Up SAN Storage Devices with ESX/ESXi

This section discusses many of the storage devices supported in conjunction with VMware ESX/ESXi. For each device, it lists the major known potential issues, points to vendor-specific information (if available), and includes information from VMware knowledge base articles.

NOTE Information related to specific storage devices is updated only with each release. New information might already be available.
Direct connect – The server connects to the array without using switches and with only an FC cable. For all other tests, a fabric connection is used. FC Arbitrated Loop (AL) is not supported.

Clustering – The system is tested with Microsoft Cluster Service running in the virtual machine.
Because this array is an active/passive disk array, the following general considerations apply.

- To avoid the possibility of path thrashing, the default multipathing policy is Most Recently Used, not Fixed. The ESX/ESXi system sets the default policy when it identifies the array.
- Automatic volume resignaturing is not supported for AX100 storage devices.
- SCSI 3 (SC3) set enabled
- Unique world wide name (UWN)
- SPC-2 (Decal) (SPC2) SPC-2 flag is required

The ESX/ESXi host considers any LUNs from a Symmetrix storage array with a capacity of 50MB or less as management LUNs. These LUNs are also known as pseudo or gatekeeper LUNs. These LUNs appear in the EMC Symmetrix Management Interface and should not be used to hold data.
This configuration provides two paths from each HBA, so that each element of the connection can fail over to a redundant path. The order of the paths in this configuration provides HBA and switch failover without the need to trigger SP failover. The storage processor that the preferred paths are connected to must own the LUNs. In the preceding example configuration, SP1 owns them.
Configure Storage Processor Sense Data

A DS4800 SP that runs Windows as a guest operating system should return Not Ready sense data when it is quiescent. Returning Unit Attention might cause the Windows guest to fail during a failover.

Procedure
1 Determine the index for the LNXCL host type by using the following commands in a shell window. Press Enter after each command.

SMcli.exe show hosttopology; SMcli.
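The command text above is truncated in this copy of the guide. As an illustrative sketch only (the controller addresses are placeholders, and the exact SMcli syntax should be checked against your Storage Manager version), the host-topology query is typically run once per storage processor, for example from a Windows batch script:

REM Hypothetical sketch: query each DS4800 controller for its host topology.
REM Replace <SP-A-IP> and <SP-B-IP> with your controller IP addresses.
SMcli.exe <SP-A-IP> show hosttopology;
SMcli.exe <SP-B-IP> show hosttopology;

The index reported for the LNXCL host type in this output is the value used in the subsequent steps.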
HP StorageWorks Storage Systems

This section includes configuration information for the different HP StorageWorks storage systems. For additional information, see the HP ActiveAnswers section on VMware ESX/ESXi at the HP web site.

HP StorageWorks MSA

This section lists issues of interest if you are using the active/passive version of the HP StorageWorks MSA.
7 Verify the connection by entering the following:

SHOW CONNECTIONS

The output displays a single connection with the WWNN and WWPN pair 20:02:00:a0:b8:0c:d5:56 and 20:03:00:a0:b8:0c:d5:57 and the Profile Name set to Linux:

Connection Name: ESX_CONN_1
Host WWNN = 20:02:00:a0:b8:0c:d5:56
Host WWPN = 20:03:00:a0:b8:0c:d5:57
Profile Name = Linux
Unit Offset = 0
Controller 1 Port 1 Status = Online
Controller 2 Port 1 Status = Online

NOTE Make sure WWNN = 20:02:00:a0:b8:0c:d5:56 and WWPN = 20:03:00:a0:b8:0c:d5:57 appear together in a single connection entry.
Hitachi Data Systems Storage

This section introduces the setup for Hitachi Data Systems storage. This storage solution is also available from Sun and as HP XP storage.

LUN masking – To mask LUNs on an ESX/ESXi host, use the HDS Storage Navigator software for best results.

Microcode and configurations – Check with your HDS representative for the exact configurations and microcode levels needed for interoperability with ESX/ESXi.
5 Using Boot from SAN with ESX Systems

This section discusses the benefits of boot from SAN and describes the tasks you need to perform to have the ESX boot image stored on a SAN LUN.

NOTE Skip this information if you do not plan to have your ESX host boot from a SAN.
Figure 5-1. How Boot from a SAN Works (the figure shows the host's service console and VMkernel reaching a boot disk on the storage array through an HBA and an FC switch)

NOTE When you use boot from SAN in conjunction with ESX hosts, each host must have its own boot LUN.

Benefits of Boot from SAN

Booting your ESX host from a SAN provides numerous benefits. The benefits include:

- Cheaper servers – Servers can be more dense and run cooler without internal storage.
Before You Begin

When preparing your ESX host and storage array for the boot from SAN setup, review any available information, including specific recommendations and requirements, the vendor's documentation, and so on.

Review the following information:

- The recommendations or sample setups for the type of configuration you want:
  - Single or redundant paths to the boot LUN.
  - FC switch fabric.
3 Configure the HBA BIOS for boot from SAN.
4 Boot your ESX system from the ESX installation CD.

The QLogic BIOS uses a search list of paths (wwpn:lun) to locate a boot image. If one of the wwpn:lun paths is associated with a passive path, for example, when you use CLARiiON or IBM TotalStorage DS 4000 systems, the BIOS stays with the passive path and does not locate an active path.
Enable the QLogic HBA BIOS

When configuring the QLogic HBA BIOS to boot ESX from SAN, start with enabling the QLogic HBA BIOS.

Procedure
1 Enter the BIOS Fast!UTIL configuration utility.
  a Boot the server.
  b While booting the server, press Ctrl+Q.
2 Perform the appropriate action depending on the number of HBAs.

Option: One HBA
Description: If you have only one host bus adapter (HBA), the Fast!UTIL Options page appears. Skip to Step 3.
3 Use the cursor keys to select the chosen SP and press Enter.
  - If the SP has only one LUN attached, it is selected as the boot LUN, and you can skip to Step 4.
  - If the SP has more than one LUN attached, the Select LUN page opens. Use the arrow keys to position to the selected LUN and press Enter. If any remaining storage processors show in the list, position to those entries and press C to clear the data.
4 Press Esc twice to exit.
3 From the Emulex main menu:
  a Select the same adapter.
  b Select <1> Configure Boot Devices.
  c Select the location for the Boot Entry.
  d Enter the two-digit boot device.
  e Enter the two-digit (HEX) starting LUN (for example, 08).
  f Select the boot LUN.
  g Select <1> WWPN. (Boot this device using WWPN, not DID.)
  h Select to exit and to reboot.
4 Boot into the system BIOS and move Emulex first in the boot controller sequence.
6 Managing ESX/ESXi Systems That Use SAN Storage

This section helps you manage your ESX/ESXi system, use SAN storage effectively, and perform troubleshooting. It also explains how to find information about storage devices, adapters, multipathing, and so on.
Table 6-1. Storage Adapter Information

Model – Model of the adapter.
Targets – Number of targets accessed through the adapter.
WWN – A World Wide Name formed according to Fibre Channel standards that uniquely identifies the FC adapter.
Devices – All storage devices or LUNs the adapter can access.
Paths – All paths the adapter uses to access storage devices.
Understanding Storage Device Naming

In the vSphere Client, each storage device, or LUN, is identified by several names.

Name – A friendly name that the host assigns to a device based on the storage type and manufacturer. You can modify the name using the vSphere Client.

Identifier – A universally unique identifier that the host extracts from the storage. Depending on the type of storage, the host uses different algorithms to extract the identifier.
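These names can also be inspected from the service console. A minimal sketch, assuming an ESX 4.0 service console where the esxcfg-scsidevs utility is available:

# List all logical devices known to the host, including their
# display names and unique identifiers (for example, naa.* values).
esxcfg-scsidevs -l

# Print a compact listing that maps each device to its runtime
# vmhbaN:C:T:L name.
esxcfg-scsidevs -c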
Display Storage Devices for an Adapter

For each storage adapter on your host, you can display a list of storage devices accessible just through this adapter.

Procedure
1 In Inventory, select Hosts and Clusters.
2 Select a host and click the Configuration tab.
3 In Hardware, select Storage Adapters.
4 Select the adapter from the Storage Adapters list.
5 Click Devices.
Table 6-3. Troubleshooting Fibre Channel LUN Display

Check cable connectivity – If you do not see a port, the problem could be cable connectivity. Check the cables first. Ensure that cables are connected to the ports and that a link light indicates that the connection is good. If each end of the cable does not show a good link light, replace the cable.

Check zoning.
Rescan Storage Adapters

When you make changes in your ESX/ESXi host or SAN configuration, you might need to rescan your storage adapters. You can rescan all adapters on your host. If the changes you make are isolated to a specific adapter, rescan only this adapter.

Use this procedure if you want to limit the rescan to a particular host or an adapter on the host.
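A rescan can also be issued from the service console rather than the vSphere Client. A brief sketch, assuming the esxcfg-rescan utility and using vmhba1 as a placeholder adapter name:

# Rescan a single storage adapter for new LUNs and for paths that
# have appeared or disappeared since the last scan.
esxcfg-rescan vmhba1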
Disable Sparse LUN Support

You can disable the default sparse LUN support to decrease the time ESX/ESXi needs to scan for LUNs.

The VMkernel provides sparse LUN support by default. The sparse LUN support enables the VMkernel to perform uninterrupted LUN scanning when a storage system presents LUNs with nonsequential LUN numbering, for example 0, 6, and 23.
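The setting is exposed as the VMkernel advanced option Disk.SupportSparseLUN and can also be changed from the service console. A hedged sketch, assuming the esxcfg-advcfg utility; verify the option path on your build before scripting it:

# Read the current setting (1 = sparse LUN support enabled).
esxcfg-advcfg -g /Disk/SupportSparseLUN

# Disable sparse LUN support when all LUNs are numbered
# sequentially, which shortens the scan.
esxcfg-advcfg -s 0 /Disk/SupportSparseLUN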
Requirements for Using NPIV

If you plan to enable NPIV on your virtual machines, you should be aware of certain requirements and limitations.

The following requirements and limitations exist:

- NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host's physical HBAs.
11 Select the number of virtual processors in the virtual machine from the pull-down list, and click Next.
12 Configure the virtual machine's memory size by selecting the number of megabytes, and click Next.
13 Configure network connections, and click Next.
14 Choose the type of SCSI adapter you want to use with the virtual machine.
15 Select Raw Device Mapping, and click Next.
Procedure
1 Open the Virtual Machine Properties dialog box.

Option: New virtual machine
Action: After creating the virtual machine, on the Ready to Complete New Virtual Machine page, select the Edit the virtual machine settings before submitting the creation task check box, and click Continue.

Option: Existing virtual machine
Action: Select the virtual machine from the inventory panel, and click the Edit Settings link.
You can find some information about modifying claim rules in Appendix B, "Managing Storage Paths and Multipathing Plugins," on page 79.

For detailed descriptions of the commands available to manage PSA, see the vSphere Command-Line Interface Installation and Reference Guide.
View Storage Device Paths

Use the vSphere Client to view which SATP and PSP the host uses for a specific storage device, and the status of all available paths for this storage device.

Procedure
1 Log in to the vSphere Client and select a server from the inventory panel.
2 Click the Configuration tab and click Storage in the Hardware panel.
3 Click Devices under View.
4 Click Manage Paths to open the Manage Paths dialog box.
Change the Path Selection Policy

Generally, you do not have to change the default multipathing settings your host uses for a specific storage device. However, if you want to make any changes, you can use the Manage Paths dialog box to modify a path selection policy and specify the preferred path for the Fixed policy.

Procedure
1 Open the Manage Paths dialog box either from the Datastores or Devices view.
2 Select a path selection policy.
Figure 6-1. Manual Load Balancing with Fibre Channel (the figure shows an ESX/ESXi host whose adapters HBA1 through HBA4 reach paths 1 through 4 on storage processors SP1 and SP2 through an FC switch)

For load balancing, set the preferred paths as follows. Load balancing can be performed with as few as two HBAs, although this example uses four.
Procedure
1 Select Start > Run.
2 In the command window, type regedit.exe, and click OK.
3 In the left panel hierarchy view, double-click first HKEY_LOCAL_MACHINE, then System, then CurrentControlSet, then Services, and then Disk.
4 Select the TimeOutValue and set the data value to 0x3c (hexadecimal) or 60 (decimal).
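The same change can be made from a command prompt inside the Windows guest, without navigating the registry tree. A sketch using the built-in reg.exe tool (verify against your Windows version before rolling it out; a reboot may be required for the new timeout to take effect):

REM Raise the guest SCSI disk timeout to 60 seconds (0x3c) so that
REM Windows tolerates a SAN path failover without reporting I/O errors.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f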
Disable Automatic Host Registration

When you use EMC CLARiiON or Invista arrays for storage, the hosts must register with the arrays. ESX/ESXi performs automatic host registration by sending the host's name and IP address to the array. If you prefer to perform manual registration by using storage management software, turn off the ESX/ESXi auto-registration feature.

Procedure
1 In the vSphere Client, select the host in the inventory panel.
Optimizing SAN Storage Performance

Several factors contribute to optimizing a typical SAN environment. If the environment is properly configured, the SAN fabric components (particularly the SAN switches) are only minor contributors because of their low latencies relative to servers and storage arrays. Make sure that the paths through the switch fabric are not saturated, that is, that the switch fabric is running at the highest throughput.
- When allocating LUNs or RAID groups for ESX/ESXi systems, remember that multiple operating systems use and share that resource. As a result, the performance required from each LUN in the storage subsystem can be much higher if you are working with ESX/ESXi systems than if you are using physical machines. For example, if you expect to run four I/O-intensive applications, allocate four times the performance capacity for the ESX/ESXi LUNs.
Resolve Path Thrashing

Use this procedure to resolve path thrashing. Path thrashing occurs on active-passive arrays when two hosts access the LUN through different SPs and, as a result, the LUN is never actually available.

Procedure
1 Ensure that all hosts sharing the same set of LUNs on the active-passive arrays use the same storage processor.
Procedure
1 In the vSphere Client, select the host in the inventory panel.
2 Click the Configuration tab and click Advanced Settings under Software.
3 Click Disk in the left panel and scroll down to Disk.SchedNumReqOutstanding.
4 Change the parameter value to the number of your choice and click OK.

This change can impact disk bandwidth scheduling, but experiments have shown improvements for disk-intensive workloads.
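The parameter can also be read and set from the service console. A hedged sketch using esxcfg-advcfg; the value 32 below is only a placeholder and should normally match the queue depth configured on your HBAs:

# Show how many outstanding requests a single LUN may have when two
# or more virtual machines issue I/O to it.
esxcfg-advcfg -g /Disk/SchedNumReqOutstanding

# Set the limit (placeholder value; align it with the HBA queue depth).
esxcfg-advcfg -s 32 /Disk/SchedNumReqOutstanding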
Adjust Queue Depth for an Emulex HBA

If you are not satisfied with the performance of your Emulex adapter, you can change its maximum queue depth.

Procedure
1 Verify which Emulex HBA module is currently loaded by entering the vmkload_mod -l | grep lpfcdd command.
2 Run the following command. The example shows the lpfcdd_7xx module. Use the appropriate module based on the outcome of Step 1.
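The command itself did not survive in this copy of the guide. As a hedged reconstruction (the module name lpfcdd_7xx comes from the text above, but the parameter name and the value 16 are illustrative and should be confirmed against your driver documentation), the step typically looks like:

# Reload the Emulex driver module with a larger per-LUN queue depth.
vmkload_mod lpfcdd_7xx lpfc_lun_queue_depth=16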
Snapshot Software

Snapshot software allows an administrator to make an instantaneous copy of any single virtual disk defined within the disk subsystem.

Snapshot software is available at different levels:

- ESX/ESXi hosts allow you to create snapshots of virtual machines. This software is included in the basic ESX/ESXi package.
Array-Based (Third-Party) Solution

When you use an ESX/ESXi system in conjunction with a SAN, you must decide whether array-based tools are more suitable for your particular situation.

When you consider an array-based solution, keep in mind the following points:

- Array-based solutions usually result in more comprehensive statistics. With RDM, data always takes the same path, which results in easier performance management.
Mounting VMFS Datastores with Existing Signatures

You might not have to resignature a VMFS datastore copy. You can mount a VMFS datastore copy without changing its signature.

For example, you can maintain synchronized copies of virtual machines at a secondary site as part of a disaster recovery plan. In the event of a disaster at the primary site, you can mount the datastore copy and power on the virtual machines at the secondary site.
Procedure
1 Display the datastores.
2 Right-click the datastore to unmount and select Unmount.
3 If the datastore is shared, specify which hosts should no longer access the datastore.
  a If needed, deselect the hosts where you want to keep the datastore mounted. By default, all hosts are selected.
  b Click Next.
  c Review the list of hosts from which to unmount the datastore, and click Finish.
6 Under Mount Options, select Assign a New Signature and click Next.
7 In the Ready to Complete page, review the datastore configuration information and click Finish.

What to do next

After resignaturing, you might have to do the following:

- If the resignatured datastore contains virtual machines, update references to the original VMFS datastore in the virtual machine files, including .vmx, .vmdk, .vmsd, and .vmsn.
A Multipathing Checklist

Storage arrays have different multipathing setup requirements.

Table A-1. Multipathing Setup Requirements

All storage arrays – Write cache must be disabled if not battery backed.

Topology – No single failure should cause both HBA and SP failover, especially with active-passive storage arrays.

IBM TotalStorage DS 4000 (formerly FAStT) – Host type must be LNXCL or VMware in later versions.
B Managing Storage Paths and Multipathing Plugins

Use the vSphere CLI to manage the Pluggable Storage Architecture (PSA) multipathing plugins and the storage paths assigned to them.

You can use the vSphere CLI to display all multipathing plugins available on your host. You can list any third-party MPPs, as well as your host's NMP and SATPs, and review the paths they claim. You can also define new paths and specify which multipathing plugin should claim the paths.
Procedure
u Use the esxcli corestorage claimrule list command to list claim rules.

Example B-1 shows the output of the command.

Example B-1. Sample Output of the esxcli corestorage claimrule list Command
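The sample output itself is missing from this copy of the guide. The listing below is an illustrative sketch only; the rule numbers, plugins, and match strings shown are typical defaults rather than output captured from a real host:

Rule   Class    Type       Plugin     Matches
0      runtime  transport  NMP        transport=usb
1      runtime  transport  NMP        transport=sata
101    runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
101    file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
65535  runtime  vendor     NMP        vendor=* model=*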
Example B-2. Sample Output of the vicfg-mpath Command

MPP_1
MPP_2
MPP_3
MASK_PATH
NMP

Display SATPs for the Host

Use the vSphere CLI to list all VMware NMP SATPs loaded into the system.

Procedure
u To list all VMware SATPs, run the following command.

esxcli nmp satp list

For each SATP, the command displays information that shows the type of storage array or system this SATP supports and the default PSP for any LUNs using this SATP.
Add PSA Claim Rules

Use the vSphere CLI to add a new PSA claim rule to the set of claim rules on the system. For the new claim rule to be active, you first define the rule and then load it into your system.

You add a new PSA claim rule when, for example, you load a new multipathing plugin (MPP) and need to define which paths this module should claim. You might also need to create a new claim rule if you add new paths and want an existing MPP to claim them.
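A new rule is defined with claimrule add and activated with claimrule load. A hedged sketch; the rule number 500 and the NewVend, NewMod, and NewMPP strings are placeholders, not values prescribed by this guide:

# Define rule 500: have a (hypothetical) third-party MPP claim all
# paths to arrays that report vendor "NewVend" and model "NewMod".
esxcli corestorage claimrule add -r 500 -t vendor -V NewVend -M NewMod -P NewMPP

# Load the updated rule set into the VMkernel so the rule takes effect.
esxcli corestorage claimrule load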
Delete PSA Claim Rules

Use the vSphere CLI to remove a PSA claim rule from the set of claim rules on the system.

Procedure
1 Delete a claim rule from the set of claim rules.

esxcli corestorage claimrule delete -r <rule ID>

For information on the options that the command takes, see "esxcli corestorage Command-Line Options," on page 86.

NOTE By default, the PSA claim rule 101 masks Dell array pseudo devices.
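For example, removing the hypothetical rule 500 added earlier and then reloading the rules so the deletion takes effect (the rule number is a placeholder):

# Remove claim rule 500 from the configured rule set.
esxcli corestorage claimrule delete -r 500

# Reload the claim rules into the VMkernel.
esxcli corestorage claimrule load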
Example B-5. Masking a LUN

In this example, you mask LUN 20 on targets T1 and T2, accessed through storage adapters vmhba2 and vmhba3.
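The command sequence for this example is not reproduced in this copy of the guide. A hedged sketch of the usual masking steps follows; the rule numbers 109 through 112 are placeholders, and the adapter, channel, target, and LUN values simply restate the example above:

# Add MASK_PATH claim rules for LUN 20 on each adapter and target.
esxcli corestorage claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
esxcli corestorage claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
esxcli corestorage claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
esxcli corestorage claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20

# Load the new rules, release the affected paths from their current
# plugin (repeat the unclaim for each adapter), and run the rules so
# MASK_PATH claims the paths.
esxcli corestorage claimrule load
esxcli corestorage claiming unclaim -t location -A vmhba2
esxcli corestorage claiming unclaim -t location -A vmhba3
esxcli corestorage claimrule run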
Procedure
1 To add a claim rule for a specific SATP, run the following command.

esxcli nmp satp addrule -e -o
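The flag arguments are truncated above. As a hedged illustration of a complete invocation (the SATP name, description, claim option, and vendor and model strings are placeholders; run esxcli nmp satp addrule --help on your build to confirm the exact flags):

# Have VMW_SATP_ALUA claim paths to a hypothetical ALUA-capable array.
# -e gives a description for the rule; -o passes a claim option.
esxcli nmp satp addrule -s VMW_SATP_ALUA -e "Hypothetical ALUA array" -o tpgs_on -V NewVend -M NewMod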
esxcli corestorage Command-Line Options

Some esxcli corestorage commands, for example the commands that you run to add new claim rules, remove the rules, or mask paths, require that you specify certain options.

Table B-1. esxcli corestorage Command-Line Options

-r – Use to specify the order number for the claim rule, from 0 to 65535.
-t – Use to define the set of paths for the claim rule.