vSphere Storage 6.0.1
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
3 Perform any necessary storage array modification.
Most vendors have vendor-specific documentation for setting up a SAN to work with VMware ESXi.
4 Set up the HBAs for the hosts you have connected to the SAN.
5 Install ESXi on the hosts.
6 Create virtual machines and install guest operating systems.
7 (Optional) Set up your system for VMware HA failover or for using Microsoft Cluster Service.
8 Upgrade or modify your environment as needed.
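After the hosts are installed and connected, you can confirm from the ESXi Shell that the HBAs and the LUNs they reach are visible. The following commands are a minimal sketch; adapter names such as vmhba2 vary by host.

   esxcli storage core adapter list          # list all storage adapters and their link state
   esxcli storage core adapter rescan --all  # rescan every adapter for newly presented LUNs
   esxcli storage core device list           # list the storage devices the host detects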
N-Port ID Virtualization
N-Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel HBA
port can register with the fabric using several worldwide port names (WWPNs). This allows a fabric-attached N-port to claim multiple fabric addresses. Each address appears as a unique entity on the Fibre Channel fabric.
How NPIV-Based LUN Access Works
NPIV enables a single FC HBA port to register several unique WWNs with the fabric, each of which can be
assigned to an individual virtual machine.
SAN objects, such as switches, HBAs, storage devices, or virtual machines, can be assigned World Wide Name (WWN) identifiers. WWNs uniquely identify such objects in the Fibre Channel fabric. When virtual machines have WWN assignments, they use them for all RDM traffic, so the LUNs pointed to by any of the RDMs on the virtual machine must not be masked against its WWNs. When virtual machines do not have WWN assignments, they access storage LUNs with the WWNs of their host’s physical HBAs. By using NPIV, however, a SAN administrator can monitor and route storage access on a per virtual machine basis. The following section describes how this works.
When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx) is updated to include a WWN pair (consisting of a World Wide Port Name, WWPN, and a World Wide Node Name, WWNN). As that virtual machine is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA which is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA, that is, it has its own unique identifier, the WWN pair that was assigned to the virtual machine. Each VPORT is specific to the virtual machine; when the virtual machine is powered off, the VPORT is destroyed on the host and no longer appears to the FC fabric. When a virtual machine is migrated from one host to another, the VPORT is closed on the first host and opened on the destination host.
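For illustration, the NPIV entries that this process adds to the .vmx file look similar to the following sketch. The WWN values here are placeholders; vSphere generates the actual node and port WWNs when you assign them to the virtual machine, and the exact number of port WWNs can vary.

   wwn.node = "28fa000c29000001"
   wwn.port = "28fa000c29000002,28fa000c29000003,28fa000c29000004,28fa000c29000005"
   wwn.type = "vc"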
If NPIV is enabled, WWN pairs (WWPN & WWNN) are specified for each virtual machine at creation time. When a virtual machine using NPIV is powered on, it uses each of these WWN pairs in sequence to try to discover an access path to the storage. The number of VPORTs that are instantiated equals the number of physical HBAs present on the host. A VPORT is created on each physical HBA on which a physical path is found. Each physical path is used to determine the virtual path that will be used to access the LUN. Note that HBAs that are not NPIV-aware are skipped in this discovery process, because VPORTs cannot be instantiated on them.
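As a quick check before enabling NPIV, you can list the host’s Fibre Channel adapters and their WWNs from the ESXi Shell; the exact output fields vary by driver, so treat this as a sketch:

   esxcli storage san fc list   # shows the node and port WWNs, port type, and port state of each FC HBA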
Requirements for Using NPIV
If you plan to enable NPIV on your virtual machines, you should be aware of certain requirements.
The following requirements exist:
- NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host’s physical HBAs.
- HBAs on your host must support NPIV.