When a virtual machine has a WWN assigned to it, the virtual machine’s configuration file (.vmx) is
updated to include a WWN pair. The WWN pair consists of a World Wide Port Name (WWPN) and a
World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel instantiates a
virtual port (VPORT) on the physical HBA, and this VPORT is used to access the LUN. The VPORT is a virtual HBA
that appears to the FC fabric as a physical HBA. The VPORT has its own unique identifier, the WWN pair
that was assigned to the virtual machine.
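For illustration, the WWN pair is stored in the virtual machine's .vmx file as configuration entries similar to the following. The key names shown (wwn.node and wwn.port) and the hexadecimal values are illustrative; the exact entries are generated when you assign WWNs to the virtual machine.

   wwn.node = "2815000c29000012"
   wwn.port = "2815000c29000013"

Because the configuration file is managed by vCenter Server, assign or change WWNs through the vSphere Client rather than editing these entries directly.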
Each VPORT is specific to the virtual machine. When the virtual machine is powered off, the VPORT is
destroyed on the host and no longer appears to the FC fabric. When a virtual machine is migrated
from one host to another, the VPORT closes on the first host and opens on the destination host.
If NPIV is enabled, WWN pairs (WWPN and WWNN) are specified for each virtual machine at creation time.
When a virtual machine using NPIV is powered on, it uses each of these WWN pairs in sequence to
discover an access path to the storage. The number of VPORTs that are instantiated equals the number
of physical HBAs present on the host. A VPORT is created on each physical HBA on which a physical path is
found. Each physical path determines the virtual path that is used to access the LUN. HBAs that are
not NPIV-aware are skipped in this discovery process because VPORTs cannot be instantiated on them.
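Because the number of VPORTs depends on the physical HBAs present on the host, it can help to list those adapters and their WWNs before enabling NPIV. As a minimal check, assuming you have access to the ESXi Shell, you can list the host's Fibre Channel adapters:

   esxcli storage san fc list

The output shows each vmhba with attributes such as its Node Name, Port Name, and Port State, which you can compare against your fabric zoning. The exact fields vary by ESXi release and driver.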
Requirements for Using NPIV
If you plan to enable NPIV on your virtual machines, be aware of the following requirements:
- NPIV can be used only for virtual machines with RDM disks. Virtual machines with regular virtual disks use the WWNs of the host’s physical HBAs.
- HBAs on your host must support NPIV. For information, see the VMware Compatibility Guide and refer to your vendor documentation.
- Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.
- If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.
- Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.
- The switches in the fabric must be NPIV-aware.
- When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and target ID.
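After zoning and LUN masking are in place, one way to verify that the host's physical HBAs can reach the intended LUNs is to review the paths the host currently sees. This is a general ESXi Shell check rather than an NPIV-specific command, and the adapters and device identifiers in its output are specific to your environment:

   esxcli storage core path list

Each path entry identifies the adapter (vmhba), the target, and the LUN, so you can confirm that every physical HBA that will carry NPIV traffic has a path to each LUN used by the NPIV-enabled virtual machines.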
NPIV Capabilities and Limitations
Learn about specific capabilities and limitations of using NPIV with ESXi.