The rules fall into these categories:

Core Claim Rules
    These claim rules determine which multipathing module, the NMP, HPP, or a third-party MPP, claims the specific device.

SATP Claim Rules
    Depending on the device type, these rules assign a particular SATP submodule that provides vendor-specific multipathing management to the device.
You can use esxcli commands to add or change the core and SATP claim rules. Typically, you add claim rules to load a third-party MPP or to hide a LUN from your host, as shown in the example that follows. Changing claim rules might be necessary when the default settings for a specific device are not sufficient.
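For example, the following sequence uses the MASK_PATH plug-in to hide a LUN from the host. This is a minimal sketch: the rule number 500 and the location values (adapter vmhba1, channel 0, target 0, LUN 20) are placeholders that you replace with values from your environment.

    # List the claim rules currently defined on the host
    esxcli storage core claimrule list

    # Add a rule that assigns the MASK_PATH plug-in to the paths for the LUN
    # (rule number and location values are placeholders)
    esxcli storage core claimrule add -r 500 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH

    # Load the updated rule set into the VMkernel
    esxcli storage core claimrule load

    # Unclaim the paths so that the new rule can take effect, then run the loaded rules
    esxcli storage core claiming unclaim -t location -A vmhba1 -C 0 -T 0 -L 20
    esxcli storage core claimrule run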
For more information about the commands available to manage PSA claim rules, see the Getting Started with vSphere Command-Line Interfaces documentation.
For a list of storage arrays and corresponding SATPs and PSPs, see the Storage/SAN section of the
vSphere Compatibility Guide.
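Similarly, a SATP claim rule can assign a specific SATP and PSP to devices from a particular array. The following sketch assumes a hypothetical array whose vendor string is ExampleVendor and model string is ExampleModel; substitute the strings that your array actually reports.

    # List the SATPs loaded on the host and their default PSPs
    esxcli storage nmp satp list

    # Claim devices from the array (placeholder vendor and model strings)
    # with VMW_SATP_ALUA and use Round Robin path selection
    esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "ExampleVendor" -M "ExampleModel" -P VMW_PSP_RR -e "Example ALUA rule"

    # Verify that the rule was added
    esxcli storage nmp satp rule list -s VMW_SATP_ALUA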
Multipathing Considerations
Specific considerations apply when you manage storage multipathing plug-ins and claim rules.
The following considerations help you with multipathing:
- If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.

- When the system searches the SATP rules to locate a SATP for a given device, it searches the driver rules first, then the vendor/model rules, and finally the transport rules. If no match occurs, NMP selects a default SATP for the device.

- If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, no claim rule match occurs for this device. The device is claimed by the default SATP based on the device's transport type.

- The default PSP for all devices claimed by VMW_SATP_ALUA is VMW_PSP_MRU. VMW_PSP_MRU selects an active/optimized path as reported by VMW_SATP_ALUA, or an active/unoptimized path if no active/optimized path exists. This path is used until a better path is available (MRU). For example, if VMW_PSP_MRU is currently using an active/unoptimized path and an active/optimized path becomes available, VMW_PSP_MRU switches the current path to the active/optimized one.

- While VMW_PSP_MRU is typically selected for ALUA arrays by default, certain ALUA storage arrays need to use VMW_PSP_FIXED. To check whether your storage array requires VMW_PSP_FIXED, see the VMware Compatibility Guide or contact your storage vendor. When you use VMW_PSP_FIXED with ALUA arrays, unless you explicitly specify a preferred path, the ESXi host selects the most optimal working path and designates it as the default preferred path. If the host-selected path becomes unavailable, the host selects an alternative available path. However, if you explicitly designate the preferred path, it remains preferred regardless of its status. An example of setting a preferred path explicitly follows this list.
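As a brief illustration, the following commands show how you might verify which SATP and PSP claimed a device and, for a device that uses VMW_PSP_FIXED, explicitly designate a preferred path. The device identifier and the path name in this sketch are placeholders; substitute values from your own environment.

    # Show each device along with the SATP and PSP that claimed it
    esxcli storage nmp device list

    # Explicitly set a preferred path for a device claimed with VMW_PSP_FIXED
    # (the device identifier and path name below are placeholders)
    esxcli storage nmp psp fixed deviceconfig set --device naa.xxxxxxxxxxxxxxxx --path vmhba2:C0:T1:L5

    # Confirm the configured preferred path
    esxcli storage nmp psp fixed deviceconfig get --device naa.xxxxxxxxxxxxxxxx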