Each PSP enables and enforces a corresponding path selection policy.
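As a quick orientation, the PSPs loaded on a host can be listed from the ESXi Shell. This is an illustrative command only, not a required configuration step:

# List the Path Selection Plug-ins available on this host and their descriptions.
esxcli storage nmp psp list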
VMW_PSP_MRU — Most Recently Used (VMware)
The Most Recently Used (VMware) policy is enforced by VMW_PSP_MRU.
It selects the first working path discovered at system boot time. When the
path becomes unavailable, the host selects an alternative path. The host
does not revert to the original path when that path becomes available. The
Most Recently Used policy does not use the preferred path setting. This
policy is the default for most active-passive storage devices.
The VMW_PSP_MRU supports path ranking. To assign ranks to individual
paths, use the esxcli storage nmp psp generic pathconfig set
command. For details, see the VMware knowledge base article at
http://kb.vmware.com/kb/2003468 and the vSphere Command-Line
Interface Reference documentation.
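The following sketch shows how a rank might be assigned. The device ID and the runtime path name are placeholders that you replace with values reported by your own host:

# Show the paths that the NMP has claimed for a device and their current configuration.
esxcli storage nmp path list --device <device_id>

# Assign rank 1 to one of those paths so that VMW_PSP_MRU prefers it over lower-ranked paths.
esxcli storage nmp psp generic pathconfig set --config "rank=1" --path <runtime_path_name>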
VMW_PSP_FIXED — Fixed (VMware)
The Fixed (VMware) policy is implemented by VMW_PSP_FIXED. The
policy uses the designated preferred path. If the preferred path is not
assigned, the policy selects the first working path discovered at system
boot time. If the preferred path becomes unavailable, the host selects an
alternative available path. The host returns to the previously defined
preferred path when it becomes available again.
Fixed is the default policy for most active-active storage devices.
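As an illustration only, the following commands claim a device with the Fixed policy and designate a preferred path. The device ID and path name are placeholders for values from your environment:

# Claim the device with the Fixed path selection policy.
esxcli storage nmp device set --device <device_id> --psp VMW_PSP_FIXED

# Designate the preferred path that the Fixed policy uses whenever it is available.
esxcli storage nmp psp fixed deviceconfig set --device <device_id> --path <runtime_path_name>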
VMW_PSP_RR — Round Robin (VMware)
VMW_PSP_RR enables the Round Robin (VMware) policy. The policy uses
an automatic path selection algorithm rotating through the configured paths.
Round Robin is the default policy for many arrays. The policy can be used
with both active-active and active-passive arrays to implement load
balancing across paths for different LUNs. With active-passive arrays, the
policy uses all active paths. With active-active arrays, the policy uses all
available paths.
VMW_PSP_RR has configurable options that you can modify on the
command line. To set these parameters, use the esxcli storage nmp
psp roundrobin command. For details, see the vSphere Command-Line
Interface Reference documentation.
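For example, the path-switching frequency can be tuned per device. The commands below are a sketch only, and the device ID is a placeholder:

# Display the current Round Robin settings for a device.
esxcli storage nmp psp roundrobin deviceconfig get --device <device_id>

# Switch to a different path after every I/O instead of after the default number of I/Os.
esxcli storage nmp psp roundrobin deviceconfig set --device <device_id> --type iops --iops 1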
VMware SATPs
Storage Array Type Plug-ins (SATPs) are responsible for array-specific operations. The SATPs are
submodules of the VMware NMP.
ESXi offers an SATP for every type of array that VMware supports. ESXi also provides default SATPs that
support non-specific active-active, active-passive, ALUA, and local devices.
Each SATP accommodates special characteristics of a certain class of storage arrays. The SATP can
perform the array-specific operations required to detect path state and to activate an inactive path. As a
result, the NMP module itself can work with multiple storage arrays without having to be aware of the
storage device specifics.
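To see this layering on a host, the loaded SATPs and the SATP and PSP combination claimed for each device can be inspected with esxcli. This is an informational example only:

# List the SATPs loaded on the host, together with their default PSPs.
esxcli storage nmp satp list

# Show, for every NMP-claimed device, which SATP and PSP are in use.
esxcli storage nmp device list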