vSphere Storage 6.7

Table of Contents
- About vSphere Storage
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
  - ESXi iSCSI SAN Recommendations and Restrictions
  - Configuring iSCSI Parameters for Adapters
  - Set Up Independent Hardware iSCSI Adapters
  - Configure Dependent Hardware iSCSI Adapters
  - Configure the Software iSCSI Adapter
  - Configure iSER Adapters
  - Modify General Properties for iSCSI or iSER Adapters
  - Setting Up Network for iSCSI and iSER
  - Using Jumbo Frames with iSCSI
  - Configuring Discovery Addresses for iSCSI Adapters
  - Configuring CHAP Parameters for iSCSI Adapters
  - Configuring Advanced Parameters for iSCSI
  - iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
  - Storage Device Characteristics
  - Understanding Storage Device Naming
  - Storage Rescan Operations
  - Identifying Device Connectivity Problems
  - Enable or Disable the Locator LED on Storage Devices
  - Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
  - Types of Datastores
  - Understanding VMFS Datastores
  - Upgrading VMFS Datastores
  - Understanding Network File System Datastores
  - Creating Datastores
  - Managing Duplicate VMFS Datastores
  - Increasing VMFS Datastore Capacity
  - Administrative Operations for Datastores
  - Set Up Dynamic Disk Mirroring
  - Collecting Diagnostic Information for ESXi Hosts on a Storage Device
  - Checking Metadata Consistency with VOMA
  - Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
  - Failovers with Fibre Channel
  - Host-Based Failover with iSCSI
  - Array-Based Failover with iSCSI
  - Path Failover and Virtual Machines
  - Pluggable Storage Architecture and Path Management
  - Viewing and Managing Paths
  - Using Claim Rules
  - Scheduling Queues for Virtual Machine I/Os
- Raw Device Mapping
- Storage Policy Based Management
  - Virtual Machine Storage Policies
  - Workflow for Virtual Machine Storage Policies
  - Populating the VM Storage Policies Interface
  - About Rules and Rule Sets
  - Creating and Managing VM Storage Policies
  - About Storage Policy Components
  - Storage Policies and Virtual Machines
  - Default Storage Policies
- Using Storage Providers
- Working with Virtual Volumes
  - About Virtual Volumes
  - Virtual Volumes Concepts
  - Virtual Volumes and Storage Protocols
  - Virtual Volumes Architecture
  - Virtual Volumes and VMware Certificate Authority
  - Snapshots and Virtual Volumes
  - Before You Enable Virtual Volumes
  - Configure Virtual Volumes
  - Provision Virtual Machines on Virtual Volumes Datastores
  - Virtual Volumes and Replication
  - Best Practices for Working with vSphere Virtual Volumes
  - Troubleshooting Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
  - Hardware Acceleration Benefits
  - Hardware Acceleration Requirements
  - Hardware Acceleration Support Status
  - Hardware Acceleration for Block Storage Devices
  - Hardware Acceleration on NAS Devices
  - Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
  - vmkfstools Command Syntax
  - The vmkfstools Command Options
    - -v Suboption
  - File System Options
  - Virtual Disk Options
    - Supported Disk Formats
    - Creating a Virtual Disk
    - Initializing a Virtual Disk
    - Inflating a Thin Virtual Disk
    - Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
    - Removing Zeroed Blocks
    - Deleting a Virtual Disk
    - Renaming a Virtual Disk
    - Cloning or Converting a Virtual Disk or RDM
    - Extending a Virtual Disk
    - Upgrading Virtual Disks
    - Creating a Virtual Compatibility Mode Raw Device Mapping
    - Creating a Physical Compatibility Mode Raw Device Mapping
    - Listing Attributes of an RDM
    - Displaying Virtual Disk Geometry
    - Checking and Repairing Virtual Disks
    - Checking Disk Chain for Consistency
  - Storage Device Options
4 Double-click TimeOutValue.
5 Set the value data to 0x3c (hexadecimal) or 60 (decimal) and click OK.
After you make this change, Windows waits at least 60 seconds for delayed disk operations to finish before it generates errors.
6 Reboot the guest operating system for the change to take effect.
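The same change can also be scripted instead of made through the Registry Editor. A minimal sketch, assuming the timeout value lives at its standard location under HKLM\SYSTEM\CurrentControlSet\Services\Disk; run it from an elevated command prompt inside the guest:

   reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f

A reboot of the guest is still required for the new value to take effect.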
Pluggable Storage Architecture and Path Management

This topic introduces the key concepts behind ESXi storage multipathing.
Pluggable Storage Architecture (PSA)
To manage multipathing, ESXi uses a special VMkernel layer, the Pluggable Storage Architecture (PSA). The PSA is an open and modular framework that coordinates the various software modules responsible for multipathing operations. These modules include the generic multipathing modules that VMware provides, the NMP and the HPP, as well as third-party MPPs.
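To see which plug-ins are registered with the PSA on a host, you can list them with esxcli; multipathing modules appear with the plug-in class MP. A minimal sketch, with output columns that vary slightly by release:

   esxcli storage core plugin list

On most hosts the list includes at least the NMP; hosts that use the HPP or a third-party MPP show those modules as well.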
Native Multipathing Plug-in (NMP)
The NMP is the VMkernel multipathing module that ESXi provides by default. The NMP associates physical paths with a specific storage device and provides a default path selection algorithm based on the array type. The NMP is extensible and manages additional submodules, called Path Selection Plug-ins (PSPs) and Storage Array Type Plug-ins (SATPs). PSPs and SATPs can be provided by VMware or by a third party.
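To see how the NMP manages a particular device, including which SATP claimed it and which PSP it uses, list the device through the nmp namespace. The device identifier below is hypothetical:

   esxcli storage nmp device list -d naa.600508b4000f02cd0000b00000320000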
Path Selection Plug-ins (PSPs)
The PSPs are submodules of the VMware NMP. PSPs are responsible for selecting a physical path for I/O requests.
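For example, to list the PSPs available on a host and switch a device to the round robin policy, you can use the following commands. The device identifier is hypothetical:

   esxcli storage nmp psp list
   esxcli storage nmp device set -d naa.600508b4000f02cd0000b00000320000 -P VMW_PSP_RR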
Storage Array Type Plug-ins (SATPs)
The SATPs are submodules of the VMware NMP. SATPs are responsible for array-specific operations. An SATP can determine the state of a particular array-specific path, perform path activation, and detect path errors.
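To view the SATPs loaded on the host, together with the default PSP that each one assigns, list them; the default association can also be changed per SATP, which affects devices claimed afterward:

   esxcli storage nmp satp list
   esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR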
Multipathing Plug-ins (MPPs)
The PSA offers a collection of VMkernel APIs that third parties can use to create their own multipathing plug-ins (MPPs). These modules provide load-balancing and failover functionality for a particular storage array. MPPs can be installed on the ESXi host and can run in addition to the VMware native modules, or as their replacement.
VMware High-Performance Plug-in (HPP)
The HPP replaces the NMP for high-speed devices, such as NVMe PCIe flash. The HPP improves the performance of ultra-fast flash devices that are installed locally on your ESXi host. The plug-in supports only single-pathed devices.
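A device is moved from the NMP to the HPP through a PSA claim rule (see Claim Rules below). A hedged sketch, assuming a local NVMe device and an unused rule ID; the device must be reclaimed, or the host rebooted, before the rule applies:

   esxcli storage core claimrule add -r 300 -t vendor -V NVMe -M "*" -P HPP
   esxcli storage core claimrule load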
Claim Rules
The PSA uses claim rules to determine whether an MPP or the NMP owns the paths to a particular storage device. The NMP has its own set of claim rules, which match the device with a specific SATP and PSP.
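To inspect which plug-in claims which paths, list the PSA claim rules; the output shows the matching criteria (vendor, model, transport, and so on) and the owning plug-in for each rule:

   esxcli storage core claimrule list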