Table Of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Recommendations and Restrictions
- Configuring iSCSI Parameters for Adapters
- Set Up Independent Hardware iSCSI Adapters
- Configure Dependent Hardware iSCSI Adapters
- Configure the Software iSCSI Adapter
- Configure iSER Adapters
- Modify General Properties for iSCSI or iSER Adapters
- Setting Up Network for iSCSI and iSER
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Upgrading VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Failovers with Fibre Channel
- Host-Based Failover with iSCSI
- Array-Based Failover with iSCSI
- Path Failover and Virtual Machines
- Pluggable Storage Architecture and Path Management
- Viewing and Managing Paths
- Using Claim Rules
- Scheduling Queues for Virtual Machine I/Os
- Raw Device Mapping
- Storage Policy Based Management
- Virtual Machine Storage Policies
- Workflow for Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- About Rules and Rule Sets
- Creating and Managing VM Storage Policies
- About Storage Policy Components
- Storage Policies and Virtual Machines
- Default Storage Policies
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Troubleshooting Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs
an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a
storage system requires just a few protocol endpoints. A single protocol endpoint can connect to
hundreds or thousands of virtual volumes.
On the storage side, a storage administrator configures protocol endpoints, one or several per storage
container. The protocol endpoints are a part of the physical storage fabric. The storage system exports
the protocol endpoints with associated storage containers through the storage provider. After you map the
storage container to a Virtual Volumes datastore, the ESXi host discovers the protocol endpoints and they
become visible in the vSphere Client. The protocol endpoints can also be discovered during a storage
rescan. Multiple hosts can discover and mount the protocol endpoints.
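To confirm that a host has discovered the protocol endpoints, you can list them from the ESXi Shell. The following command is a minimal illustration; the exact output columns vary by ESXi release and by array vendor.

esxcli storage vvol protocolendpoint list

The output typically identifies each protocol endpoint, the array it belongs to, whether it is SCSI-based or NFS-based, and whether it is currently accessible to the host.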
In the vSphere Client, the list of available protocol endpoints looks similar to the host storage devices list.
Different storage transports can be used to expose the protocol endpoints to ESXi. When the SCSI-based
transport is used, the protocol endpoint represents a proxy LUN defined by a T10-based LUN WWN. For
the NFS protocol, the protocol endpoint is a mount point, such as an IP address and a share name. You
can configure multipathing on the SCSI-based protocol endpoint, but not on the NFS-based protocol
endpoint. No matter which protocol you use, the storage array can provide multiple protocol endpoints for
availability purposes.
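Because a SCSI-based protocol endpoint appears to the host as a storage device, you can examine its paths with the same esxcli commands that apply to other devices. In the sketch below, <pe-device-id> is a placeholder for the device identifier that your host reports for the protocol endpoint.

esxcli storage core path list -d <pe-device-id>
esxcli storage nmp device list -d <pe-device-id>

The first command shows the available paths to the protocol endpoint; the second shows the path selection policy currently assigned to it.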
Protocol endpoints are managed per array. ESXi and vCenter Server assume that all protocol endpoints
reported for an array are associated with all containers on that array. For example, if an array has two
containers and three protocol endpoints, ESXi assumes that virtual volumes on both containers can be
bound to all three protocol endpoints.
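To review this relationship from the host, you can list the storage containers and the VASA storage providers that the host knows about. These commands are illustrative; output details depend on the ESXi release.

esxcli storage vvol storagecontainer list
esxcli storage vvol vasaprovider list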
Binding and Unbinding Virtual Volumes to Protocol Endpoints
At the time of creation, a virtual volume is a passive entity and is not immediately ready for I/O. To access
the virtual volume, ESXi or vCenter Server sends a bind request.
The storage system replies with a protocol endpoint ID that becomes an access point to the virtual
volume. The protocol endpoint accepts all I/O requests to the virtual volume. This binding exists until
ESXi sends an unbind request for the virtual volume.
For later bind requests on the same virtual volume, the storage system can return different protocol
endpoint IDs.
When receiving concurrent bind requests to a virtual volume from multiple ESXi hosts, the storage system
can return the same or different endpoint bindings to each requesting ESXi host. In other words, the
storage system can bind different concurrent hosts to the same virtual volume through different endpoints.
The unbind operation removes the I/O access point for the virtual volume. The storage system might unbind the virtual volume from its protocol endpoint immediately, after a delay, or take some other action. A bound virtual volume cannot be deleted until it is unbound.
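ESXi issues bind and unbind requests automatically through the VASA storage provider, so no manual action is normally required. For troubleshooting, ESXi 6.x releases also provide a command that forces the host to unbind all virtual volumes from all storage providers; check your release for availability and use it with caution, because it affects every virtual volume binding on the host.

esxcli storage vvol daemon unbindall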
Virtual Volumes Datastores
A Virtual Volumes (VVol) datastore represents a storage container in vCenter Server and the
vSphere Client.