The storage provider delivers information from the underlying storage container, and the storage container capabilities appear in vCenter Server and the vSphere Client. In turn, the storage provider communicates virtual machine storage requirements, which you can define in the form of a storage policy, to the storage layer. This integration process ensures that a virtual volume created in the storage layer meets the requirements outlined in the policy.
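On the vSphere side, these policies are managed through the Storage Policy Based Management (SPBM) endpoint of vCenter Server. The following fragment is a minimal sketch of reading them programmatically, assuming pyVmomi 6.7 or later with its pbm bindings; the vCenter hostname and credentials are placeholders.

```python
# Sketch: list VM storage policies through the SPBM (/pbm/sdk) endpoint of
# vCenter Server. Assumes pyVmomi >= 6.7; hostname/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import SoapStubAdapter, VmomiSupport, pbm

ctx = ssl._create_unverified_context()          # lab use only; skips cert checks
si = SmartConnect(host="vc.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

# Reuse the vCenter session cookie against the PBM endpoint.
session_cookie = si._stub.cookie.split('"')[1]
VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie
pbm_stub = SoapStubAdapter(host="vc.example.com", path="/pbm/sdk",
                           version="pbm.version.version1", sslContext=ctx)
profile_manager = pbm.ServiceInstance(
    "ServiceInstance", pbm_stub).RetrieveContent().profileManager

# Fetch the requirement (VM-side) storage profiles and print their names.
profile_ids = profile_manager.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
for profile in profile_manager.PbmRetrieveContent(profileIds=profile_ids):
    print(profile.name)
```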
Typically, vendors are responsible for supplying storage providers that can integrate with vSphere and support Virtual Volumes. Every storage provider must be certified by VMware and properly deployed. For information about deploying and upgrading the Virtual Volumes storage provider to a version compatible with the current ESXi release, contact your storage vendor.
After you deploy the storage provider, you must register it in vCenter Server so that it can communicate with vSphere through the Storage Monitoring Service (SMS).
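You can register the provider in the vSphere Client (Configure > Storage Providers) or through the SMS API. Continuing the sketch above, the fragment below shows one possible registration call using pyVmomi's sms bindings; the provider name, VASA URL, credentials, and the SMS namespace version string are placeholders, and the exact binding names can vary between pyVmomi releases.

```python
# Sketch: register a VASA storage provider through the SMS (/sms/sdk) endpoint
# and list the registered providers. All values below are placeholders.
from pyVmomi import sms

sms_stub = SoapStubAdapter(host="vc.example.com", path="/sms/sdk",
                           version="sms.version.version14",  # check your release
                           sslContext=ctx)
sms_stub.cookie = si._stub.cookie               # reuse the vCenter session
storage_manager = sms.ServiceInstance("ServiceInstance",
                                      sms_stub).QueryStorageManager()

spec = sms.provider.VasaProviderSpec(
    name="array-vasa-provider",                 # placeholder display name
    url="https://array.example.com:8443/vasa/version.xml",  # placeholder URL
    username="vasa-admin", password="secret")
storage_manager.RegisterProvider_Task(spec)     # returns an SMS task to wait on

# After the task completes, the provider appears both here and in the
# vSphere Client under Configure > Storage Providers.
for provider in storage_manager.QueryProvider():
    info = provider.QueryProviderInfo()
    print(info.name, info.status)
```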
Storage Containers
Unlike traditional LUN and NFS-based storage, the Virtual Volumes functionality does not require preconfigured volumes on the storage side. Instead, Virtual Volumes uses a storage container, a pool of raw storage capacity or an aggregation of storage capabilities that a storage system can provide to virtual volumes.
A storage container is a part of the logical storage fabric and is a logical unit of the underlying hardware.
The storage container logically groups virtual volumes based on management and administrative needs.
For example, the storage container can contain all virtual volumes created for a tenant in a multitenant
deployment, or a department in an enterprise deployment. Each storage container serves as a virtual
volume store and virtual volumes are allocated out of the storage container capacity.
Typically, a storage administrator on the storage side defines storage containers. The number of storage
containers, their capacity, and their size depend on a vendor-specific implementation. At least one
container for each storage system is required.
Note: A single storage container cannot span different physical arrays.
After you register a storage provider associated with the storage system, vCenter Server discovers all
configured storage containers along with their storage capability profiles, protocol endpoints, and other
attributes. A single storage container can export multiple capability profiles. As a result, virtual machines
with diverse needs and different storage policy settings can be a part of the same storage container.
Initially, no discovered storage container is connected to any specific host, and the containers are not visible in the vSphere Client. To mount a storage container, you must map it to a Virtual Volumes datastore.
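In the vSphere Client, this mapping is done with the New Datastore wizard. Programmatically, each host's HostDatastoreSystem exposes a CreateVvolDatastore method; the sketch below reuses the vCenter connection (si) from the earlier fragments, and the datastore name and storage container ID (scId) are placeholders that would normally come from the array through the VASA provider.

```python
# Sketch: map a discovered storage container to a Virtual Volumes datastore on
# one host. The scId value is a placeholder; real IDs are reported by the array.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]                    # first host in the inventory, for brevity
view.DestroyView()

spec = vim.host.DatastoreSystem.VvolDatastoreSpec(
    name="vvol-datastore-01",          # placeholder datastore name
    scId="vvol:a1b2c3d4e5f60718-a1b2c3d4e5f60718")  # placeholder container ID
ds = host.configManager.datastoreSystem.CreateVvolDatastore(spec)
print(ds.summary.name, ds.summary.type)   # reports "VVOL" for such datastores
```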
Protocol Endpoints
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to
virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol
endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate.
ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their
respective virtual volumes.