6.0.1
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
After you register a storage provider associated with the storage system, vCenter Server discovers all
configured storage containers along with their storage capability profiles, protocol endpoints, and other
attributes. A single storage container can export multiple capability profiles. As a result, virtual machines
with diverse needs and different storage policy settings can be a part of the same storage container.
Initially, none of the discovered storage containers are connected to any specific host, and you cannot see them in
the vSphere Web Client. To mount a storage container, you must map it to a virtual datastore.
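The relationships described above can be sketched as a small data model. This is purely illustrative, assuming hypothetical names such as `StorageContainer` and `VirtualDatastore`; it is not a VMware API:

```python
# Illustrative model of the container-to-datastore relationship;
# all class and field names here are hypothetical, not VMware APIs.
from dataclasses import dataclass


@dataclass
class StorageContainer:
    name: str
    capability_profiles: list  # a single container can export several profiles


@dataclass
class VirtualDatastore:
    # The datastore creation wizard maps exactly one container
    # to one virtual datastore.
    container: StorageContainer


# One container exports two capability profiles, so VMs with
# different storage policy settings can share it.
container = StorageContainer("gold-pool", ["thin-replicated", "thick-encrypted"])
vvol_ds = VirtualDatastore(container)

print(vvol_ds.container.name)              # gold-pool
print(len(container.capability_profiles))  # 2
```

The key point the model captures is the cardinality: many capability profiles per container, but a one-to-one mapping between a container and the virtual datastore that represents it.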
Protocol Endpoints
Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual
volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to
communicate with virtual volumes and the virtual disk files that virtual volumes encapsulate. ESXi uses
protocol endpoints to establish a data path on demand from virtual machines to their respective virtual
volumes.
Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs
an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a storage
system requires only a small number of protocol endpoints. A single protocol endpoint can connect to
hundreds or thousands of virtual volumes.
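The binding and fan-out behavior can be illustrated with a minimal sketch. The `ProtocolEndpoint` class and its methods are invented for illustration and do not reflect the ESXi implementation:

```python
# Minimal sketch of protocol-endpoint fan-out; the class and method
# names are hypothetical, not part of any VMware interface.
class ProtocolEndpoint:
    """A single logical I/O proxy that many virtual volumes bind to."""

    def __init__(self, name):
        self.name = name
        self.bindings = {}  # vvol id -> backing virtual volume

    def bind(self, vvol_id, vvol):
        # Each virtual volume is bound to a specific protocol endpoint.
        self.bindings[vvol_id] = vvol

    def submit_io(self, vvol_id, data):
        # The endpoint directs the I/O to the appropriate bound volume.
        self.bindings[vvol_id].append(data)


pe = ProtocolEndpoint("pe-0")

# One endpoint can serve hundreds or thousands of virtual volumes.
volumes = {i: [] for i in range(1000)}
for vvol_id, vvol in volumes.items():
    pe.bind(vvol_id, vvol)

pe.submit_io(42, b"block")
print(len(pe.bindings))  # 1000
print(volumes[42])       # [b'block']
```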
On the storage side, a storage administrator configures protocol endpoints, one or several per storage
container. Protocol endpoints are a part of the physical storage fabric and are exported, along with
associated storage containers, by the storage system through a storage provider. After you map a storage
container to a virtual datastore, protocol endpoints are discovered by ESXi and become visible in the
vSphere Web Client. Protocol endpoints can also be discovered during a storage rescan.
In the vSphere Web Client, the list of available protocol endpoints looks similar to the host storage devices
list. Different storage transports can be used to expose protocol endpoints to ESXi. When the SCSI-based
transport is used, the protocol endpoint represents a proxy LUN defined by a T10-based LUN WWN. For
the NFS protocol, the protocol endpoint is a mount point, such as an IP address (or DNS name) and a share
name. You can configure multipathing on a SCSI-based protocol endpoint, but not on an NFS-based protocol
endpoint. However, no matter which protocol you use, a storage array can provide multiple protocol
endpoints for availability purposes.
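The per-transport differences above can be summarized in a short sketch. The types, field names, and example identifier values (the NAA WWN and the share path) are made up for illustration:

```python
# Sketch of how a protocol endpoint is identified per transport.
# All names and example values here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ScsiEndpoint:
    lun_wwn: str            # T10-based LUN WWN identifying the proxy LUN
    multipath: bool = True  # multipathing is supported on SCSI-based PEs


@dataclass
class NfsEndpoint:
    server: str              # IP address or DNS name
    share: str               # share name
    multipath: bool = False  # multipathing is not supported on NFS-based PEs


scsi_pe = ScsiEndpoint("naa.600a098038303053743f4a5a70424c37")  # example value
nfs_pe = NfsEndpoint("192.0.2.10", "/vvol-pe")                  # example value

print(scsi_pe.multipath, nfs_pe.multipath)  # True False
```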
Virtual Datastores
A virtual datastore represents a storage container in vCenter Server and the vSphere Web Client.
After vCenter Server discovers storage containers exported by storage systems, you must mount them to be
able to use them. You use the datastore creation wizard in the vSphere Web Client to map a storage
container to a virtual datastore. The virtual datastore that you create corresponds directly to the specific
storage container and becomes the container's representation in vCenter Server and the vSphere Web Client.
From a vSphere administrator's perspective, the virtual datastore is similar to any other datastore and is used
to hold virtual machines. Like other datastores, the virtual datastore can be browsed and lists virtual
volumes by virtual machine name. Like traditional datastores, the virtual datastore supports unmounting
and mounting. However, operations such as upgrade and resize are not applicable to the virtual datastore.
The virtual datastore capacity is configurable by the storage administrator outside of vSphere.
You can use virtual datastores with traditional VMFS and NFS datastores and with Virtual SAN.
Note: The size of a virtual volume must be a multiple of 1 MB, with a minimum size of 1 MB. As a result,
all virtual disks that you provision on a virtual datastore, or migrate to it from any other datastore, should
be an even multiple of 1 MB in size. If the virtual disk that you migrate to the virtual
datastore is not an even multiple of 1 MB, extend the disk manually to the nearest even multiple of 1 MB.
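The sizing rule in the note reduces to simple arithmetic: round the disk size up to the nearest 1 MB multiple, with a 1 MB floor. A minimal sketch (the function name is ours, not a VMware tool):

```python
# The note's sizing rule as arithmetic: round a disk size up to the
# nearest multiple of 1 MB before migrating it to a virtual datastore.
MB = 1024 * 1024


def round_up_to_mb(size_bytes: int) -> int:
    """Smallest multiple of 1 MB that is >= size_bytes (minimum 1 MB)."""
    return max(MB, (size_bytes + MB - 1) // MB * MB)


print(round_up_to_mb(10 * MB))      # 10485760 (already aligned, unchanged)
print(round_up_to_mb(10 * MB + 1))  # 11534336 (rounded up to 11 MB)
print(round_up_to_mb(0))            # 1048576  (minimum size is 1 MB)
```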