Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Software-Defined Storage and Storage Policy Based Management
- About Storage Policy Based Management
- Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- Default Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
NFS 3 uses a single TCP connection for I/O. As a result, ESXi supports I/O to only one IP address or
hostname for the NFS server and does not support multiple paths. Depending on your network
infrastructure and configuration, you can use the network stack to configure multiple connections to the
storage targets. In this case, you must have multiple datastores, each using a separate network
connection between the host and the storage.
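For example, to spread NFS 3 I/O across connections, you can mount separate datastores through different server addresses. A minimal sketch using the ESXi command line; the addresses, export paths, and datastore names are examples:

# Each NFS 3 datastore uses its own single connection to the server
esxcli storage nfs add --host=192.168.1.10 --share=/export/vol1 --volume-name=NFS3-DS01
esxcli storage nfs add --host=192.168.2.10 --share=/export/vol2 --volume-name=NFS3-DS02

You can then distribute virtual machines across the two datastores to balance traffic over the separate connections.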
NFS 4.1 provides multipathing for servers that support session trunking. When trunking is
available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not
supported.
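When the server supports session trunking, you can list several server addresses at mount time. A sketch with example addresses and paths:

# Mount one NFS 4.1 datastore over two server IP addresses (session trunking)
esxcli storage nfs41 add --hosts=192.168.1.10,192.168.1.11 --share=/export/vol1 --volume-name=NFS41-DS01

ESXi then multipaths I/O for that volume across the listed addresses.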
NFS and Hardware Acceleration
Virtual disks created on NFS datastores are thin-provisioned by default. To be able to create
thick-provisioned virtual disks, you must use hardware acceleration that supports the Reserve Space
operation.
NFS 3 and NFS 4.1 support hardware acceleration that allows your host to integrate with NAS devices
and use several hardware operations that NAS storage provides. For more information, see Hardware
Acceleration on NAS Devices.
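As an illustration, creating a thick-provisioned disk on an NFS datastore with vmkfstools succeeds only when the Reserve Space operation is available; the datastore and disk paths below are examples:

# Create a 10 GB eager-zeroed thick disk on an NFS datastore;
# this requires a NAS VAAI plug-in that supports Reserve Space
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/nfs-ds01/example-vm/example-disk.vmdk

Without hardware acceleration, the command fails and only the thin format is available on the NFS datastore.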
NFS Datastores
When you create an NFS datastore, make sure to follow specific guidelines.
The NFS datastore guidelines and best practices include the following items:
- You cannot use different NFS versions to mount the same datastore on different hosts. NFS 3 and NFS 4.1 clients are not compatible and do not use the same locking protocol. As a result, accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption.
- NFS 3 and NFS 4.1 datastores can coexist on the same host.
- ESXi cannot automatically upgrade NFS version 3 to version 4.1, but you can use other conversion methods. For information, see NFS Protocols and ESXi.
- When you mount the same NFS 3 volume on different hosts, make sure that the server and folder names are identical across the hosts. If the names do not match, the hosts see the same NFS version 3 volume as two different datastores. This error might result in a failure of such features as vMotion. An example of such a discrepancy is entering filer as the server name on one host and filer.domain.com on the other. This guideline does not apply to NFS version 4.1. A command sketch after this list shows a consistent mount.
- If you use non-ASCII characters to name datastores and virtual machines, make sure that the underlying NFS server offers internationalization support. If the server does not support international characters, use only ASCII characters, or unpredictable failures might occur.
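To keep the server and folder names identical, run the same mount command on every host; the server name, export path, and datastore label below are examples:

# Run on each host that shares the NFS 3 volume, with identical arguments
esxcli storage nfs add --host=filer.domain.com --share=/vols/vol0/ds001 --volume-name=NFS3-Shared

If one host instead used --host=filer, that host would see the volume as a different datastore, and features such as vMotion between the hosts could fail.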
Firewall Configurations for NFS Storage
ESXi includes a firewall between the management interface and the network. The firewall is enabled by
default. At installation time, the ESXi firewall is configured to block incoming and outgoing traffic, except
traffic for the default services, such as NFS.
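To verify that NFS traffic is allowed, you can check and enable the related rulesets. The ruleset names below match a typical ESXi 6.5 build, but confirm them on your host:

# List firewall rulesets related to NFS
esxcli network firewall ruleset list | grep -i nfs
# Enable the client rulesets for NFS 3 and NFS 4.1 traffic
esxcli network firewall ruleset set --ruleset-id=nfsClient --enabled=true
esxcli network firewall ruleset set --ruleset-id=nfs41Client --enabled=true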