Table Of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
NFS Datastore Guidelines
- To use NFS 4.1, upgrade your vSphere environment to version 6.x. You cannot mount an NFS 4.1 datastore to hosts that do not support version 4.1.
- You cannot use different NFS versions to mount the same datastore. NFS 3 and NFS 4.1 clients do not use the same locking protocol. As a result, accessing the same virtual disks from two incompatible clients might result in incorrect behavior and cause data corruption.
- NFS 3 and NFS 4.1 datastores can coexist on the same host.
- vSphere does not support datastore upgrades from NFS version 3 to version 4.1.
- When you mount the same NFS 3 volume on different hosts, make sure that the server and folder names are identical across the hosts. If the names do not match, the hosts see the same NFS version 3 volume as two different datastores. This error might result in a failure of such features as vMotion. An example of such a discrepancy is entering filer as the server name on one host and filer.domain.com on the other. This guideline does not apply to NFS version 4.1. See the example after this list.
- If you use non-ASCII characters to name datastores and virtual machines, make sure that the underlying NFS server offers internationalization support. If the server does not support international characters, use only ASCII characters, or unpredictable failures might occur.
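The following commands show one way to apply the server-name guideline from the ESXi Shell. This is a minimal sketch: the server name nfs-filer.example.com, the export /export/vmstore, and the datastore name NFS-DS01 are placeholder values, not names from this guide. Run the same command, with identical server and folder names, on every host that mounts the volume:

   esxcli storage nfs add --host=nfs-filer.example.com --share=/export/vmstore --volume-name=NFS-DS01
   esxcli storage nfs list

If one host instead used the short name nfs-filer for the --host value, that host would treat the volume as a different datastore, which can cause features such as vMotion to fail.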
NFS Protocols and ESXi
ESXi supports NFS protocols version 3 and 4.1. To support both versions, ESXi uses two different NFS clients.
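One visible consequence of the two separate clients is that each NFS version is also managed through its own esxcli namespace in the ESXi Shell. This is an illustration only, not an exhaustive list of management options:

   esxcli storage nfs list      # datastores mounted through the NFS 3 client
   esxcli storage nfs41 list    # datastores mounted through the NFS 4.1 client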
NFS Protocol Version 3
vSphere supports NFS version 3 over TCP. When you use this version, the following considerations apply:
- With NFS version 3, storage traffic is transmitted in an unencrypted format across the LAN. Because of this limited security, use NFS storage on trusted networks only and isolate the traffic on separate physical switches. You can also use a private VLAN.
- NFS 3 uses only one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or hostname for the NFS server, and does not support multiple paths. Depending on your network infrastructure and configuration, you can use the network stack to configure multiple connections to the storage targets. In this case, you must have multiple datastores, with each datastore using separate network connections between the host and the storage. See the sketch after this list.
- With NFS 3, ESXi does not support the delegate user functionality that enables access to NFS volumes by using nonroot credentials. You must ensure that each host has root access to the volume.
- NFS 3 supports hardware acceleration that allows your host to integrate with NAS devices and use several hardware operations that NAS storage provides. For more information, see “Hardware Acceleration on NAS Devices,” on page 265.
- When hardware acceleration is supported, you can create thick-provisioned virtual disks on NFS 3 datastores.
- NFS 3 locking on ESXi does not use the Network Lock Manager (NLM) protocol. Instead, VMware provides its own locking protocol. NFS 3 locks are implemented by creating lock files on the NFS server. Lock files are named .lck-file_id.
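As a rough illustration of the multiple-connections point above, you can create two NFS 3 datastores that use two different IP addresses of the same NFS server, so that each datastore gets its own TCP connection. This is a sketch only: the addresses 192.168.10.10 and 192.168.20.10 and the export and datastore names are placeholders, and the server must actually expose the exports on both addresses, typically reached through separate VMkernel networks:

   esxcli storage nfs add --host=192.168.10.10 --share=/export/vmstore-a --volume-name=NFS-DS-A
   esxcli storage nfs add --host=192.168.20.10 --share=/export/vmstore-b --volume-name=NFS-DS-B

Each datastore then uses its own network connection between the host and the storage, but a single NFS 3 datastore still cannot use multiple paths.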
NFS Protocol Version 4.1
When you use NFS 4.1, the following considerations apply:
- NFS 4.1 provides multipathing for servers that support session trunking. When trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported. See the sketch below.
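A minimal sketch of mounting an NFS 4.1 datastore with two IP addresses for session trunking, from the ESXi Shell. The addresses 192.168.10.10 and 192.168.10.11, the export /export/vmstore41, and the datastore name NFS41-DS01 are placeholder values; the NFS server must support session trunking and expose the same export on both addresses:

   esxcli storage nfs41 add -H 192.168.10.10,192.168.10.11 -s /export/vmstore41 -v NFS41-DS01
   esxcli storage nfs41 list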