6.0.1
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
- Fault Tolerance (FT) and Host Profiles

  NOTE NFS 4.1 does not support legacy Fault Tolerance.
- ISO images, which are presented as CD-ROMs to virtual machines
- Virtual machine snapshots
- Virtual machines with large capacity virtual disks, or disks greater than 2 TB. Virtual disks created on NFS datastores are thin-provisioned by default, unless you use hardware acceleration that supports the Reserve Space operation. NFS 4.1 does not support hardware acceleration. For information, see “Hardware Acceleration on NAS Devices,” on page 265.
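As a concrete illustration of the default behavior, you can create a disk with vmkfstools from the ESXi Shell. This is a minimal sketch; the datastore and folder names are placeholders.

    # Create a 20-GB virtual disk on an NFS datastore. Without hardware
    # acceleration that supports Reserve Space, the disk is thin-provisioned;
    # -d thin makes that default explicit. Names are placeholders.
    vmkfstools -c 20G -d thin /vmfs/volumes/nfs_datastore/vm1/vm1.vmdk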
NFS Storage Guidelines and Requirements
When using NFS storage, you must follow specific configuration, networking, and NFS datastore guidelines.
NFS Server Configuration Guidelines
- Make sure that the NFS servers you use are listed in the VMware HCL. Use the correct version for the server firmware.
- When configuring NFS storage, follow the recommendations of your storage vendor.
- Ensure that the NFS volume is exported using NFS over TCP.
- Make sure that the NFS server exports a particular share as either NFS 3 or NFS 4.1, but does not provide both protocol versions for the same share. The server must enforce this policy because ESXi does not prevent mounting the same share through different NFS versions (see the mount example after this list).
- NFS 3 and non-Kerberos NFS 4.1 do not support the delegate user functionality that enables access to NFS volumes using nonroot credentials. If you use NFS 3 or non-Kerberos NFS 4.1, ensure that each host has root access to the volume. Different storage vendors have different methods of enabling this functionality, but typically this is done on the NAS servers by using the no_root_squash option (see the export file example after this list). If the NAS server does not grant root access, you might still be able to mount the NFS datastore on the host. However, you will not be able to create any virtual machines on the datastore.
- If the underlying NFS volume, on which files are stored, is read-only, make sure that the volume is exported as a read-only share by the NFS server, or configure it as a read-only datastore on the ESXi host. Otherwise, the host considers the datastore to be read-write and might not be able to open the files.
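To illustrate the root access and read-only guidelines, a Linux-based NFS server controls both in /etc/exports. This is a sketch under the assumption of a Linux NFS server; the paths and subnet are placeholders, and NAS vendors expose equivalent settings in their own management interfaces.

    # /etc/exports on a Linux NFS server (placeholder paths and subnet).
    # no_root_squash grants ESXi hosts root access to the read-write volume.
    /exports/vmstore  192.168.10.0/24(rw,sync,no_root_squash)
    # A volume that holds read-only files is exported as a read-only share.
    /exports/isos     192.168.10.0/24(ro,sync)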
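On the ESXi side, the protocol version is fixed at mount time: the esxcli storage nfs namespace mounts a share as NFS 3, and esxcli storage nfs41 mounts it as NFS 4.1. The server, share, and datastore names below are placeholders.

    # Mount a share as NFS 3; --readonly creates a read-only datastore.
    esxcli storage nfs add --host=nas01 --share=/exports/vmstore --volume-name=nfs3_ds
    esxcli storage nfs add --host=nas01 --share=/exports/isos --volume-name=iso_ds --readonly
    # Mount a different share as NFS 4.1. Never mount the same share with both versions.
    esxcli storage nfs41 add --hosts=nas01 --share=/exports/vmstore41 --volume-name=nfs41_ds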
NFS Networking Guidelines
- For network connectivity, the host requires a standard network adapter.
- ESXi supports Layer 2 and Layer 3 network switches. If you use Layer 3 switches, ESXi hosts and NFS storage arrays must be on different subnets and the network switch must handle the routing information.
- A VMkernel port group is required for NFS storage. You can create a VMkernel port group for IP storage on an existing virtual switch (vSwitch) or on a new vSwitch when it is configured. The vSwitch can be a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS). See the example after this list.
- If you use multiple ports for NFS traffic, make sure that you correctly configure your virtual switches and physical switches. For information, see the vSphere Networking documentation.
- NFS 3 and non-Kerberos NFS 4.1 support IPv4 and IPv6.
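The VMkernel port group guideline translates into steps like the following on a vSphere Standard Switch from the ESXi Shell. This is a sketch; the port group name, interface name, and addresses are placeholders.

    # Create a port group for NFS traffic on an existing standard vSwitch.
    esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch0
    # Add a VMkernel interface to the port group with a static IPv4 address.
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static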