- NFS and Hardware Acceleration
  Virtual disks created on NFS datastores are thin-provisioned by default. To create thick-provisioned virtual disks, you must use hardware acceleration that supports the Reserve Space operation, as shown in the sketch after this list.
- NFS Datastores
  When you create an NFS datastore, make sure to follow specific guidelines.
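To make the Reserve Space dependency concrete, the following is a minimal sketch that uses the vmkfstools command, described later in this document, to create a thick-provisioned disk on an NFS datastore. The datastore and directory names are hypothetical, and the command fails if the NAS device does not support the Reserve Space operation.

   # Create a 10 GB eagerzeroedthick virtual disk in an existing directory on an NFS datastore.
   # This succeeds only when the backing NAS device supports the Reserve Space operation.
   vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/nfs-ds1/example-vm/example-vm.vmdk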
NFS Server Configuration
When you configure NFS servers to work with ESXi, follow the recommendations of your storage vendor. In addition to those general recommendations, use the specific guidelines that apply to NFS in a vSphere environment.
The guidelines include the following items.
- Make sure that the NAS servers you use are listed in the VMware HCL. Use the correct version for the server firmware.
- Ensure that the NFS volume is exported using NFS over TCP.
- Make sure that the NAS server exports a particular share as either NFS 3 or NFS 4.1. The NAS server must not provide both protocol versions for the same share. The NAS server must enforce this policy because ESXi does not prevent mounting the same share through different NFS versions.
- NFS 3 and non-Kerberos (AUTH_SYS) NFS 4.1 do not support the delegate user functionality that enables access to NFS volumes using nonroot credentials. If you use NFS 3 or non-Kerberos NFS 4.1, ensure that each host has root access to the volume. Different storage vendors have different methods of enabling this functionality, but NAS servers typically use the no_root_squash option; see the export sketch after this list. If the NAS server does not grant root access, you can still mount the NFS datastore on the host. However, you cannot create any virtual machines on the datastore.
- If the underlying NFS volume is read-only, make sure that the volume is exported as a read-only share by the NFS server, or mount the volume as a read-only datastore on the ESXi host. Otherwise, the host considers the datastore to be read-write and might not be able to open the files.
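The following is a minimal sketch of how the root-access and read-only guidelines might be expressed on a Linux-based NAS that uses /etc/exports. The export paths and the client network are hypothetical, and your storage vendor's management interface may differ.

   # /etc/exports on a hypothetical Linux-based NAS
   # Read-write share for virtual machines; no_root_squash grants the ESXi hosts root access.
   /export/vm-datastore   192.168.50.0/24(rw,no_root_squash,sync)
   # A read-only volume is exported read-only so that ESXi does not treat it as read-write.
   /export/iso-library    192.168.50.0/24(ro,no_root_squash,sync)

On the ESXi side, each share is then mounted through a single protocol version, for example with esxcli. The server name and datastore labels below are placeholders; use one command or the other for a given share, never both.

   esxcli storage nfs add --host=nas01.example.com --share=/export/vm-datastore --volume-name=nfs-ds1
   esxcli storage nfs41 add --hosts=nas01.example.com --share=/export/vm-datastore --volume-name=nfs41-ds1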
NFS Networking
An ESXi host uses a TCP/IP network connection to access a remote NAS server. Certain guidelines and best practices exist for configuring the networking when you use NFS storage.
For more information, see the vSphere Networking documentation.
- For network connectivity, use a standard network adapter in your ESXi host.
- ESXi supports Layer 2 and Layer 3 network switches. If you use Layer 3 switches, ESXi hosts and NFS storage arrays must be on different subnets, and the network switch must handle the routing information.
- Configure a VMkernel port group for NFS storage. You can create the VMkernel port group for IP storage on an existing virtual switch (vSwitch) or on a new vSwitch. The vSwitch can be a vSphere Standard Switch (VSS) or a vSphere Distributed Switch (VDS). A command-line sketch follows this list.
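As a rough illustration of the last item, the following esxcli sketch creates a standard vSwitch, a port group, and a VMkernel interface for NFS traffic. The switch name, uplink, port group, and IP addressing are placeholder values, and the sketch assumes a vSphere Standard Switch rather than a Distributed Switch.

   # Create a standard vSwitch for NFS traffic and attach a physical uplink.
   esxcli network vswitch standard add --vswitch-name=vSwitch1
   esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
   # Create a port group and a VMkernel interface for IP storage on that vSwitch.
   esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS-Storage
   esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS-Storage
   esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=192.168.50.10 --netmask=255.255.255.0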