6.5.1
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Software-Defined Storage and Storage Policy Based Management
- About Storage Policy Based Management
- Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- Default Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
You cannot use multipathing software inside a virtual machine to perform I/O load balancing to a
single physical LUN. However, when your Microsoft Windows virtual machine uses dynamic disks,
this restriction does not apply. For information about configuring dynamic disks, see Set Up Dynamic
Disk Mirroring.
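Path management for the physical LUN is instead handled by the multipathing layer on the ESXi host. For example, to see which path selection policy the Native Multipathing Plug-In (NMP) applies to a device, you can run the following commands from the ESXi Shell. The naa identifier shown here is a placeholder for one of your own device identifiers.

   # Show the NMP configuration, including the path selection policy, for all claimed devices.
   esxcli storage nmp device list

   # Limit the output to a single device (replace the placeholder naa identifier with your own).
   esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx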
Setting LUN Allocations
This topic provides general information about how to allocate LUNs when your ESXi host works with a SAN.
When you set LUN allocations, be aware of the following points:

Storage provisioning
To ensure that the ESXi system recognizes the LUNs at startup time, provision all LUNs to the appropriate HBAs before you connect the SAN to the ESXi system. Provision all LUNs to all ESXi HBAs at the same time. HBA failover works only if all HBAs see the same LUNs.
For LUNs that are shared among multiple hosts, make sure that LUN IDs are consistent across all hosts. One way to verify this from each host is shown in the example after this list.

vMotion and VMware DRS
When you use vCenter Server and vMotion or DRS, make sure that the LUNs for the virtual machines are provisioned to all ESXi hosts. This configuration gives you the greatest flexibility to move virtual machines.

Active-active compared to active-passive arrays
When you use vMotion or DRS with an active-passive SAN storage device, make sure that all ESXi systems have consistent paths to all storage processors. Not doing so can cause path thrashing when a vMotion migration occurs.
For active-passive storage arrays not listed in Storage/SAN Compatibility, VMware does not support storage port failover. In those cases, you must connect the server to the active port on the storage array. This configuration ensures that the LUNs are presented to the ESXi host.
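For example, you can compare what each host sees by running the following commands from the ESXi Shell. The device identifiers and LUN numbers reported for shared LUNs should match across hosts; the exact output fields depend on your storage array and drivers.

   # List all storage devices with their identifiers (for example, naa IDs);
   # compare this output across hosts for shared LUNs.
   esxcli storage core device list

   # List all paths, including the runtime name (vmhbaX:C0:T0:L<LUN>) and target,
   # to confirm that each host reaches the same LUNs through the expected storage processors.
   esxcli storage core path list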
Setting Fibre Channel HBAs
Typically, FC HBAs that you use on your ESXi host work correctly with the default configuration settings.
You should follow the configuration guidelines provided by your storage array vendor. During FC HBA
setup, consider the following issues.
- Do not mix FC HBAs from different vendors in a single host. Having different models of the same HBA is supported, but a single LUN cannot be accessed through two different HBA types, only through the same type.
- Ensure that the firmware level on each HBA is the same. You can review the installed HBAs and their reported firmware versions with the commands shown after this list.
- Set the timeout value for detecting a failover. To ensure optimal performance, do not change the default value.
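For example, to review the HBAs installed in a host and to compare their models, drivers, and firmware levels, you can run the following commands from the ESXi Shell. The exact attributes reported depend on the adapter driver.

   # List all storage adapters on the host, with the vmhba name, driver, and description.
   esxcli storage core adapter list

   # Show detailed attributes for Fibre Channel adapters, including model, driver,
   # and firmware information where the driver reports it.
   esxcli storage san fc list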