To improve the array performance in the vSphere environment, follow these general guidelines:
- When assigning LUNs, remember that several hosts might access the LUN, and that several virtual machines can run on each host. One LUN used by a host can service I/O from many different applications running on different operating systems. Because of this diverse workload, the RAID group containing the ESXi LUNs typically does not include LUNs used by other servers that are not running ESXi.
- Make sure that read/write caching is available.
- SAN storage arrays require continual redesign and tuning to ensure that I/O is load-balanced across all storage array paths. To meet this requirement, distribute the paths to the LUNs among all the SPs to provide optimal load-balancing. Close monitoring indicates when it is necessary to rebalance the LUN distribution (see the command sketch after this list).
  Tuning statically balanced storage arrays is a matter of monitoring specific performance statistics, such as I/O operations per second, blocks per second, and response time. Distributing the LUN workload to spread the workload across all the SPs is also important.
Note Dynamic load-balancing is not currently supported with ESXi.
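As a quick sketch of how you might inspect the current path distribution from the ESXi Shell, the following esxcli commands list the paths and the path selection policy for one device. The identifier naa.xxxxxxxxxxxxxxxx is a placeholder for one of your LUNs. Switching a device to the round robin policy only rotates I/O across its existing active paths; it is not the array-side dynamic load-balancing referred to in the note above.

# List every path to the device, including the target (SP) each path uses
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx

# Show the path selection policy and the working paths for the device
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# Optionally set the device to round robin so that I/O rotates across active paths
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR

Check your array vendor's documentation before changing the path selection policy, because not all arrays support round robin for all configurations.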
Server Performance with Fibre Channel
You must consider several factors to ensure optimal server performance.
Each server application must have access to its designated storage with the following conditions:
- High I/O rate (number of I/O operations per second)
- High throughput (megabytes per second)
- Minimal latency (response times)
Because each application has different requirements, you can meet these goals by selecting an
appropriate RAID group on the storage array.
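One way to check these metrics on a running host is the esxtop utility in the ESXi Shell; the following is only a sketch of one monitoring approach.

# Start the interactive performance monitor, then press u for the disk device view
esxtop

In the disk device view, the CMDS/s column approximates the I/O rate, MBREAD/s and MBWRTN/s show throughput, and DAVG/cmd and GAVG/cmd report device-side and guest-observed latency in milliseconds.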
To achieve performance goals, follow these guidelines:
- Place each LUN on a RAID group that provides the necessary performance levels. Monitor the activities and resource use of other LUNs in the assigned RAID group. A high-performance RAID group that has too many applications doing I/O to it might not meet the performance goals required by an application running on the ESXi host.
- Ensure that each host has enough HBAs to increase throughput for the applications on the host for the peak period. I/O spread across multiple HBAs provides faster throughput and less latency for each application.
- To provide redundancy for a potential HBA failure, make sure that the host is connected to a dual redundant fabric. You can verify the adapter and path layout with the commands sketched after this list.
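The following commands are a sketch of how you might confirm from the ESXi Shell that a host has multiple HBAs and that a device is reachable through more than one of them; naa.xxxxxxxxxxxxxxxx is again a placeholder device identifier.

# List the storage adapters (HBAs) present on the host
esxcli storage core adapter list

# List the paths to one device; with a dual redundant fabric you expect
# paths through at least two different HBAs (vmhba numbers)
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx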