Table Of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
ESXi Hosts and Multiple Storage Arrays
An ESXi host can access storage devices presented from multiple storage arrays, including arrays from different vendors.
When you use multiple arrays from different vendors, the following considerations apply:
- If your host uses the same Storage Array Type Plugin (SATP) for multiple arrays, be careful when you need to change the default Path Selection Policy (PSP) for that SATP. The change applies to every array that the SATP claims; a per-device alternative is shown in the first sketch after this list. For information on SATPs and PSPs, see Chapter 17, “Understanding Multipathing and Failover,” on page 183.
- Some storage arrays make recommendations on queue depth and other settings. Typically, these settings are configured globally at the ESXi host level, so making a change for one array impacts the other arrays that present LUNs to the host. For information on changing queue depth, see the VMware knowledge base article at http://kb.vmware.com/kb/1267; the second sketch after this list shows an example.
- Use single-initiator-single-target zoning when zoning ESXi hosts to Fibre Channel arrays. With this type of configuration, fabric-related events that occur on one array do not impact other arrays. For more information about zoning, see “Using Zoning with Fibre Channel SANs,” on page 36.
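As a minimal sketch, the following esxcli commands show how to review and change path selection policies. The device identifier naa.xxx is a placeholder for a real NAA ID, and VMW_SATP_DEFAULT_AA and VMW_PSP_RR are shown only as examples; they might not match your arrays.

   # List the installed SATPs and the default PSP each one uses.
   esxcli storage nmp satp list

   # Show the SATP and PSP that currently claim a specific device.
   esxcli storage nmp device list --device naa.xxx

   # Changing the default PSP for an SATP affects every array that the SATP claims.
   esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

   # To limit the change to a single LUN, set the PSP per device instead.
   esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR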
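Queue depth is typically set through a driver module parameter, which is global to the host. The following is a hedged example for a QLogic Fibre Channel adapter; module and parameter names vary by vendor and driver version, so check the knowledge base article above for the values that apply to your adapter. The change takes effect after a host reboot.

   # List the current parameters of the QLogic FC driver module.
   esxcli system module parameters list --module qlnativefc

   # Set the maximum LUN queue depth. The value applies to every LUN
   # reached through this driver, on every attached array.
   esxcli system module parameters set --module qlnativefc --parameter-string "ql2xmaxqdepth=64"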
Making LUN Decisions
You must plan how to set up storage for your ESXi systems before you format LUNs with VMFS datastores.
When you make your LUN decision, keep in mind the following considerations:
- Each LUN should have the correct RAID level and storage characteristics for the applications running in the virtual machines that use the LUN.
- Each LUN must contain only one VMFS datastore.
- If multiple virtual machines access the same VMFS datastore, use disk shares to prioritize the virtual machines.
You might want fewer, larger LUNs for the following reasons:
- More flexibility to create virtual machines without asking the storage administrator for more space.
- More flexibility for resizing virtual disks, taking snapshots, and so on.
- Fewer VMFS datastores to manage.
You might want more, smaller LUNs for the following reasons:
- Less wasted storage space.
- Different applications might need different RAID characteristics.
- More flexibility, because the multipathing policy and disk shares are set per LUN.
- Use of Microsoft Cluster Service requires that each cluster disk resource be in its own LUN.
- Better performance because there is less contention for a single volume.
When the storage characterization for a virtual machine is not available, there is often no simple method to
determine the number and size of LUNs to provision. You can experiment using either a predictive or
adaptive scheme.
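Whichever scheme you use, start from an inventory of what the host already sees. As a minimal sketch, the following esxcli commands list the current LUN-to-datastore layout, for example to verify that each VMFS datastore maps to exactly one LUN:

   # List VMFS datastores and the device (LUN) backing each extent.
   esxcli storage vmfs extent list

   # List all storage devices visible to the host, with size and vendor
   # details, to review available capacity before provisioning.
   esxcli storage core device list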