vSphere Storage 6.7
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Recommendations and Restrictions
- Configuring iSCSI Parameters for Adapters
- Set Up Independent Hardware iSCSI Adapters
- Configure Dependent Hardware iSCSI Adapters
- Configure the Software iSCSI Adapter
- Configure iSER Adapters
- Modify General Properties for iSCSI or iSER Adapters
- Setting Up Network for iSCSI and iSER
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Upgrading VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Failovers with Fibre Channel
- Host-Based Failover with iSCSI
- Array-Based Failover with iSCSI
- Path Failover and Virtual Machines
- Pluggable Storage Architecture and Path Management
- Viewing and Managing Paths
- Using Claim Rules
- Scheduling Queues for Virtual Machine I/Os
- Raw Device Mapping
- Storage Policy Based Management
- Virtual Machine Storage Policies
- Workflow for Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- About Rules and Rule Sets
- Creating and Managing VM Storage Policies
- About Storage Policy Components
- Storage Policies and Virtual Machines
- Default Storage Policies
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Troubleshooting Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
When you use SAN storage with ESXi, the following considerations apply:
- You cannot use SAN administration tools to access operating systems of virtual machines that reside on the storage. With traditional tools, you can monitor only the VMware ESXi operating system. You use the vSphere Client to monitor virtual machines.
- The HBA visible to the SAN administration tools is part of the ESXi system, not part of the virtual machine.
- Typically, your ESXi system performs multipathing for you. You can review the paths and the policy the host applies, as in the sketch after this list.
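For example, you can inspect the multipathing configuration on a host with esxcli. Both commands below only read state; the device identifier shown is a placeholder, so substitute an identifier reported by your own host.

    # List all devices claimed by the Native Multipathing Plug-in (NMP),
    # including the SATP and PSP in use for each device.
    esxcli storage nmp device list

    # Show the individual paths for one device.
    # The naa identifier is a placeholder; use a device ID from your host.
    esxcli storage core path list -d naa.0123456789abcdef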
ESXi Hosts and Multiple Storage Arrays
An ESXi host can access storage devices presented from multiple storage arrays, including arrays from different vendors.
When you use multiple arrays from different vendors, the following considerations apply:
- If your host uses the same SATP for multiple arrays, be careful when you change the default PSP for that SATP. The change applies to all arrays. For information on SATPs and PSPs, see Chapter 18 Understanding Multipathing and Failover. A command sketch for reviewing and changing the default PSP follows this list.
- Some storage arrays make recommendations on queue depth and other settings. Typically, these settings are configured globally at the ESXi host level. Changing settings for one array impacts the other arrays that present LUNs to the host. For information on changing queue depth, see the VMware knowledge base article at http://kb.vmware.com/kb/1267. A second sketch after this list illustrates such a change.
- Use single-initiator-single-target zoning when zoning ESXi hosts to Fibre Channel arrays. With this type of configuration, fabric-related events that occur on one array do not impact other arrays. For more information about zoning, see Using Zoning with Fibre Channel SANs.
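The following esxcli sequence is one way to review the default PSP for each SATP and to change it. The SATP name VMW_SATP_SYMM and the PSP VMW_PSP_RR are examples only; because the new default applies to every array that the SATP claims, confirm that the policy is appropriate for all of them before you make the change.

    # Show all SATPs on the host and the default PSP assigned to each.
    esxcli storage nmp satp list

    # Example only: make Round Robin the default PSP for one SATP.
    # Every array claimed by this SATP inherits the change.
    esxcli storage nmp satp set --satp VMW_SATP_SYMM --default-psp VMW_PSP_RR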
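Queue depth is likewise set per driver module, not per array. The module name qlnativefc and the parameter ql2xmaxqdepth below apply to certain QLogic Fibre Channel adapters and are shown only as an illustration; use the module, parameter, and value that the knowledge base article and your array vendor recommend.

    # Check the current parameters set on the HBA driver module.
    esxcli system module parameters list -m qlnativefc

    # Example only: set the maximum queue depth for that module.
    # The change affects every array reached through this adapter
    # and takes effect after a host reboot.
    esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"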
Making LUN Decisions
You must plan how to set up storage for your ESXi systems before you format LUNs with VMFS datastores.
When you make your LUN decision, the following considerations apply:
- Each LUN must have the correct RAID level and storage characteristic for the applications running in virtual machines that use the LUN.
- Each LUN must contain only one VMFS datastore. The command sketch after this list shows one way to verify this mapping on a host.
- If multiple virtual machines access the same VMFS, use disk shares to prioritize virtual machines.
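As a minimal way to confirm the one-datastore-per-LUN layout on a host, you can map each VMFS datastore to the device that backs it. Both commands below only read the configuration.

    # Show every VMFS datastore and the device (LUN) behind each of its extents.
    esxcli storage vmfs extent list

    # Show mounted file systems, their UUIDs, and their mount points.
    esxcli storage filesystem list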
You might want fewer, larger LUNs for the following reasons:
- More flexibility to create virtual machines without asking the storage administrator for more space.
- More flexibility for resizing virtual disks, doing snapshots, and so on.
- Fewer VMFS datastores to manage.