vSphere Storage 6.7

Table of Contents
- About vSphere Storage
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Recommendations and Restrictions
- Configuring iSCSI Parameters for Adapters
- Set Up Independent Hardware iSCSI Adapters
- Configure Dependent Hardware iSCSI Adapters
- Configure the Software iSCSI Adapter
- Configure iSER Adapters
- Modify General Properties for iSCSI or iSER Adapters
- Setting Up Network for iSCSI and iSER
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Upgrading VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Failovers with Fibre Channel
- Host-Based Failover with iSCSI
- Array-Based Failover with iSCSI
- Path Failover and Virtual Machines
- Pluggable Storage Architecture and Path Management
- Viewing and Managing Paths
- Using Claim Rules
- Scheduling Queues for Virtual Machine I/Os
- Raw Device Mapping
- Storage Policy Based Management
- Virtual Machine Storage Policies
- Workflow for Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- About Rules and Rule Sets
- Creating and Managing VM Storage Policies
- About Storage Policy Components
- Storage Policies and Virtual Machines
- Default Storage Policies
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Troubleshooting Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
iSER differs from traditional iSCSI in that it replaces the TCP/IP data transfer model with the Remote Direct
Memory Access (RDMA) transport. Using the direct data placement technology of RDMA, the iSER
protocol can transfer data directly between the memory buffers of the ESXi host and storage devices.
This method eliminates unnecessary TCP/IP processing and data copying, and can also reduce latency
and the CPU load on the storage device.
In the iSER environment, iSCSI works exactly as before, but uses an underlying RDMA fabric interface
instead of the TCP/IP-based interface.
Because the iSER protocol preserves compatibility with the iSCSI infrastructure, the process of enabling
iSER on the ESXi host is similar to the iSCSI process. See Configure iSER Adapters.
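As a minimal sketch of that process on a host with an RDMA-capable network adapter, enabling the VMware iSER initiator from the ESXi shell might look like the following. The exact adapter names reported on your host will differ; this only illustrates the general flow.

```shell
# List RDMA-capable devices to confirm the host has a usable RDMA adapter.
esxcli rdma device list

# Enable the VMware iSER initiator, which adds an iSER vmhba to the host.
esxcli rdma iser add

# Verify that the new iSER adapter appears among the host's iSCSI adapters.
esxcli iscsi adapter list
```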
Establishing iSCSI Connections
In the ESXi context, the term target identifies a single storage unit that your host can access. The terms
storage device and LUN describe a logical volume that represents storage space on a target. Typically,
the terms device and LUN, in the ESXi context, mean a SCSI volume presented to your host from a
storage target and available for formatting.
Different iSCSI storage vendors present storage to hosts in different ways. Some vendors present
multiple LUNs on a single target. Others present multiple targets with one LUN each.
Figure 10‑1. Target Compared to LUN Representations
[Figure shows two storage array configurations: one target containing three LUNs, and three targets containing one LUN each.]
In these examples, three LUNs are available in each configuration. In the first case, the host
detects one target, but that target has three LUNs that can be used. Each LUN represents an
individual storage volume. In the second case, the host detects three different targets, each having one
LUN.
Host-based iSCSI initiators establish connections to each target. Storage systems with a single target
containing multiple LUNs carry traffic to all the LUNs over a single connection. With a system that has three
targets with one LUN each, the host uses separate connections to the three LUNs.
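To see how your array presents its targets and LUNs, you can inspect the discovered targets and the active sessions from the ESXi shell. The sketch below assumes a software iSCSI adapter named vmhba65; that name is a hypothetical placeholder.

```shell
# List the target portals discovered on the iSCSI adapter.
esxcli iscsi adapter target portal list --adapter=vmhba65

# Show the active iSCSI sessions; the host establishes one session per target.
esxcli iscsi session list --adapter=vmhba65

# List the SCSI devices (LUNs) that those targets present to the host.
esxcli storage core device list
```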
This information is useful when you are trying to aggregate storage traffic over multiple connections from
a host with multiple iSCSI adapters. You can direct the traffic for one target to a particular adapter, and use
a different adapter for the traffic to another target.
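As a rough sketch of that approach, you might statically assign each target to a different adapter. The example assumes two dependent hardware iSCSI adapters named vmhba65 and vmhba66; the adapter names, target addresses, and IQNs are hypothetical placeholders for your environment.

```shell
# Send traffic for the first target through adapter vmhba65.
esxcli iscsi adapter target static add --adapter=vmhba65 \
    --address=192.168.1.10:3260 --name=iqn.1998-01.com.example:target1

# Send traffic for the second target through adapter vmhba66.
esxcli iscsi adapter target static add --adapter=vmhba66 \
    --address=192.168.1.11:3260 --name=iqn.1998-01.com.example:target2
```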
iSCSI Storage System Types
ESXi supports different storage systems and arrays.