Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Software-Defined Storage and Storage Policy Based Management
- About Storage Policy Based Management
- Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- Default Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and
the storage controller port. If any component of the path fails, the host selects another available path for
I/O. The process of detecting a failed path and switching to another is called path failover.
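You can view the paths that are currently available to a storage device from the ESXi Shell. The commands below are a minimal sketch; the naa identifier is a placeholder for one of your own device identifiers, and the fields shown in the output vary by adapter and array.

# List all storage paths known to this host, with their state (active, standby, dead)
esxcli storage core path list

# List only the paths to one device; replace the naa identifier with your own
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx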
Ports in Fibre Channel SAN
In the context of this document, a port is the connection from a device into the SAN. Each node in the SAN, such as a host, a storage device, or a fabric component, has one or more ports that connect it to the SAN. Ports are identified in a number of ways.
WWPN (World Wide Port Name)
A globally unique identifier for a port that allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.
Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for the port. This unique ID enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs in to the fabric. The port ID is valid only while the device is logged on.
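To see the WWNN and WWPN values that an ESXi host presents to the fabric, you can list its storage adapters from the ESXi Shell. This is a minimal sketch; the output depends on the HBAs installed in the host.

# List all storage adapters and their identifiers
esxcli storage core adapter list

# List only the Fibre Channel adapters, including their World Wide Node Names
# and World Wide Port Names
esxcli storage san fc list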
When N-Port ID Virtualization (NPIV) is used, a single FC HBA port (N-port) can register with the fabric by
using several WWPNs. This method allows an N-port to claim multiple fabric addresses, each of which
appears as a unique entity. When ESXi hosts use a SAN, these multiple, unique identifiers allow the
assignment of WWNs to individual virtual machines as part of their configuration.
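When WWNs are assigned to a virtual machine, they are recorded in its configuration (.vmx) file. The entries below are only an illustration, under the assumption that the keys are named wwn.node and wwn.port; the value format can differ between releases, and the WWNs shown are placeholders. Assign or change NPIV WWNs through the virtual machine settings in the vSphere Client rather than by editing the file directly.

wwn.node = "20:00:00:0c:29:aa:bb:01"
wwn.port = "20:00:00:0c:29:aa:bb:02"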
Fibre Channel Storage Array Types
ESXi supports different storage systems and arrays.
The types of storage that your host supports include active-active, active-passive, and ALUA-compliant.
Active-active storage system
Supports access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are active, unless a path fails.
Active-passive storage system
A system in which one storage processor actively provides access to a given LUN. The other processors act as a backup for that LUN and can actively provide access to other LUN I/O. I/O can be sent successfully only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.
Asymmetrical storage system
Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. With ALUA, the host can determine the states of target ports and prioritize paths. The host uses some of the active paths as primary and others as secondary.
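To check how a host has classified its arrays, you can list the devices claimed by the Native Multipathing Plug-in. The Storage Array Type value in the output (for example, VMW_SATP_ALUA on an ALUA-compliant array), together with the Path Selection Policy, shows how paths to each LUN are used. This is a minimal sketch; the naa identifier is a placeholder.

# Show each device with its Storage Array Type (SATP) and Path Selection Policy
esxcli storage nmp device list

# Show the individual paths for one device, including which are active and which are standby
esxcli storage nmp path list -d naa.xxxxxxxxxxxxxxxx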