vSphere Storage 6.7
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Recommendations and Restrictions
- Configuring iSCSI Parameters for Adapters
- Set Up Independent Hardware iSCSI Adapters
- Configure Dependent Hardware iSCSI Adapters
- Configure the Software iSCSI Adapter
- Configure iSER Adapters
- Modify General Properties for iSCSI or iSER Adapters
- Setting Up Network for iSCSI and iSER
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Upgrading VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Failovers with Fibre Channel
- Host-Based Failover with iSCSI
- Array-Based Failover with iSCSI
- Path Failover and Virtual Machines
- Pluggable Storage Architecture and Path Management
- Viewing and Managing Paths
- Using Claim Rules
- Scheduling Queues for Virtual Machine I/Os
- Raw Device Mapping
- Storage Policy Based Management
- Virtual Machine Storage Policies
- Workflow for Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- About Rules and Rule Sets
- Creating and Managing VM Storage Policies
- About Storage Policy Components
- Storage Policies and Virtual Machines
- Default Storage Policies
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Troubleshooting Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
Host profiles that contain Virtual Volumes datastores are vCenter Server specific. After you extract this type of host profile, you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host.
Best Practices for Storage Container Provisioning
Follow these best practices when provisioning storage containers on the vSphere Virtual Volumes array
side.
Creating Containers Based on Your Limits
Because storage containers apply logical limits when grouping virtual volumes, the container must match
the boundaries that you want to apply.
Examples might include a container created for a tenant in a multitenant deployment, or a container for a department in an enterprise deployment. Typical boundaries include the following:
- Organizations or departments, for example, Human Resources and Finance
- Groups or projects, for example, Team A and Red Team
- Customers
Putting All Storage Capabilities in a Single Container
Storage containers are individual datastores. A single storage container can export multiple storage
capability profiles. As a result, virtual machines with diverse needs and different storage policy settings
can be a part of the same storage container.
Changing storage profiles must be an array-side operation, not a storage migration to another container.
Avoiding Over-Provisioning Your Storage Containers
When you provision a storage container, the space limits that you apply as part of the container
configuration are only logical limits. Do not provision the container larger than necessary for the
anticipated use. If you later increase the size of the container, you do not need to reformat or repartition it.
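If the array administrator later resizes a container, the new capacity is reported through the VASA provider rather than through any reformatting on the host. As a minimal check from the ESXi Shell, assuming the host has working VASA provider connectivity, you can list the storage containers the host currently sees (the exact output columns vary by ESXi build and array):

    # List the Virtual Volumes storage containers visible to this host
    esxcli storage vvol storagecontainer list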
Using Storage-Specific Management UI to Provision Protocol Endpoints
Every storage container needs protocol endpoints (PEs) that are accessible to ESXi hosts.
When you use block storage, the PE represents a proxy LUN defined by a T10-based LUN WWN. For
NFS storage, the PE is a mount point, such as an IP address or DNS name, and a share name.
Typically, configuration of PEs is array-specific. When you configure PEs, you might need to associate
them with specific storage processors, or with certain hosts. To avoid errors when creating PEs, do not
configure them manually. Instead, when possible, use storage-specific management tools.
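After the PEs are configured with the array tools, you can verify from each ESXi host that they were discovered. A brief sketch, assuming ESXi Shell access (the listing differs for SCSI and NFS protocol endpoints):

    # List the protocol endpoints that this host has discovered
    esxcli storage vvol protocolendpoint list

    # For block storage, a SCSI PE also appears in the device list,
    # where its attributes flag it as a VVol protocol endpoint
    esxcli storage core device list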
No Assignment of IDs Above Disk.MaxLUN to Protocol Endpoint LUNs
By default, an ESXi host can access LUN IDs that are within the range of 0 to 1023. If the ID of the
protocol endpoint LUN that you configure is 1024 or greater, the host might ignore the PE.
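You can check and, if necessary, raise the limit from the ESXi Shell. The value in the example below is only illustrative; the maximum that Disk.MaxLUN accepts depends on the ESXi version:

    # Show the current Disk.MaxLUN value; the host scans LUN IDs 0 through Disk.MaxLUN - 1
    esxcli system settings advanced list -o /Disk/MaxLUN

    # If a protocol endpoint must use a higher LUN ID, raise the limit and rescan
    esxcli system settings advanced set -o /Disk/MaxLUN -i 2048
    esxcli storage core adapter rescan --all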