vSphere 6.7
Table of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Introduction to Storage
- Getting Started with a Traditional Storage Model
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Recommendations and Restrictions
- Configuring iSCSI Parameters for Adapters
- Set Up Independent Hardware iSCSI Adapters
- Configure Dependent Hardware iSCSI Adapters
- Configure the Software iSCSI Adapter
- Configure iSER Adapters
- Modify General Properties for iSCSI or iSER Adapters
- Setting Up Network for iSCSI and iSER
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Rescan Operations
- Identifying Device Connectivity Problems
- Enable or Disable the Locator LED on Storage Devices
- Erase Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Types of Datastores
- Understanding VMFS Datastores
- Upgrading VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Failovers with Fibre Channel
- Host-Based Failover with iSCSI
- Array-Based Failover with iSCSI
- Path Failover and Virtual Machines
- Pluggable Storage Architecture and Path Management
- Viewing and Managing Paths
- Using Claim Rules
- Scheduling Queues for Virtual Machine I/Os
- Raw Device Mapping
- Storage Policy Based Management
- Virtual Machine Storage Policies
- Workflow for Virtual Machine Storage Policies
- Populating the VM Storage Policies Interface
- About Rules and Rule Sets
- Creating and Managing VM Storage Policies
- About Storage Policy Components
- Storage Policies and Virtual Machines
- Default Storage Policies
- Using Storage Providers
- Working with Virtual Volumes
- About Virtual Volumes
- Virtual Volumes Concepts
- Virtual Volumes and Storage Protocols
- Virtual Volumes Architecture
- Virtual Volumes and VMware Certificate Authority
- Snapshots and Virtual Volumes
- Before You Enable Virtual Volumes
- Configure Virtual Volumes
- Provision Virtual Machines on Virtual Volumes Datastores
- Virtual Volumes and Replication
- Best Practices for Working with vSphere Virtual Volumes
- Troubleshooting Virtual Volumes
- Filtering Virtual Machine I/O
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Thin Provisioning and Space Reclamation
- Using vmkfstools
- vmkfstools Command Syntax
- The vmkfstools Command Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Removing Zeroed Blocks
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
Option Description
-f|--force
Force claim rules to ignore validity checks and install the rule anyway.
-h|--help
Show the help message.
-M|--model=string
Set the model string when adding a SATP claim rule. Vendor/Model rules are
mutually exclusive with driver rules.
-o|--option=string
Set the option string when adding a SATP claim rule.
-P|--psp=string
Set the default PSP for the SATP claim rule.
-O|--psp-option=string
Set the PSP options for the SATP claim rule.
-s|--satp=string
Set the SATP for which a new rule is added.
-R|--transport=string
Set the claim transport type string when adding a SATP claim rule.
-t|--type=string
Set the claim type when adding a SATP claim rule.
-V|--vendor=string
Set the vendor string when adding SATP claim rules. Vendor/Model rules are
mutually exclusive with driver rules.
Note When searching the SATP rules to locate a SATP for a given device, the NMP searches the
driver rules first. If there is no match, the vendor/model rules are searched, and finally the transport
rules. If there is still no match, NMP selects a default SATP for the device.
2 Reboot your host.
Example: Defining an NMP SATP Rule
The following sample command assigns the VMW_SATP_INV plug-in to manage storage arrays with
vendor string NewVend and model string NewMod.
# esxcli storage nmp satp rule add -V NewVend -M NewMod -s VMW_SATP_INV
When you run the esxcli storage nmp satp rule list -s VMW_SATP_INV command, you can
see the new rule on the list of VMW_SATP_INV rules.
Scheduling Queues for Virtual Machine I/Os
By default, vSphere provides a mechanism that creates scheduling queues for every virtual machine file.
Each file, for example .vmdk, gets its own bandwidth controls.
This mechanism ensures that I/O for a particular virtual machine file goes into its own separate queue
and avoids interfering with I/Os from other files.
This capability is enabled by default. To turn it off, adjust the
VMkernel.Boot.isPerFileSchedModelActive parameter in the advanced system settings.
Edit Per File I/O Scheduling
The advanced VMkernel.Boot.isPerFileSchedModelActive parameter controls the per file I/O
scheduling mechanism. The mechanism is enabled by default.
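Besides editing VMkernel.Boot.isPerFileSchedModelActive in the Advanced System Settings of the vSphere Client, the parameter can also be inspected and changed from the ESXi shell. The following is a sketch that assumes ESXi shell or SSH access to the host and that the `esxcli system settings kernel` namespace exposes the boot option under the name isPerFileSchedModelActive; a reboot is required for the change to take effect.

```shell
# Show the current value of the per-file I/O scheduling boot option
esxcli system settings kernel list -o isPerFileSchedModelActive

# Disable per-file I/O scheduling (revert to per-VM queues);
# the new value is applied at the next host reboot
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE
```

Setting the value back to TRUE restores the default behavior, in which each virtual machine file, such as a .vmdk, receives its own scheduling queue and bandwidth controls.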
vSphere Storage
VMware, Inc. 230