6.0.1
Table Of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
Delete Hardware Acceleration Claim Rules
Use the esxcli command to delete existing hardware acceleration claim rules.
In the procedure, --server=server_name specifies the target server. The specified target server prompts you
for a user name and password. Other connection options, such as a configuration file or session file, are
supported. For a list of connection options, see Getting Started with vSphere Command-Line Interfaces.
Prerequisites
Install vCLI or deploy the vSphere Management Assistant (vMA) virtual machine. See Getting Started with
vSphere Command-Line Interfaces. For troubleshooting, run esxcli commands in the ESXi Shell.
Procedure
- Run the following commands:

  esxcli --server=server_name storage core claimrule remove -r claimrule_ID --claimrule-class=Filter

  esxcli --server=server_name storage core claimrule remove -r claimrule_ID --claimrule-class=VAAI
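For example, assuming the rules you want to delete were added with claim rule ID 5001 (a placeholder value) on a host reachable as esxi01.example.com, you could first list the existing VAAI claim rules to confirm the ID, then remove both rule classes:

  esxcli --server=esxi01.example.com storage core claimrule list --claimrule-class=VAAI

  esxcli --server=esxi01.example.com storage core claimrule remove -r 5001 --claimrule-class=Filter

  esxcli --server=esxi01.example.com storage core claimrule remove -r 5001 --claimrule-class=VAAI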
Hardware Acceleration on NAS Devices
Hardware acceleration allows ESXi hosts to integrate with NAS devices and use several hardware
operations that NAS storage provides. Hardware acceleration uses VMware vSphere Storage APIs for Array
Integration (VAAI) to enable communication between the hosts and storage devices.
The APIs define a set of storage primitives that enable the host to offload certain storage operations to the
array. The following list shows the supported NAS operations:
- Full File Clone. Enables the NAS device to clone virtual disk files. This operation is similar to VMFS block cloning, except that NAS devices clone entire files instead of file segments.
- Reserve Space. Enables storage arrays to allocate space for a virtual disk file in thick format. Typically, when you create a virtual disk on an NFS datastore, the NAS server determines the allocation policy. The default allocation policy on most NAS servers is thin and does not guarantee backing storage to the file. However, the reserve space operation can instruct the NAS device to use vendor-specific mechanisms to reserve space for a virtual disk. As a result, you can create thick virtual disks on the NFS datastore.
- Native Snapshot Support. Allows creation of virtual machine snapshots to be offloaded to the array.
- Extended Statistics. Enables visibility into space usage on NAS devices and is useful for Thin Provisioning.
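Whether a given NFS datastore can actually use these primitives depends on the array and on the installed NAS plug-in. As a quick check, you can list the mounted NFS volumes with esxcli; on recent ESXi releases the output includes a Hardware Acceleration column that reports Supported or Not Supported for each volume. This is only a sketch, and server_name stands for your target host:

  esxcli --server=server_name storage nfs list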
With NAS storage devices, the hardware acceleration integration is implemented through vendor-specific
NAS plug-ins. These plug-ins are typically created by vendors and are distributed as VIB packages through
a web page. No claim rules are required for the NAS plug-ins to function.
There are several tools available for installing and upgrading VIB packages. They include the esxcli
commands and vSphere Update Manager. For more information, see the vSphere Upgrade and Installing and
Administering VMware vSphere Update Manager documentation.
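As an illustration of the esxcli path, a plug-in delivered as an offline bundle (ZIP depot) might be installed and then verified as follows. The depot path /vmfs/volumes/datastore1/VendorNASPlugin.zip is a hypothetical example; use the package name and location your vendor provides. Note that installing a NAS plug-in typically requires placing the host in maintenance mode and rebooting it for the plug-in to take effect:

  esxcli --server=server_name software vib install -d /vmfs/volumes/datastore1/VendorNASPlugin.zip

  esxcli --server=server_name software vib list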
Install NAS Plug-In
Install vendor-distributed hardware acceleration NAS plug-ins on your host.
This topic provides an example of a VIB package installation that uses the esxcli command. For more details,
see the vSphere Upgrade documentation.