
This sample command instructs the HPP to claim all devices with the vendor NVMe. Modify this rule
to claim the devices you specify. Make sure to follow these recommendations:
■ For the rule ID parameter, use a number in the 1–49 range to make sure that the HPP claim rule precedes the built-in NMP rules. The default NMP rules 50–54 are reserved for locally attached storage devices.
■ Use the --force-reserved option. With this option, you can add a rule into the range 0–100 that is reserved for internal VMware use.
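As an illustration, a claim rule that follows these recommendations might look like the following sketch. The rule ID 10 is a placeholder chosen from the 1–49 range; substitute your own rule ID and vendor string.

```shell
# Hypothetical example: claim all devices whose vendor string is NVMe with the HPP.
# Rule ID 10 falls in the 1-49 range, so the rule precedes the built-in NMP rules.
# --force-reserved permits adding the rule in the reserved 0-100 range.
esxcli storage core claimrule add --rule 10 --type vendor --vendor NVMe --plugin HPP --force-reserved

# Optionally confirm that the rule was added to the configuration.
esxcli storage core claimrule list
```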
2 Reboot your host for your changes to take effect.
3 Verify that the HPP claimed the appropriate device.
esxcli storage core device list
mpx.vmhba2:C0:T0:L0
Display Name: Local NVMe Disk (mpx.vmhba2:C0:T0:L0)
...
Multipath Plugin: HPP
...
Set Latency Sensitive Threshold
When you use the HPP for your storage devices, set the latency sensitive threshold for the device, so that I/O can bypass the I/O scheduler.
By default, ESXi passes every I/O through the I/O scheduler. However, using the scheduler might create internal queuing, which is not efficient with high-speed storage devices.
You can configure the latency sensitive threshold and enable the direct submission mechanism that helps
I/O to bypass the scheduler. With this mechanism enabled, the I/O passes directly from PSA through the
HPP to the device driver.
For direct submission to work properly, the observed average I/O latency must be lower than the latency threshold that you specify. If the I/O latency exceeds the threshold, the system stops direct submission and temporarily reverts to using the I/O scheduler. Direct submission resumes when the average I/O latency drops below the threshold again.
Procedure
1 Set the latency sensitive threshold for the device by running the following command:
esxcli storage core device latencythreshold set --device=device_name --latency-sensitive-threshold=value_in_milliseconds
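For instance, using the device identifier shown in the earlier listing and an example threshold of 1 millisecond (both placeholder values, not recommendations), the step might look like this:

```shell
# Hypothetical values: the device ID comes from the earlier listing, and the
# 1-millisecond threshold is only an example. Choose a value appropriate for
# the typical latency of your device.
esxcli storage core device latencythreshold set --device=mpx.vmhba2:C0:T0:L0 --latency-sensitive-threshold=1

# Review the configured latency sensitive thresholds for your devices.
esxcli storage core device latencythreshold list
```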
vSphere Storage
VMware, Inc. 216