Table Of Contents
- vSphere Command-Line Interface Concepts and Examples
- Contents
- About This Book
- vSphere CLI Command Overviews
- Introduction
- List of Available Host Management Commands
- Targets and Protocols for vCLI Host Management Commands
- Supported Platforms for vCLI Commands
- Commands with an esxcfg Prefix
- ESXCLI Commands Available on Different ESXi Hosts
- Trust Relationship Requirement for ESXCLI Commands
- Using ESXCLI Output
- Connection Options for vCLI Host Management Commands
- Connection Options for DCLI Commands
- vCLI Host Management Commands and Lockdown Mode
- Managing Hosts
- Managing Files
- Managing Storage
- Introduction to Storage
- Examining LUNs
- Detach a Device and Remove a LUN
- Reattach a Device
- Working with Permanent Device Loss
- Managing Paths
- Managing Path Policies
- Scheduling Queues for Virtual Machine I/O
- Managing NFS/NAS Datastores
- Monitor and Manage FibreChannel SAN Storage
- Monitoring and Managing Virtual SAN Storage
- Monitoring vSphere Flash Read Cache
- Monitoring and Managing Virtual Volumes
- Migrating Virtual Machines with svmotion
- Configuring FCoE Adapters
- Scanning Storage Adapters
- Retrieving SMART Information
- Managing iSCSI Storage
- iSCSI Storage Overview
- Protecting an iSCSI SAN
- Command Syntax for esxcli iscsi and vicfg-iscsi
- iSCSI Storage Setup with ESXCLI
- iSCSI Storage Setup with vicfg-iscsi
- Listing and Setting iSCSI Options
- Listing and Setting iSCSI Parameters
- Enabling iSCSI Authentication
- Set Up Ports for iSCSI Multipathing
- Managing iSCSI Sessions
- Managing Third-Party Storage Arrays
- Managing Users
- Managing Virtual Machines
- Managing vSphere Networking
- Introduction to vSphere Networking
- Retrieving Basic Networking Information
- Troubleshoot a Networking Setup
- Setting Up vSphere Networking with vSphere Standard Switches
- Setting Up Virtual Switches and Associating a Switch with a Network Interface
- Retrieving Information About Virtual Switches
- Adding and Deleting Virtual Switches
- Checking, Adding, and Removing Port Groups
- Managing Uplinks and Port Groups
- Setting the Port Group VLAN ID
- Managing Uplink Adapters
- Adding and Modifying VMkernel Network Interfaces
- Managing VMkernel Network Interfaces with ESXCLI
- Add and Configure an IPv4 VMkernel Network Interface with ESXCLI
- Add and Configure an IPv6 VMkernel Network Interface with ESXCLI
- Managing VMkernel Network Interfaces with vicfg-vmknic
- Add and Configure an IPv4 VMkernel Network Interface with vicfg-vmknic
- Add and Configure an IPv6 VMkernel Network Interface with vicfg-vmknic
- Setting Up vSphere Networking with vSphere Distributed Switch
- Managing Standard Networking Services in the vSphere Environment
- Setting the DNS Configuration
- Manage an NTP Server
- Manage the IP Gateway
- Setting Up IPsec
- Manage the ESXi Firewall
- Monitor VXLAN
- Monitoring ESXi Hosts
- Index
No vicfg- command exists for performing these operations. The ESXCLI commands for setting round robin
path options have changed. The commands supported in ESXi 4.x are no longer supported.
Procedure
1 Retrieve path selection settings for a device that is using the round robin PSP.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig get --device naa.xxx
2 Set the path selection. You can specify when the path should change, and whether unoptimized paths
should be included.
- Use --bytes or --iops to specify when the path should change, as in the following examples.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type "bytes" -B 12345 --device naa.xxx
Sets the device specified by --device to switch to the next path each time 12345 bytes have been
sent along the current path.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --type=iops --iops 4200 --device naa.xxx
Sets the device specified by --device to switch after 4200 I/O operations have been performed on a
path.
- Use --useano to specify that the round robin PSP should include paths in the active, unoptimized
state in the round robin set (1), or that the PSP should use active, unoptimized paths only if no
active optimized paths are available (0). If you do not include this option, the PSP includes only
active optimized paths in the round robin path set.
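As a minimal sketch, including active, unoptimized paths in the round robin set and then verifying the change might look as follows. The device identifier naa.xxx is a placeholder; substitute the NAA ID of your LUN, and replace <conn_options> with your connection options.

```shell
# Include active, unoptimized paths in the round robin set for the device.
# "naa.xxx" is a placeholder device identifier.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig set --useano=1 --device naa.xxx

# Verify the current round robin configuration for the device.
esxcli <conn_options> storage nmp psp roundrobin deviceconfig get --device naa.xxx
```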
Scheduling Queues for Virtual Machine I/O
You can use ESXCLI to enable or disable per file I/O scheduling.
By default, vSphere provides a mechanism that creates scheduling queues for each virtual machine file. Each
file has individual bandwidth controls. This mechanism ensures that the I/O for a particular virtual machine
goes into its own separate queue and does not interfere with the I/O of other virtual machines.
This capability is enabled by default. You can turn it off by using the esxcli system settings kernel set -
s isPerFileSchedModelActive option.
- Run esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE to disable per file
scheduling.
- Run esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE to enable per file
scheduling.
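A minimal sketch of checking the current value and then toggling the setting, assuming a local ESXi Shell session (add connection options when running remotely):

```shell
# Show the current value of the per file scheduling kernel setting.
esxcli system settings kernel list -o isPerFileSchedModelActive

# Disable per file I/O scheduling, then re-enable it.
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE
esxcli system settings kernel set -s isPerFileSchedModelActive -v TRUE
```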
Managing NFS/NAS Datastores
ESXi hosts can access a designated NFS volume located on a NAS (Network Attached Storage) server, can
mount the volume, and can use it for their storage needs. You can use NFS volumes to store and boot virtual
machines in the same way that you use VMFS datastores.
Capabilities Supported by NFS/NAS
An NFS client built into the ESXi hypervisor uses the Network File System (NFS) protocol over TCP/IP to
access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and
use it for its storage needs.
vSphere supports versions 3 and 4.1 of the NFS protocol.
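For example, an NFS volume can be mounted as a datastore with the esxcli storage nfs namespace. This is a sketch: the server name, export path, and datastore name below are placeholders, and <conn_options> stands for your connection options.

```shell
# Mount an NFS v3 volume as a datastore named "nfs-datastore1".
# Host, share, and volume names are example placeholders.
esxcli <conn_options> storage nfs add --host nfs-server.example.com --share /exports/vms --volume-name nfs-datastore1

# List mounted NFS datastores to confirm the volume is available.
esxcli <conn_options> storage nfs list
```

For NFS 4.1 volumes, a parallel esxcli storage nfs41 namespace is used instead.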
Chapter 4 Managing Storage
VMware, Inc. 57