The following considerations apply.
- A diagnostic partition cannot be located on an iSCSI LUN accessed through the software iSCSI or dependent hardware iSCSI adapter. For more information about diagnostic partitions with iSCSI, see General Boot from iSCSI SAN Recommendations in the vSphere Storage documentation.
- A standalone host must have a diagnostic partition of 110 MB.
- If multiple hosts share a diagnostic partition on a SAN LUN, configure a large diagnostic partition that the hosts share.
- If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately after the failure. Otherwise, a second host that fails before you collect the diagnostic data of the first host might not be able to save its core dump.
Diagnostic Partition Creation
You can use the vSphere Web Client to create the diagnostic partition on a local disk or on a private or
shared SAN LUN. You cannot use vicfg-dumppart to create the diagnostic partition. The SAN LUN can be
set up with Fibre Channel or hardware iSCSI. SAN LUNs accessed through a software iSCSI initiator are not
supported.
CAUTION If two hosts that share a diagnostic partition fail and save core dumps to the same slot, the core
dumps might be lost.
If a host that uses a shared diagnostic partition fails, reboot the host and extract log files immediately after
the failure.
Diagnostic Partition Management
You can use the vicfg-dumppart or the esxcli system coredump command to query, set, and scan an ESXi
system's diagnostic partitions. The vSphere Storage documentation explains how to set up diagnostic
partitions with the vSphere Web Client and how to manage diagnostic partitions on a Fibre Channel or
hardware iSCSI SAN.
Diagnostic partitions can include, in order of suitability, parallel adapter, block adapter, FC, or hardware
iSCSI partitions. Parallel adapter partitions are the most suitable, and hardware iSCSI partitions are the
least suitable.
IMPORTANT When you list diagnostic partitions, software iSCSI partitions are included. However, SAN
LUNs accessed through a software iSCSI initiator are not supported as diagnostic partitions.
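For example, you can list the candidate diagnostic partitions on a host and check which partition is currently configured and active. The following commands are a minimal sketch; specify the connection options for your environment in place of <conn_options>.
List all partitions on the host that can be used as diagnostic partitions.
esxcli <conn_options> system coredump partition list
Display the currently configured and the currently active diagnostic partition, if any.
esxcli <conn_options> system coredump partition get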
Managing Core Dumps
With esxcli system coredump, you can manage local diagnostic partitions or set up core dump on a remote
server in conjunction with the ESXi Dump Collector.
For information about the ESXi Dump Collector, see the vSphere Networking documentation.
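As a sketch of the remote setup, the esxcli system coredump network commands can point the host at a Dump Collector instance, enable network core dumps, and verify that the collector is reachable. The VMkernel interface vmk0, the server address 192.168.1.50, and port 6500 below are placeholder example values, and exact option names can vary between releases, so verify them on your host before use.
Configure the VMkernel interface and the Dump Collector address.
esxcli <conn_options> system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.50 --server-port 6500
Enable network core dumps and check connectivity to the configured collector.
esxcli <conn_options> system coredump network set --enable true
esxcli <conn_options> system coredump network check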
Manage Local Core Dumps with ESXCLI
You can use ESXCLI to manage local core dumps.
The following example scenario changes the local diagnostic partition by using ESXCLI. Specify one of the
options listed in “Connection Options for vCLI Host Management Commands,” on page 19 in place of
<conn_options>.
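A sketch of the commands such a scenario typically involves appears below. The partition name naa.<id>:1 is a placeholder; your host reports its own candidate partitions in the list output.
List the partitions that can be used as diagnostic partitions.
esxcli <conn_options> system coredump partition list
Deactivate the current diagnostic partition.
esxcli <conn_options> system coredump partition set --unconfigure
Configure and activate the new diagnostic partition.
esxcli <conn_options> system coredump partition set --partition=naa.<id>:1
esxcli <conn_options> system coredump partition set --enable=true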