6.0.1
Table Of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
d Click Add Row and add the following parameters:

     Name                                  Value
     scsi#.returnNoConnectDuringAPD        True
     scsi#.returnBusyOnNoConnectStatus     False

e Click OK.
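The Add Row dialog writes these parameters to the virtual machine's configuration (.vmx) file. As a sketch, a VM whose first SCSI controller is scsi0 would end up with entries like the following (the controller number scsi0 is illustrative; substitute the controller used by your VM):

```
scsi0.returnNoConnectDuringAPD = "TRUE"
scsi0.returnBusyOnNoConnectStatus = "FALSE"
```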
Collecting Diagnostic Information for ESXi Hosts on a Storage Device
During a host failure, ESXi must be able to save diagnostic information to a preconfigured location for
diagnostic and technical support purposes.
Typically, a partition to collect diagnostic information, also called a VMkernel core dump, is created on a
local storage device during ESXi installation. You can override this default behavior if, for example, you use
shared storage devices instead of local storage. To prevent automatic formatting of local devices, detach the
devices from the host before you install ESXi and power on the host for the first time. You can later set up a
location for collecting diagnostic information on a local or remote storage device.
When you use storage devices, you can select between two options of setting up core dump collection. You
can use a preconfigured diagnostic partition on a storage device or use a file on a VMFS datastore.
- Set Up a Device Partition as Core Dump Location on page 175
  Create a diagnostic partition for your ESXi host.
- Set Up a File as Core Dump Location on page 176
  If the size of your available core dump partition is insufficient, you can configure ESXi to generate the
  core dump as a file.
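As a sketch of the file-based option, the following ESXi Shell commands create and activate a core dump file on a VMFS datastore; the datastore name datastore1 and file name coredump are placeholders for your environment:

```shell
# Create a core dump file on a VMFS datastore
esxcli system coredump file add --datastore datastore1 --file coredump

# Activate it, letting the host select an accessible dump file
esxcli system coredump file set --smart --enable true

# Verify which dump file is configured and active
esxcli system coredump file list
```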
Set Up a Device Partition as Core Dump Location
Create a diagnostic partition for your ESXi host.
When you create a diagnostic partition, the following considerations apply:
- You cannot create a diagnostic partition on an iSCSI LUN accessed through the software iSCSI or
  dependent hardware iSCSI adapter. For more information about diagnostic partitions with iSCSI, see
  "General Boot from iSCSI SAN Recommendations," on page 107.
- You cannot create a diagnostic partition on a software FCoE LUN.
- Unless you are using diskless servers, set up a diagnostic partition on local storage.
- Each host must have a diagnostic partition of 2.5 GB. If multiple hosts share a diagnostic partition on a
  SAN LUN, the partition must be large enough to accommodate the core dumps of all hosts.
- If a host that uses a shared diagnostic partition fails, reboot the host and extract the log files
  immediately after the failure. Otherwise, a second host that fails before you collect the diagnostic data
  of the first host might not be able to save its core dump.
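As a quick sizing check for a shared diagnostic partition, using the 2.5 GB-per-host figure above (the host count of 4 is illustrative):

```shell
hosts=4               # number of ESXi hosts sharing the SAN LUN partition
per_host_mb=2560      # 2.5 GB per host, expressed in MB
echo "$(( hosts * per_host_mb )) MB minimum partition size"
```

For four hosts this yields 10240 MB, i.e. a 10 GB partition at minimum.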
Procedure
1 Browse to the host in the vSphere Web Client navigator.
2 Right-click the host, and select Add Diagnostic Partition.
If you do not see this option, the host already has a diagnostic partition.
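If you prefer the command line, the same configuration can be sketched in the ESXi Shell; the partition specifier naa.xxx:1 below is a placeholder for a real device and partition number from the list output:

```shell
# List partitions that the host can use for core dumps
esxcli system coredump partition list

# Let the host pick an accessible diagnostic partition automatically
esxcli system coredump partition set --enable true --smart

# Or name a specific partition explicitly, then enable it
# esxcli system coredump partition set --partition "naa.xxx:1"
# esxcli system coredump partition set --enable true
```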