6.0.1
Table Of Contents
- vSphere Storage
- Contents
- About vSphere Storage
- Updated Information
- Introduction to Storage
- Overview of Using ESXi with a SAN
- Using ESXi with Fibre Channel SAN
- Configuring Fibre Channel Storage
- Configuring Fibre Channel over Ethernet
- Booting ESXi from Fibre Channel SAN
- Booting ESXi with Software FCoE
- Best Practices for Fibre Channel Storage
- Using ESXi with iSCSI SAN
- Configuring iSCSI Adapters and Storage
- ESXi iSCSI SAN Requirements
- ESXi iSCSI SAN Restrictions
- Setting LUN Allocations for iSCSI
- Network Configuration and Authentication
- Set Up Independent Hardware iSCSI Adapters
- About Dependent Hardware iSCSI Adapters
- Dependent Hardware iSCSI Considerations
- Configure Dependent Hardware iSCSI Adapters
- About the Software iSCSI Adapter
- Modify General Properties for iSCSI Adapters
- Setting Up iSCSI Network
- Using Jumbo Frames with iSCSI
- Configuring Discovery Addresses for iSCSI Adapters
- Configuring CHAP Parameters for iSCSI Adapters
- Configuring Advanced Parameters for iSCSI
- iSCSI Session Management
- Booting from iSCSI SAN
- Best Practices for iSCSI Storage
- Managing Storage Devices
- Storage Device Characteristics
- Understanding Storage Device Naming
- Storage Refresh and Rescan Operations
- Identifying Device Connectivity Problems
- Edit Configuration File Parameters
- Enable or Disable the Locator LED on Storage Devices
- Working with Flash Devices
- About VMware vSphere Flash Read Cache
- Working with Datastores
- Understanding VMFS Datastores
- Understanding Network File System Datastores
- Creating Datastores
- Managing Duplicate VMFS Datastores
- Upgrading VMFS Datastores
- Increasing VMFS Datastore Capacity
- Administrative Operations for Datastores
- Set Up Dynamic Disk Mirroring
- Collecting Diagnostic Information for ESXi Hosts on a Storage Device
- Checking Metadata Consistency with VOMA
- Configuring VMFS Pointer Block Cache
- Understanding Multipathing and Failover
- Raw Device Mapping
- Working with Virtual Volumes
- Virtual Machine Storage Policies
- Upgrading Legacy Storage Profiles
- Understanding Virtual Machine Storage Policies
- Working with Virtual Machine Storage Policies
- Creating and Managing VM Storage Policies
- Storage Policies and Virtual Machines
- Default Storage Policies
- Assign Storage Policies to Virtual Machines
- Change Storage Policy Assignment for Virtual Machine Files and Disks
- Monitor Storage Compliance for Virtual Machines
- Check Compliance for a VM Storage Policy
- Find Compatible Storage Resource for Noncompliant Virtual Machine
- Reapply Virtual Machine Storage Policy
- Filtering Virtual Machine I/O
- VMkernel and Storage
- Storage Hardware Acceleration
- Hardware Acceleration Benefits
- Hardware Acceleration Requirements
- Hardware Acceleration Support Status
- Hardware Acceleration for Block Storage Devices
- Hardware Acceleration on NAS Devices
- Hardware Acceleration Considerations
- Storage Thick and Thin Provisioning
- Using Storage Providers
- Using vmkfstools
- vmkfstools Command Syntax
- vmkfstools Options
- -v Suboption
- File System Options
- Virtual Disk Options
- Supported Disk Formats
- Creating a Virtual Disk
- Example for Creating a Virtual Disk
- Initializing a Virtual Disk
- Inflating a Thin Virtual Disk
- Removing Zeroed Blocks
- Converting a Zeroedthick Virtual Disk to an Eagerzeroedthick Disk
- Deleting a Virtual Disk
- Renaming a Virtual Disk
- Cloning or Converting a Virtual Disk or RDM
- Example for Cloning or Converting a Virtual Disk
- Migrate Virtual Machines Between Different VMware Products
- Extending a Virtual Disk
- Upgrading Virtual Disks
- Creating a Virtual Compatibility Mode Raw Device Mapping
- Example for Creating a Virtual Compatibility Mode RDM
- Creating a Physical Compatibility Mode Raw Device Mapping
- Listing Attributes of an RDM
- Displaying Virtual Disk Geometry
- Checking and Repairing Virtual Disks
- Checking Disk Chain for Consistency
- Storage Device Options
- Index
Use the Predictive Scheme to Make LUN Decisions
When setting up storage for ESXi systems, before creating VMFS datastores, you must decide on the size
and number of LUNs to provision. You can experiment using the predictive scheme.
Procedure
1 Provision several LUNs with different storage characteristics.
2 Create a VMFS datastore on each LUN, labeling each datastore according to its characteristics.
3 Create virtual disks to contain the data for virtual machine applications in the VMFS datastores created
on LUNs with the appropriate RAID level for the applications' requirements.
4 Use disk shares to distinguish high-priority from low-priority virtual machines.
Note: Disk shares are relevant only within a given host. The shares assigned to virtual machines on
one host have no effect on virtual machines on other hosts.
5 Run the applications to determine whether virtual machine performance is acceptable.
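These steps can also be scripted. The following is a minimal PowerCLI sketch; the vCenter Server address, host name, canonical device names (naa.*), datastore labels, and virtual machine name are all placeholders that you replace with your own values.

# Connect to vCenter Server (the address is a placeholder).
Connect-VIServer -Server vcenter.example.com
$esx = Get-VMHost -Name esx01.example.com

# Steps 1-2: create a labeled VMFS datastore on each provisioned LUN.
# The naa.* canonical names stand in for LUNs with different RAID levels.
New-Datastore -VMHost $esx -Vmfs -Name 'RAID10-HighPerf' -Path naa.60000000000000000000000000000001
New-Datastore -VMHost $esx -Vmfs -Name 'RAID5-Capacity' -Path naa.60000000000000000000000000000002

# Step 3: place the application's virtual disk on the datastore whose
# RAID level matches the application's requirements.
$vm = Get-VM -Name app-vm01
New-HardDisk -VM $vm -CapacityGB 40 -Datastore (Get-Datastore -Name 'RAID10-HighPerf')

# Step 4: raise disk shares for the high-priority virtual machine.
# Shares are relevant only within a single host.
Set-VMResourceConfiguration -Configuration (Get-VMResourceConfiguration -VM $vm) `
    -Disk (Get-HardDisk -VM $vm) -DiskSharesLevel High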
Use the Adaptive Scheme to Make LUN Decisions
When setting up storage for ESXi hosts, before creating VMFS datastores, you must decide on the number
and size of LUNs to provision. You can experiment using the adaptive scheme.
Procedure
1 Provision a large LUN (RAID 1+0 or RAID 5), with write caching enabled.
2 Create a VMFS on that LUN.
3 Create four or five virtual disks on the VMFS.
4 Run the applications to determine whether disk performance is acceptable.
If performance is acceptable, you can place additional virtual disks on the VMFS. If performance is not
acceptable, create a new, large LUN, possibly with a different RAID level, and repeat the process. Use
migration so that you do not lose virtual machine data when you recreate the LUN.
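As with the predictive scheme, these steps can be sketched in PowerCLI. The names below are again placeholders, and Move-VM (Storage vMotion) stands in for the migration mentioned above.

$esx = Get-VMHost -Name esx01.example.com

# Steps 1-2: create one VMFS datastore on the single large LUN
# (the naa.* device name is a placeholder).
$ds = New-Datastore -VMHost $esx -Vmfs -Name 'Large-Adaptive' -Path naa.60000000000000000000000000000003

# Step 3: create four or five virtual disks on that datastore.
$vm = Get-VM -Name app-vm01
1..5 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB 40 -Datastore $ds }

# If performance is not acceptable, migrate the virtual machine to other
# storage before recreating the LUN with a different RAID level.
Move-VM -VM $vm -Datastore (Get-Datastore -Name 'Other-Datastore')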
Choosing Virtual Machine Locations
When you optimize performance for your virtual machines, storage location is an important factor. A
trade-off always exists between expensive storage that offers high performance and high availability
and storage with lower cost and lower performance.
Storage can be divided into different tiers depending on a number of factors:
- High Tier. Offers high performance and high availability. Might offer built-in snapshots to facilitate
backups and point-in-time (PiT) restorations. Supports replication, full storage processor redundancy,
and SAS drives. Uses high-cost spindles.
- Mid Tier. Offers mid-range performance, lower availability, some storage processor redundancy, and
SCSI or SAS drives. May offer snapshots. Uses medium-cost spindles.
- Lower Tier. Offers low performance, little internal storage redundancy. Uses low-end SCSI drives or
SATA (serial low-cost spindles).
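One way to keep these tier distinctions visible when you choose virtual machine locations is to tag each datastore with its tier. The following PowerCLI sketch is illustrative only; the category, tag, and datastore names are assumptions, not names from this guide.

# Create a tag category and one tag per storage tier
# (all names here are illustrative placeholders).
New-TagCategory -Name StorageTier -Cardinality Single -EntityType Datastore
'HighTier','MidTier','LowerTier' | ForEach-Object { New-Tag -Name $_ -Category StorageTier }

# Record a datastore's tier so the cost/performance trade-off is visible
# when you decide where a virtual machine's disks belong.
New-TagAssignment -Tag (Get-Tag -Name HighTier) -Entity (Get-Datastore -Name 'HighPerf-Datastore')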