2.2.1 Automated tiered storage
Automated tiered storage (ATS) moves data from one class of disks to a more appropriate class of disks
based on data access patterns, with no manual configuration necessary. Frequently accessed,
hot data can move to disks with higher performance, while infrequently accessed, cool data can move to disks
with lower performance and lower costs.
Each virtual disk group, depending on the type of disks it uses, is automatically assigned to one of the
following tiers:
- Performance: This highest tier uses SSDs, providing the best performance but also the highest cost.
- Standard: This middle tier uses enterprise-class SAS hard drives, which provide good performance with mid-level cost and capacity.
- Archive: This lowest tier uses nearline SAS hard drives, which provide the lowest performance with the lowest cost and highest capacity.
A volume’s tier affinity setting tunes the tier-migration algorithm when the volume is created or modified, so
that the volume data automatically moves to a specific tier, if possible. If space is not available in a volume's
preferred tier, another tier is used. There are three volume tier affinity settings (a conceptual sketch of the
placement logic follows this list):
- No affinity: This is the default setting. It uses the highest performing tiers available first and only uses the archive tier when space is exhausted in the other tiers. Volume data swaps into higher performing tiers based on frequency of access and tier space availability.
- Archive: This setting prioritizes volume data to the lowest performing tier available. Volume data can move to higher performing tiers based on frequency of access and available space in the tiers.
- Performance: This setting prioritizes volume data to the higher performing tiers. If no space is available, lower performing tier space is used. Performance-affinity volume data swaps into higher tiers based on frequency of access or when space is made available.
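The following Python sketch is illustrative only and is not part of the ME4 Series firmware or CLI. The tier names mirror the list above, but the choose_tier helper, the free-space structure, and the fallback order are assumptions intended only to show how an affinity setting biases where data lands when space allows, while still falling back to another tier when it does not.

```python
# Illustrative sketch only: models how tier affinity could bias page placement.
# Tier names follow the document; the data structures and function are hypothetical.

TIER_ORDER = ["Performance", "Standard", "Archive"]  # highest to lowest performing


def choose_tier(affinity, free_pages):
    """Pick a tier for new data given an affinity setting and per-tier free space.

    affinity: "no-affinity", "archive", or "performance"
    free_pages: dict mapping tier name -> free page count
    """
    if affinity == "archive":
        # Archive affinity prefers the lowest performing tier and falls back upward.
        preference = list(reversed(TIER_ORDER))
    else:
        # "performance" and "no-affinity" both start at the highest performing tiers;
        # they differ mainly in how readily data later swaps upward by access frequency.
        preference = TIER_ORDER
    for tier in preference:
        if free_pages.get(tier, 0) > 0:
            return tier
    raise RuntimeError("pool is out of space in every tier")


# Example: with the Performance tier full, a performance-affinity volume
# still lands in the Standard tier rather than failing.
print(choose_tier("performance", {"Performance": 0, "Standard": 512, "Archive": 4096}))
```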
2.2.2 Read flash cache
Unlike tiering, where a single copy of specific blocks of data resides in either spinning disks or SSDs, the read
flash cache feature uses one or two SSD disks per pool as a read cache for hot or frequently read pages only.
Read cache does not add to the overall capacity of the pool to which it has been added, nor does it improve
write performance. Read flash cache can be removed from the pool without any adverse effect on the volumes
and their data in the pool, other than a possible impact on read-access performance. A separate copy of the data is
always maintained on the HDDs. Taken together, these attributes have several advantages:
- Controller read cache is effectively extended by two orders of magnitude or more.
- The performance cost of moving data to read cache is lower than a full migration of data from a lower tier to a higher tier.
- Read cache is not fault tolerant, lowering system cost.
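As an illustration of the read path just described, the Python sketch below is a conceptual model only, not ME4 firmware behavior or an API: reads are served from the SSD read cache when a page is hot, otherwise from the HDD tiers; writes and the authoritative copy always stay on the HDDs, so discarding the cache loses only the read-performance benefit. The class name, page-store structures, and promotion threshold are assumptions.

```python
# Illustrative sketch only: a read cache in front of HDD-resident pages.
# The cache holds copies; the HDD store always keeps the authoritative data.

class ReadFlashCacheSketch:
    def __init__(self, hdd_pages, promote_after=2):
        self.hdd_pages = hdd_pages          # authoritative copy of every page
        self.ssd_cache = {}                 # page_id -> cached copy (reads only)
        self.read_counts = {}               # page_id -> reads observed so far
        self.promote_after = promote_after  # hypothetical "hot page" threshold

    def read(self, page_id):
        if page_id in self.ssd_cache:       # cache hit: served from SSD
            return self.ssd_cache[page_id]
        data = self.hdd_pages[page_id]      # cache miss: served from the HDD tiers
        self.read_counts[page_id] = self.read_counts.get(page_id, 0) + 1
        if self.read_counts[page_id] >= self.promote_after:
            self.ssd_cache[page_id] = data  # copy (not move) the hot page to SSD
        return data

    def write(self, page_id, data):
        self.hdd_pages[page_id] = data      # writes always land on the HDD tiers
        self.ssd_cache.pop(page_id, None)   # invalidate any stale cached copy

    def drop_cache(self):
        self.ssd_cache.clear()              # safe: HDDs still hold all data
```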
2.3 Asymmetric Logical Unit Access
ME4 Series storage uses Unified LUN Presentation (ULP), which can expose all LUNs through all host ports
on both controllers. The storage system appears as an active-active system to the host. The host can choose
any available path to access a LUN regardless of disk-group ownership. When ULP is in use, the controllers'