• Performance – This setting prioritizes placement of volume data in the higher tiers of service. If no space is available in those tiers, space in a lower performing tier is used. Volume data moves into higher performing tiers based on the frequency of access and the available space in the tiers.
NOTE: The Performance affinity setting does not require an SSD tier and uses the highest performance tier
available.
• Archive – This setting prioritizes placement of volume data in the lowest tier of service. Volume data can move to higher performing tiers based on the frequency of access and the available space in the tiers.
NOTE: Volume tier affinity is not the same as pinning; it does not restrict data to a given tier or capacity. Data on a volume with Archive affinity can still be promoted to a performance tier when the host application accesses that data frequently.
Volume tier affinity strategies
Volume tier affinity acts as a guide to the system on where to place data for a given volume in the available tiers.
The standard strategy is to prefer the highest spinning-disk tiers for new sequential writes and the highest tier available (including SSD) for new random writes. As the host application accesses the data, it is moved to the most appropriate tier based on demand: frequently accessed data is promoted toward the highest performance tier, and infrequently accessed data is demoted to the lower, spinning-disk-based tiers. The standard strategy is followed for data on volumes set to No Affinity.
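The initial-placement decision can be pictured with a short Python sketch. The tier names, the initial_tier function, and the free-space map below are assumptions made for illustration only; they are not part of the storage system's interface.

```python
# Illustrative sketch of the standard (No Affinity) initial-placement strategy.
# Tier names and the free_space structure are assumptions for this example.
TIERS_HIGH_TO_LOW = ["ssd", "enterprise_sas", "midline_sas"]  # hypothetical tiers

def initial_tier(is_sequential_write: bool, free_space: dict) -> str:
    """Choose the tier that receives a new write under the standard strategy."""
    if is_sequential_write:
        # New sequential writes prefer the highest spinning-disk tier.
        candidates = [t for t in TIERS_HIGH_TO_LOW if t != "ssd"]
    else:
        # New random writes prefer the highest tier available, including SSD.
        candidates = TIERS_HIGH_TO_LOW
    for tier in candidates:
        if free_space.get(tier, 0) > 0:
            return tier
    raise RuntimeError("no free space in any tier")

# Example: a random write lands on SSD, a sequential write on enterprise SAS.
space = {"ssd": 100, "enterprise_sas": 500, "midline_sas": 2000}
assert initial_tier(False, space) == "ssd"
assert initial_tier(True, space) == "enterprise_sas"
```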
For data on volumes set to the Performance affinity, the standard strategy is followed for all new writes. However, subsequent access to that data is held to a lower threshold for promotion, which makes it more likely for that data to be available in the higher performance tiers. Frequently accessed data with the Performance affinity also receives preferential treatment at the SSD tier: data with the Archive or No Affinity setting is demoted out of the SSD tier to make room for data with the Performance affinity. The Performance affinity is useful for volume data that you want to ensure receives priority treatment for promotion to, and retention in, your highest performance tier.
For volumes set to the Archive affinity, all new writes are initially placed in the archive tier. If no space is available in the archive tier, new writes are placed in the next higher tier that has space. As that data is accessed more often, it can be promoted to the performance tiers. However, the data has a lower threshold for demotion: it is moved out of the highest performance SSD tier when frequently accessed data needs to be promoted up from a lower tier.
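How affinity biases the promotion and demotion decisions can be sketched as follows. The threshold values and function names are invented for the example and do not reflect the system's internal tuning.

```python
# Hypothetical thresholds only; the real promotion/demotion heuristics are internal.
PROMOTE_AFTER_ACCESSES = {"none": 8, "performance": 4, "archive": 8}
DEMOTE_AFTER_IDLE_PERIODS = {"none": 4, "performance": 6, "archive": 2}

def should_promote(affinity: str, recent_accesses: int) -> bool:
    # Performance affinity uses a lower promotion threshold, so its hot data
    # reaches the higher tiers sooner.
    return recent_accesses >= PROMOTE_AFTER_ACCESSES[affinity]

def should_demote(affinity: str, idle_periods: int) -> bool:
    # Archive affinity uses a lower demotion threshold, so its data is moved
    # out of the SSD tier first when hotter data needs the space; Performance
    # affinity data is retained in the top tier longer.
    return idle_periods >= DEMOTE_AFTER_IDLE_PERIODS[affinity]
```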
About initiators, hosts, and host groups
An initiator represents an external port to which the storage system is connected. The external port may be a port in an I/O adapter such
as an FC HBA in a server.
The controllers automatically discover initiators that have sent an inquiry command or a report luns command to the storage
system, which typically happens when a host boots up or rescans for devices. When the command is received, the system saves the
initiator ID. You can also manually create entries for initiators. For example, you might want to define an initiator before a controller port is
physically connected through a switch to a host.
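Conceptually, the discovery behavior resembles a small registry keyed by initiator ID, as in the hypothetical sketch below; the function names and structure are assumptions for illustration, not the controller's actual logic.

```python
# Record an initiator the first time it sends an INQUIRY or REPORT LUNS command;
# entries can also be created manually before the port is cabled.
DISCOVERY_OPCODES = {"INQUIRY", "REPORT LUNS"}

initiators = {}  # initiator ID -> how the entry was created

def on_scsi_command(initiator_id: str, opcode: str) -> None:
    if opcode in DISCOVERY_OPCODES:
        initiators.setdefault(initiator_id, "discovered")

def create_initiator_entry(initiator_id: str) -> None:
    # Manual definition, for example before the host is connected through a switch.
    initiators.setdefault(initiator_id, "manual")
```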
You can assign a nickname to an initiator to make it easy to recognize for volume mapping. For a named initiator, you can also select a
profile specific to the operating system for that initiator. A maximum of 512 names can be assigned.
For ease of management, you can group 1 to 128 initiators that represent a server into a host. You can also group 1 to 256 hosts into a
host group. Grouping enables you to perform mapping operations for all initiators in a host, or for all initiators and hosts in a group, instead of
for each initiator or host individually. An initiator must have a nickname to be added to a host, and an initiator can be a member of only one
host. A host can be a member of only one group. A host cannot have the same name as another host, but can have the same name as any
initiator. A host group cannot have the same name as another host group, but can have the same name as any host. A maximum of 32
host groups can exist.
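The grouping rules and limits above can be restated as a small validation sketch. The classes and method names below are hypothetical; only the numeric limits and membership rules come from this section.

```python
# Limits restated from this section.
MAX_INITIATORS_PER_HOST = 128
MAX_HOSTS_PER_GROUP = 256
MAX_HOST_GROUPS = 32        # system-wide limit, not enforced in this sketch
MAX_INITIATOR_NAMES = 512   # system-wide limit, not enforced in this sketch

class Host:
    def __init__(self, name):
        self.name = name
        self.initiator_nicknames = []
        self.group = None  # a host can be a member of only one host group

    def add_initiator(self, nickname):
        # An initiator must have a nickname before it can be added to a host.
        if not nickname:
            raise ValueError("an initiator must be named before joining a host")
        if len(self.initiator_nicknames) >= MAX_INITIATORS_PER_HOST:
            raise ValueError("a host can contain at most 128 initiators")
        self.initiator_nicknames.append(nickname)

class HostGroup:
    def __init__(self, name):
        self.name = name
        self.hosts = []

    def add_host(self, host):
        if host.group is not None:
            raise ValueError("a host can be a member of only one group")
        if len(self.hosts) >= MAX_HOSTS_PER_GROUP:
            raise ValueError("a host group can contain at most 256 hosts")
        host.group = self
        self.hosts.append(host)
```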
A storage system with iSCSI ports can be protected from unauthorized access via iSCSI by enabling Challenge Handshake Authentication
Protocol (CHAP). CHAP authentication occurs during an attempt by a host to log in to the system. This authentication requires an
identifier for the host and a shared secret between the host and the system. Optionally, the storage system can also be required to
authenticate itself to the host. This is called mutual CHAP. Steps involved in enabling CHAP include:
• Decide on host node names (identifiers) and secrets. The host node name is its iSCSI Qualified Name (IQN). A secret must have 12–16
characters.
• Define CHAP entries in the storage system.
• Enable CHAP on the storage system. Note that this applies to all iSCSI hosts, in order to avoid security exposures. Any current host
connections will be terminated when CHAP is enabled and will need to be re-established using a CHAP login.
• Define the CHAP secret in the host iSCSI initiator.
• Establish a new connection to the storage system using CHAP. The host should then be visible to the system, along with the ports through which the connections were made.
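As an illustration of the challenge-response exchange that CHAP performs during login, the following sketch computes and verifies an MD5 response in the style of RFC 1994. The function names, the example IQN, and the stored-secret dictionary are assumptions for the example, not the storage system's implementation.

```python
import hashlib
import os

def validate_secret(secret: str) -> None:
    # This section requires a shared secret of 12-16 characters.
    if not 12 <= len(secret) <= 16:
        raise ValueError("CHAP secret must have 12-16 characters")

def chap_response(identifier: int, secret: str, challenge: bytes) -> bytes:
    # CHAP (RFC 1994) response: MD5 over identifier || secret || challenge.
    return hashlib.md5(bytes([identifier]) + secret.encode() + challenge).digest()

# Hypothetical CHAP entry defined in the storage system: node name (IQN) -> secret.
chap_entries = {"iqn.1992-01.com.example:host1": "hostsecret123"}

def authenticate(iqn: str, identifier: int, challenge: bytes, response: bytes) -> bool:
    # The system challenges the host and checks the digest of the shared secret.
    secret = chap_entries.get(iqn)
    return secret is not None and chap_response(identifier, secret, challenge) == response

# Example exchange: the system issues a challenge, the host answers, login succeeds.
validate_secret("hostsecret123")
challenge = os.urandom(16)
answer = chap_response(1, "hostsecret123", challenge)
assert authenticate("iqn.1992-01.com.example:host1", 1, challenge, answer)
```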
If it becomes necessary to add more hosts after CHAP is enabled, additional CHAP node names and secrets can be added. If a host attempts to log in to the storage system, it will become visible to the system, even if the full login is not successful due to incompatible CHAP definitions.