Deployment Guide
Table of Contents
- Dell EMC PowerVault ME4 Series Storage System Deployment Guide
- Contents
- Before you begin
- Mount the enclosures in the rack
- Connect to the management network
- Cable host servers to the storage system
- Cabling considerations
- Connecting the enclosure to hosts
- Host connection
- Connect power cables and power on the storage system
- Perform system and storage setup
- Record storage system information
- Using guided setup
- Web browser requirements and setup
- Access the PowerVault Manager
- Update firmware
- Use guided setup in the PowerVault Manager Welcome panel
- Perform host setup
- Host system requirements
- Windows hosts
- Configuring a Windows host with FC HBAs
- Configuring a Windows host with iSCSI network adapters
- Configuring a Windows host with SAS HBAs
- Linux hosts
- Configuring a Linux host with FC HBAs
- Configure a Linux host with iSCSI network adapters
- Attach a Linux host with iSCSI network adapters to the storage system
- Assign IP addresses for each network adapter connecting to the iSCSI network
- Register the Linux host with iSCSI network adapters and create volumes
- Enable and configure DM Multipath on the Linux host with iSCSI network adapters
- Create a Linux file system on the volumes
- SAS host server configuration for Linux
- VMware ESXi hosts
- Fibre Channel host server configuration for VMware ESXi
- iSCSI host server configuration for VMware ESXi
- Attach an ESXi host with network adapters to the storage system
- Configure the VMware ESXi VMkernel
- Configure the software iSCSI adapter on the ESXi host
- Register an ESXi host with a configured software iSCSI adapter and create and map volumes
- Enable multipathing on an ESXi host with iSCSI volumes
- Volume rescan and datastore creation for an ESXi host with iSCSI network adapters
- SAS host server configuration for VMware ESXi
- Citrix XenServer hosts
- Fibre Channel host server configuration for Citrix XenServer
- iSCSI host server configuration for Citrix XenServer
- Attach a XenServer host with network adapters to the storage system
- Configure a software iSCSI adapter on a XenServer host
- Configure the iSCSI IQN on a XenServer host
- Enable multipathing on a XenServer host
- Register a XenServer host with a software iSCSI adapter and create volumes
- Create a Storage Repository for a volume on a XenServer host with a software iSCSI adapter
- SAS host server configuration for Citrix XenServer
- Troubleshooting and problem solving
- Locate the service tag
- Operator's (Ops) panel LEDs
- Initial start-up problems
- Cabling for replication
- SFP+ transceiver for FC/iSCSI ports
- System Information Worksheet
- Setting network port IP addresses using the CLI port and serial cable
Cabling for replication
The following sections describe how to cable storage systems for replication:
Topics:
• Connecting two storage systems to replicate volumes
• Host ports and replication
• Example cabling for replication
• Isolating replication faults
Connecting two storage systems to replicate volumes
The replication feature performs asynchronous replication of block-level data from a volume in a primary system to a volume in a
secondary system.
Replication creates an internal snapshot of the primary volume, then uses FC or iSCSI links to copy the data that has
changed since the last replication to the secondary system.
The two associated standard volumes form a replication set, and only the primary volume (source of data) can be mapped for
access by a server. Both systems must be connected through switches to the same fabric or network (no direct attach). The
server accessing the replication set is connected to the primary system. If the primary system goes offline, a connected server
can access the replicated data from the secondary system.
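After the systems are cabled and can reach each other over the fabric or network, the peer connection and replication set can be created from the CLI of the primary system (or from PowerVault Manager). The following is a minimal sketch; the peer name Peer1, volume Vol1, set name RepSet1, the address, and the credentials are placeholders, and the exact parameters should be verified in the ME4 Series Storage System CLI Reference Guide:
# create peer-connection remote-port-address 10.10.10.100 remote-username manage remote-password <password> Peer1
# create replication-set peer-connection Peer1 primary-volume Vol1 RepSet1
# replicate RepSet1
The replicate command starts an on-demand replication; replications can also run on a schedule.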
Systems can be cabled to support replication using CNC-based and 10Gbase-T systems on the same network, or on different
networks.
NOTE: SAS systems do not support replication.
As you consider the physical connections of your system, keep several important points in mind:
● Ensure that controllers have connectivity between systems, whether the destination system is colocated or remotely
located.
● Qualified Converged Network Controller options can be used for host I/O or replication, or both.
● The storage system does not provide for specific assignment of ports for replication. However, this configuration can be
accomplished using virtual LANs for iSCSI and zones for FC, or by using physically separate infrastructure (a zoning sketch
follows this list).
● For remote replication, ensure that all ports that are assigned for replication can communicate with the replication system
by using the query peer-connection CLI command (an example follows this list). See the ME4 Series Storage System CLI
Reference Guide for more information.
● Allow enough ports for replication so that the system can balance the load across those ports as I/O demands rise
and fall. If controller A owns some of the volumes that are replicated and controller B owns other volumes that are
replicated, then enable at least one port for replication on each controller module. You may need to enable more than one
port per controller module depending on replication traffic load.
● For the sake of system security, do not unnecessarily expose the controller module network port to an external network
connection.
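As noted in the list above, FC replication traffic can be segregated with switch zoning. The following Brocade Fabric OS-style sketch creates a zone containing only the controller-port WWPNs of the two systems that are assigned for replication; the zone name, configuration name, and WWPNs shown are placeholders, and other switch vendors have equivalent steps:
zonecreate "ME4_replication", "20:70:00:c0:ff:28:01:02; 24:70:00:c0:ff:35:06:04"
cfgadd "fabric_cfg", "ME4_replication"
cfgenable "fabric_cfg"
For iSCSI, the same isolation is typically achieved by placing the replication ports in a dedicated VLAN on the Ethernet switches.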
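The query peer-connection check referenced above can be run from either system, against the remote system's port address before the peer connection is created or against the peer-connection name afterward. A sketch with a placeholder address; verify the exact syntax in the ME4 Series Storage System CLI Reference Guide:
# query peer-connection 10.10.10.100
The output identifies the remote system and the local ports that can reach it, which confirms that the cabling and the zoning or VLAN configuration allow replication traffic to flow.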
Conceptual cabling examples are provided for cabling systems on the same network and for cabling systems on different
networks.
NOTE: The controller module firmware must be compatible on all systems that are used for replication.
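One way to compare firmware levels is the show versions CLI command, run on each system (a sketch; the output varies by bundle):
# show versions detail
Confirm that the bundle version reported on each system is the same, or a combination that the release notes for the installed release identify as compatible, before creating the peer connection.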