Deployment Guide
Table of Contents
- Dell EMC PowerVault ME4 Series Storage System Deployment Guide
- Contents
- Before you begin
- Mount the enclosures in the rack
- Connect to the management network
- Cable host servers to the storage system
- Cabling considerations
- Connecting the enclosure to hosts
- Host connection
- Connect power cables and power on the storage system
- Perform system and storage setup
- Record storage system information
- Using guided setup
- Web browser requirements and setup
- Access the PowerVault Manager
- Update firmware
- Use guided setup in the PowerVault Manager Welcome panel
- Perform host setup
- Host system requirements
- Windows hosts
- Configuring a Windows host with FC HBAs
- Configuring a Windows host with iSCSI network adapters
- Configuring a Windows host with SAS HBAs
- Linux hosts
- Configuring a Linux host with FC HBAs
- Configure a Linux host with iSCSI network adapters
- Attach a Linux host with iSCSI network adapters to the storage system
- Assign IP addresses for each network adapter connecting to the iSCSI network
- Register the Linux host with iSCSI network adapters and create volumes
- Enable and configure DM Multipath on the Linux host with iSCSI network adapters
- Create a Linux file system on the volumes
- SAS host server configuration for Linux
- VMware ESXi hosts
- Fibre Channel host server configuration for VMware ESXi
- iSCSI host server configuration for VMware ESXi
- Attach an ESXi host with network adapters to the storage system
- Configure the VMware ESXi VMkernel
- Configure the software iSCSI adapter on the ESXi host
- Register an ESXi host with a configured software iSCSI adapter and create and map volumes
- Enable multipathing on an ESXi host with iSCSI volumes
- Volume rescan and datastore creation for an ESXi host with iSCSI network adapters
- SAS host server configuration for VMware ESXi
- Citrix XenServer hosts
- Fibre Channel host server configuration for Citrix XenServer
- iSCSI host server configuration for Citrix XenServer
- Attach a XenServer host with network adapters to the storage system
- Configure a software iSCSI adapter on a XenServer host
- Configure the iSCSI IQN on a XenServer host
- Enable Multipathing on a XenServer host
- Register a XenServer host with a software iSCSI adapter and create volumes
- Create a Storage Repository for a volume on a XenServer host with a software iSCSI adapter
- SAS host server configuration for Citrix XenServer
- Troubleshooting and problem solving
- Locate the service tag
- Operator's (Ops) panel LEDs
- Initial start-up problems
- Cabling for replication
- SFP+ transceiver for FC/iSCSI ports
- System Information Worksheet
- Setting network port IP addresses using the CLI port and serial cable
12 Gb HD mini-SAS host connection
To connect controller modules that have HD mini-SAS host interface ports to a server HBA, select a qualified HD mini-SAS cable option for the SFF-8644 dual HD mini-SAS host ports on the controller. For information about configuring SAS HBAs, see the SAS topics under Perform host setup on page 43.
A qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gb/s-capable host. Qualified SFF-8644 to SFF-8644 options support cable lengths of 0.5 m (1.64 ft), 1 m (3.28 ft), 2 m (6.56 ft), and 4 m (13.12 ft).
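After cabling, the negotiated SAS link rate can be verified from a Linux host. This is a host-side sanity check (an assumption for illustration, not a step from this guide) that reads the standard Linux SAS transport class attributes in sysfs; the entries exist only when a SAS HBA and attached phys are present:

```shell
# Print the negotiated link rate for each SAS phy visible to this host.
# On a correctly cabled 12 Gb/s connection, the rate should read "12.0 Gbit".
for phy in /sys/class/sas_phy/*/ ; do
  [ -e "${phy}negotiated_linkrate" ] || continue
  printf '%s: %s\n' "${phy%/}" "$(cat "${phy}negotiated_linkrate")"
done
```

If a phy reports a lower rate than expected, reseat or replace the SFF-8644 cable before continuing with host setup.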
Connecting direct attach configurations
A dual-controller configuration improves application availability. If a controller fails, its workload fails over to the healthy partner controller with little interruption to data flow. A failed controller can be replaced without shutting down the storage system.
NOTE: In the following examples, a single diagram represents CNC, SAS, and 10GBase-T host connections for ME4 Series controller enclosures. The locations and sizes of the host ports are similar. Blue cables show controller A paths and green cables show controller B paths for host connection.
Single-controller module configurations
A single controller module configuration does not provide redundancy if a controller module fails.
This configuration is intended only for environments where high availability is not required. If the controller module fails, the host
loses access to the storage data until failure recovery actions are completed.
NOTE: Expansion enclosures are not supported in a single controller module configuration.
Figure 19. Connecting hosts: ME4 Series 2U direct attach – one server, one HBA, single path
1. Server
2. Controller module in slot A
3. Controller module blank in slot B
NOTE: If the ME4 Series 2U controller enclosure is configured with a single controller module, the controller module must be installed in the upper slot, and a controller module blank must be installed in the lower slot. This configuration is required to enable sufficient airflow through the enclosure during operation.
Dual-controller module configurations
A dual-controller module configuration improves application availability.
If a controller module fails, its workload fails over to the partner controller module with little interruption to data flow. A failed controller module can be replaced without shutting down the storage system.
In a dual-controller module system, hosts use LUN-identifying information from both controller modules to determine which data paths are available to a volume. Assuming MPIO software is installed, a host can use any available data path to access a volume that is owned by either controller module. The path providing the best performance is through the host ports on the controller module that owns the volume. Both controller modules share one set of 1,024 LUNs (0–1,023) for use in mapping volumes to hosts.
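The path preference described above can be sketched in a few lines. This is an illustrative model only (the function name and tuple layout are assumptions, not Dell's implementation): an MPIO policy prefers paths through the owning controller module and keeps paths through the partner as failover candidates.

```python
def preferred_paths(paths, owner):
    """Order data paths for a volume: paths through the owning controller
    module first (best performance), then paths through the partner
    controller module as failover candidates.

    paths -- list of (device_name, controller) tuples, e.g. ("sdb", "A")
    owner -- controller module that owns the volume, "A" or "B"
    """
    optimized = [dev for dev, ctrl in paths if ctrl == owner]
    fallback = [dev for dev, ctrl in paths if ctrl != owner]
    return optimized + fallback


# Four paths to one volume, two per controller module.
paths = [("sdb", "A"), ("sdc", "B"), ("sdd", "A"), ("sde", "B")]
print(preferred_paths(paths, "A"))  # paths through controller A come first
```

In practice the host's MPIO software (for example, DM Multipath on Linux) performs this ordering automatically once the paths are registered.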