Deployment Guide
Table of Contents
- Dell EMC PowerVault ME4 Series Storage System Deployment Guide
- Contents
- Before you begin
- Mount the enclosures in the rack
- Connect to the management network
- Cable host servers to the storage system
- Cabling considerations
- Connecting the enclosure to hosts
- Host connection
- Connect power cables and power on the storage system
- Perform system and storage setup
- Record storage system information
- Using guided setup
- Web browser requirements and setup
- Access the PowerVault Manager
- Update firmware
- Use guided setup in the PowerVault Manager Welcome panel
- Perform host setup
- Host system requirements
- Windows hosts
- Configuring a Windows host with FC HBAs
- Configuring a Windows host with iSCSI network adapters
- Configuring a Windows host with SAS HBAs
- Linux hosts
- Configuring a Linux host with FC HBAs
- Configure a Linux host with iSCSI network adapters
- Attach a Linux host with iSCSI network adapters to the storage system
- Assign IP addresses for each network adapter connecting to the iSCSI network
- Register the Linux host with iSCSI network adapters and create volumes
- Enable and configure DM Multipath on the Linux host with iSCSI network adapters
- Create a Linux file system on the volumes
- SAS host server configuration for Linux
- VMware ESXi hosts
- Fibre Channel host server configuration for VMware ESXi
- iSCSI host server configuration for VMware ESXi
- Attach an ESXi host with network adapters to the storage system
- Configure the VMware ESXi VMkernel
- Configure the software iSCSI adapter on the ESXi host
- Register an ESXi host with a configured software iSCSI adapter and create and map volumes
- Enable multipathing on an ESXi host with iSCSI volumes
- Volume rescan and datastore creation for an ESXi host with iSCSI network adapters
- SAS host server configuration for VMware ESXi
- Citrix XenServer hosts
- Fibre Channel host server configuration for Citrix XenServer
- iSCSI host server configuration for Citrix XenServer
- Attach a XenServer host with network adapters to the storage system
- Configure a software iSCSI adapter on a XenServer host
- Configure the iSCSI IQN on a XenServer host
- Enable Multipathing on a XenServer host
- Register a XenServer host with a software iSCSI adapter and create volumes
- Create a Storage Repository for a volume on a XenServer host with a software iSCSI adapter
- SAS host server configuration for Citrix XenServer
- Troubleshooting and problem solving
- Locate the service tag
- Operators (Ops) panel LEDs
- Initial start-up problems
- Cabling for replication
- SFP+ transceiver for FC/iSCSI ports
- System Information Worksheet
- Setting network port IP addresses using the CLI port and serial cable
CNC ports used for host connection
ME4 Series SFP+ based controllers ship with CNC ports that are configured for FC.
If you must change the CNC port mode, you can do so using the PowerVault Manager.
Alternatively, the ME4 Series enables you to set the CNC ports to use FC and iSCSI protocols in combination. When configuring
a combination of host interface protocols, host ports 0 and 1 must be configured for FC, and host ports 2 and 3 must be
configured for iSCSI. The CNC ports must use qualified SFP+ connectors and cables for the selected host interface protocol.
For more information, see SFP+ transceiver for FC/iSCSI ports on page 98.
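If you prefer to make the change from the CLI rather than the PowerVault Manager, the session could look like the following sketch. The set host-port-mode command name and the FC-and-iSCSI value are assumptions used for illustration; confirm the exact syntax in the Dell EMC PowerVault ME4 Series Storage System CLI Guide before running it.

    # set host-port-mode FC-and-iSCSI
    # show ports

In the combined mode, host ports 0 and 1 operate as FC and host ports 2 and 3 operate as iSCSI; the show ports command lets you confirm the resulting mode of each port.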
Fibre Channel protocol
ME4 Series controller enclosures support controller modules with CNC host interface ports.
Using qualified FC SFP+ transceiver/cable options, either two or all four CNC ports can be configured to support the Fibre
Channel protocol. Supported data rates are 8 Gb/s and 16 Gb/s.
The controllers support Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies. Loop protocol can be
used in a physical loop or for direct connection between two devices. Point-to-point protocol is used to connect to a fabric
switch. Point-to-point protocol can also be used for direct connection, and it is the only option supporting direct connection at
16 Gb/s.
The Fibre Channel ports are used for:
● Connecting to FC hosts directly, or through a switch used for the FC traffic.
● Connecting two storage systems through a switch for replication. See Cabling for replication on page 90.
The first option requires that the host computer support FC and, optionally, multipath I/O.
Use the PowerVault Manager to set FC port speed and options. See the topic about configuring host ports in the Dell EMC
PowerVault ME4 Series Storage System Administrator’s Guide. You can also use CLI commands to perform these actions:
● Use the set host-parameters CLI command to set FC port options.
● Use the show ports CLI command to view information about host ports.
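For example, a CLI session that sets the FC link speed and topology on the first host port of each controller could look like the following sketch. The speed and fibre-connection-mode parameter names and the a0,b0 port identifiers are assumptions for illustration; check the Dell EMC PowerVault ME4 Series Storage System CLI Guide for the options that your firmware supports.

    # set host-parameters speed 16g fibre-connection-mode point-to-point ports a0,b0
    # show ports

As noted above, direct connection at 16 Gb/s requires point-to-point mode; show ports can then be used to verify the configured speed and topology of each port.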
iSCSI protocol
ME4 Series controller enclosures support controller modules with CNC host interface ports.
Either two or all four CNC ports can be configured to support the iSCSI protocol. The CNC ports support 10 GbE but do
not support 1 GbE.
The 10 GbE iSCSI ports are used for:
● Connecting to 10 GbE iSCSI hosts directly, or through a switch used for the 10 GbE iSCSI traffic.
● Connecting two storage systems through a switch for replication.
The first option requires that the host computer support Ethernet, iSCSI, and, optionally, multipath I/O.
For information about setting up Challenge Handshake Authentication Protocol (CHAP) for iSCSI, see the topic about configuring CHAP in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
Use the PowerVault Manager to set iSCSI port options. See the topic about configuring host ports in the Dell EMC PowerVault
ME4 Series Storage System Administrator’s Guide. You can also use CLI commands to perform these actions:
● Use the set host-parameters CLI command to set iSCSI port options.
● Use the show ports CLI command to view information about host ports.
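For example, assigning an IP address to one 10 GbE iSCSI host port from the CLI could look like the following sketch. The ip, netmask, gateway, and ports parameter names, the a2 port identifier, and the addresses are assumptions for illustration; verify the exact syntax in the Dell EMC PowerVault ME4 Series Storage System CLI Guide.

    # set host-parameters ip 192.168.10.200 netmask 255.255.255.0 gateway 192.168.10.1 ports a2
    # show ports

Each iSCSI host port needs its own IP address on the appropriate subnet; show ports can then be used to confirm the addresses and link status.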
iSCSI settings
The host should be cabled to two different Ethernet switches for redundancy.
If you are using switches that carry mixed traffic (LAN/iSCSI), create a VLAN to isolate iSCSI traffic from the rest of
the switch traffic.
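As an illustration of the host side of such a VLAN, the following Linux sketch places a dedicated network adapter on an iSCSI-only VLAN using standard iproute2 commands. The interface name eth1, VLAN ID 100, and IP address are hypothetical; the switch ports that the host and the storage system connect to must also be configured to carry that VLAN.

    # Hypothetical interface name (eth1), VLAN ID (100), and address, shown for illustration only
    ip link add link eth1 name eth1.100 type vlan id 100
    ip addr add 192.168.10.50/24 dev eth1.100
    ip link set dev eth1.100 up

Keeping iSCSI traffic on its own VLAN (and ideally its own subnet) prevents ordinary LAN traffic on a shared switch from competing with storage I/O.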