Part III Host and storage system rules

Host and storage system rules are presented in these chapters:
• "Heterogeneous server rules" (page 165)
• "MSA storage system rules" (page 211)
• "P6000/EVA storage system rules" (page 221)
• "P9000/XP storage system rules" (page 235)
• "SVSP storage system rules" (page 242)
• "3PAR StoreServ storage rules" (page 249)
• "Enterprise Backup Solution" (page 255)
10 Heterogeneous server rules

This chapter describes platform configuration rules for SANs with specific operating systems and heterogeneous server platforms:
• "SAN platform rules" (page 166)
• "Heterogeneous storage system support" (page 166)
• "HP FC Switches for the c-Class BladeSystem server environment" (page 167)
• "HP 4 Gb Virtual Connect Fibre Channel module for c-Class BladeSystem" (page 169)
• "BladeSystem with Brocade Access Gateway mode" (page 170)
• "BladeSystem with Cisco N_Port Virtualization mode"
SAN platform rules

Table 76 (page 166) describes SAN platform rules for all SAN server configurations.

Table 76 General SAN platform rules

Rule number  SAN platform configuration
1            Any combination of heterogeneous clustered or standalone servers with any combination of storage systems is supported.
HP FC Switches for the c-Class BladeSystem server environment Table 77 (page 167) lists supported switches for the HP c-Class BladeSystem server environment.
Virtual Connect FlexFabric modules are more efficient than traditional and other converged network solutions because they do not require multiple Ethernet and Fibre Channel switches, extension modules, cables, and software licenses. Also, built-in Virtual Connect wire-once connection management enables you to add, move, or replace servers in minutes. For more information, see the product QuickSpecs at: http://h18004.www1.hp.com/products/quickspecs/13652_div/13652_div.
The HP Virtual Connect 8 Gb 20-port Fibre Channel Module:
• Simplifies server connections by separating the server enclosure from the SAN
• Simplifies SAN fabrics by reducing cables without adding switches to the domain
• Allows you to change servers in minutes
For more information, see the product QuickSpecs at: http://h18004.www1.hp.com/products/quickspecs/13421_div/13421_div.
Figure 58 HP Virtual Connect Fibre Channel configuration

[Figure: a blade enclosure with 16 server bays connects through two VC-FC modules; the module uplinks are N_Ports (NPIV) into an FC fabric (B-series, C-series, and H-series) with NPIV F_Port support. Blade enclosure/server management is separate from SAN/storage management.]
Figure 59 Brocade 4Gb SAN Switch for HP c-Class BladeSystem in Access Gateway mode

[Figure: server HBA N_Ports in the c-Class BladeSystem connect to virtual F_Ports on the Access Gateway; the gateway uplinks are N_Ports (NPIV), with a default server-to-uplink mapping of 2:1.]
• No SAN management from the BladeSystem enclosure once the initial connections have been configured
• No direct storage attachment (requires at least one external Fibre Channel switch)
• Lacks Fibre Channel embedded switch features (ISL Trunking, dynamic path selection, and extended distances) with external links from AG to core switches
• Managed separately from the BladeSystem, but if used with B-series switches, uses common Fabric OS
• Cannot move servers without impacting the SAN (unlike Virtual Connect)
An NP_Port is an NPIV uplink from the NPV device to the core switch. Switches in NPV mode use NPIV to log in multiple end devices that share a link to the core switch. In NPV mode, the Cisco MDS 9124e Fabric Switch is transparent to the hosts and the fabric; it no longer functions as a standard switch.

NOTE: This section describes HP c-Class BladeSystems. NPV mode is also supported on the Cisco MDS 9124 and MDS 9134 Fabric Switches. For more information, see the Cisco MDS 9000 Configuration Guide.
disruption when a link fails between an NP_Port on the NPV device and an F_Port on the external fabric. To avoid disruption when an NP_Port comes online, existing logins are not redistributed.

NPV mode considerations

Consider the following:
• Nondisruptive upgrades are supported.
• Grouping devices into different VSANs is supported.
• A load-balancing algorithm automatically assigns end devices in a VSAN to one of the NPV core switch links (in the same VSAN) at initial login.
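The initial-login behavior described above can be illustrated with a short sketch. This is not Cisco's actual algorithm; it simply demonstrates least-loaded assignment among same-VSAN uplinks, with no redistribution after login. The uplink and host names are hypothetical:

```python
# Illustrative sketch (not Cisco's algorithm): at initial login, assign each
# end device to the same-VSAN NPV uplink currently carrying the fewest logins.
# Existing assignments are never redistributed, matching the behavior above.

def assign_login(uplinks, vsan, assignments):
    """Pick the same-VSAN uplink with the fewest current logins."""
    candidates = [u for u in uplinks if u["vsan"] == vsan]
    if not candidates:
        raise ValueError(f"no uplink available for VSAN {vsan}")
    chosen = min(candidates, key=lambda u: len(assignments[u["name"]]))
    return chosen["name"]

uplinks = [
    {"name": "uplink1", "vsan": 10},
    {"name": "uplink2", "vsan": 10},
    {"name": "uplink3", "vsan": 20},
]
assignments = {u["name"]: [] for u in uplinks}
for host in ["blade1", "blade2", "blade3"]:
    name = assign_login(uplinks, 10, assignments)
    assignments[name].append(host)
```

With three blades logging into VSAN 10, the first two logins spread across the two VSAN 10 uplinks and the third returns to the first uplink; the VSAN 20 uplink is never used.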
The configuration shown in Figure 62 (page 174) includes:
• Redundant SANs, with each server connecting to one fabric through one NPV device
• Connectivity to C-series and B-series fabrics
• Support for up to six NPV devices per HP BladeSystem c7000 enclosure, or three NPV devices per HP BladeSystem c3000 enclosure

NPV with FlexAttach

The Cisco MDS 9124e Fabric Switch for HP c-Class BladeSystem, MDS 9124 switch, and MDS 9134 switch support NPV with FlexAttach.
Figure 63 Cisco MDS 9124e Fabric Switch for HP c-Class BladeSystem using NPV with FlexAttach

[Figure: 16 blade server HBA N_Ports pass through an HBA aggregator in the MDS 9124e (NPV mode); the external uplinks are N_Ports (NPIV) into the fabric, with a default server-to-uplink mapping of 2:1. SAN management and blade server management remain separate.]
HBA N_Port ID Virtualization HBA NPIV is a Fibre Channel standard that allows multiple N_Ports to connect to a switch F_Port. HBA NPIV is used on servers running a VOS. You can assign a unique virtual port name to each VM that shares the HBA. NPIV is supported on all 8 Gb and 4 Gb Emulex and QLogic HBAs when using the vendor-supplied VOS drivers.
Figure 64 VOS with HBA NPIV enabled

[Figure: a server running a virtual OS hosts three VMs, each with its own virtual WWPN (48:02:00:0c:29:00:00:1a, 48:02:00:0c:29:00:00:24, and 48:02:00:0c:29:00:00:2a), sharing one HBA (WWPN 20:00:00:00:c9:56:31:ba) connected to switch port 8, domain ID 37. The fabric name server lists FCIDs 370800 through 370803, one for the physical WWPN and one for each virtual WWPN.]

When using HBA NPIV, consider the following:
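The name-server listing in the figure can be read with a small helper that splits a 24-bit FCID into its domain, area, and port bytes; the physical WWPN and the NPIV virtual WWPNs share the domain and area fields and differ only in the low byte. The helper is illustrative, not part of any HP or switch-vendor tool:

```python
# Decompose a 24-bit Fibre Channel ID (FCID) into domain/area/port bytes,
# as seen in the name-server listing for the NPIV-enabled HBA above.

def split_fcid(fcid_hex):
    fcid = int(fcid_hex, 16)
    return {
        "domain": (fcid >> 16) & 0xFF,  # switch domain
        "area":   (fcid >> 8) & 0xFF,   # area (here, the switch port)
        "port":   fcid & 0xFF,          # per-login field
    }

# The four logins from the figure: one physical WWPN plus three VM WWPNs.
entries = ["370800", "370801", "370802", "370803"]
parsed = [split_fcid(e) for e in entries]
# All four share domain 0x37 and area 0x08; only the low byte differs.
```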
NonStop servers:
• S760, S76000
• S78, S780, S7800, S78000
• S86000, S88000
NS-series servers:
• NS1000, NS1200
• NS14000, NS14200
• NS16000, NS16000CG, NS16200
NonStop Integrity servers:
• NS2000, NS2000T/NS2000CG
• NS2100
• NS2200, NS2200T/NS2200ST
• NS2300
• NS2400, NS2400T, NS2400ST
• NS3000AC
• NS5000T/NS5000CG
NonStop Integrity BladeSystem servers:
• NB50000c, NB50000c-cg
• NB54000c, NB54000c-cg
• NB56000c, NB56

Storage systems:
• XP10000, XP12000 (RAID500)
• XP20000, XP24000 (RAID600)
• P9500
• XP7
NOTE: Consider the following VIO requirements:
• For NS1000 and NS1200 servers, expanded ports are available only to customers who have the HP ESS.
• The VIO enclosure software is not backward compatible and is supported only on H06.08 and later RVUs.
• Prior to December 2006, the NS1000 and NS14000 servers were shipped with a limited IOAME configuration known as the IO Core, which consisted of an IOAME with six adapter slots rather than the usual ten slots.
Table 80 (page 181) describes supported NonStop server configurations with VIO enclosures.
Table 83 (page 182) describes storage system configuration rules for NonStop servers.

Table 83 NonStop server configuration rules

Rule number  Description
1            Requires a minimum of one XP storage system for storage connectivity.
2            Requires a minimum of one IOAME on the server. For the NS1000, NS1200, NS14000, and NS14200 servers using VIO, two VIO enclosures are used instead of the IOAME. For BladeSystems using CLIMs, two CLIMs are used instead of the IOAME.
Table 83 NonStop server configuration rules (continued)

Rule number  Description
             • The 2 Gb Fibre Channel PICs (VIO) are supported with 1 Gb/2 Gb CHIPs for XP10000/12000/20000/24000 and with 4 Gb CHIPs for XP10000/12000/20000/24000.
             • The 4 Gb Fibre Channel HBAs (in CLIMs) are supported with 1 Gb/2 Gb CHIPs for XP10000/12000/20000/24000 and with 4 Gb CHIPs for XP10000/12000/20000/24000.
15           High-availability SAN
             • Requires dual-redundant SAN fabrics (level 4, NSPOF high-availability SAN configuration).
Figure 65 (page 184) shows a minimum direct host attach configuration with an IOAME.

Figure 65 Minimum direct host attach IOAME configuration for XP storage systems

Figure 66 (page 184) shows a minimum direct host attach configuration with VIO enclosures.
Figure 67 (page 185) shows a minimum direct host attach configuration with CLIMs.

Figure 67 Minimum direct host attach CLIM configuration for XP storage systems

Figure 68 (page 185) shows a minimum SAN configuration with an IOAME.
Figure 69 (page 186) shows a minimum SAN configuration with VIO enclosures.

Figure 69 Minimum SAN VIO configuration for XP storage systems (NS1000, NS14000)

Figure 70 (page 186) shows a minimum SAN configuration with CLIMs.
Figure 71 (page 187) shows a configuration with physical IOAME redundancy.

Figure 71 SAN IOAME configuration with logical and physical redundancy for XP storage systems

Figure 72 (page 187) shows a SAN configuration with VIO Fibre Channel PIC redundancy.
Figure 73 (page 188) shows a SAN configuration with CLIM physical redundancy.

Figure 73 SAN CLIM configuration with logical and physical redundancy for XP storage systems

Figure 74 (page 188) shows a configuration with physical IOAME redundancy.
Figure 75 (page 189) shows a SAN configuration (two cascaded switches) with VIO Fibre Channel PIC redundancy.
Figure 76 (page 190) shows a SAN (two cascaded switches) configuration with CLIM physical redundancy.
HP-UX SAN rules

This section describes the SAN rules for HP-UX. For current storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access.

Table 84 (page 191) describes the SAN configuration rules for HP-UX. Table 85 (page 192) describes support for HP-UX storage, HBA, and multipathing coexistence.

Table 84 HP-UX SAN configuration rules

Storage systems1  HP-UX SAN rules
                  • Supports HP Serviceguard Clusters.
1 Unlisted but supported storage systems have no additional SAN configuration restrictions. For the latest support information, contact an HP storage representative.

Table 85 HP-UX storage system, HBA, and multipath software coexistence support1

[Table: coexistence support across P2000 G3/MSA2000fc G2 (MSA2300fc), P63xx/P65xx EVA and EVA4x00/6x00/8x00, XP7/P9500/XP10000/12000/20000/24000, and SVSP 3.x storage systems.]
HP OpenVMS SAN rules This section describes the SAN rules for HP OpenVMS. For current storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access. Table 86 (page 193) describes the SAN configuration rules for HP OpenVMS. Table 87 (page 194) describes support for HP OpenVMS storage, HBA, and multipathing coexistence.
1 Unlisted but supported storage systems have no additional SAN configuration restrictions. For the latest support information, contact an HP storage representative.
Table 88 Tru64 UNIX SAN configuration rules

Storage systems1  Tru64 UNIX SAN rules
All supported     • Zoning is required when Tru64 UNIX is used in a heterogeneous SAN with other operating systems.
                  • Supports TruCluster Server.
                  • Supports boot from SAN. For more information, see "P6000/EVA SAN boot support" (page 229) and "P9000/XP SAN boot support" (page 238).
                  • Supports multipathing high-availability configuration in multiple fabrics or in a single fabric with zoned paths.
Table 89 HP Tru64 UNIX storage system, HBA, and multipath software coexistence support1

                            EVA4100/6100/81002  XP24000/20000, XP12000/10000
Native multipathing driver  S                   S

1 Legend: S = same server and HBA
2 EVA4100/6100/8100 requires XCS firmware 6.cx (or later).

For more information about storage system coexistence, see "Heterogeneous SAN storage system coexistence" (page 208).

Apple Mac OS X SAN rules

This section describes the SAN rules for Apple Mac OS X.
Table 90 Apple Mac OS X SAN configuration rules (continued)

Storage systems  Apple Mac OS X SAN rules
                 • XCS 6.100 (or later) (EVA4100/6100/8100)
                 • XCS 09000000 (or later) (EVA4400)
                 • Command View 6.0.2 (or later) (EVA4100/6100/8100)
                 • Command View 8.0 (or later) (EVA4400)
P6550 EVA        • Command View host entry operating system: Custom, custom type "00000002024000A8"
                 • HP P6000 Command View 9.4 (or later) (P6300/P6500)
Notes: For XCS 6.
IBM AIX SAN rules

This section describes the SAN rules for IBM AIX. For current storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access.

Table 91 (page 198) describes the SAN configuration rules for IBM AIX. Table 92 (page 199) describes support for IBM AIX storage, HBA, and multipathing coexistence.

Table 91 IBM AIX SAN configuration rules

Storage systems1  IBM AIX SAN rules
                  • Supports HACMP/ES Clusters.
Table 92 IBM AIX storage system, HBA, and multipath software coexistence support1

[Table: coexistence support across P63xx/P65xx EVA and EVA4x00/6x00/8x002, P9500/XP7/XP24000/20000/XP12000/10000, SVSP 3.0, and 3PAR3 storage systems.]
Table 93 Linux SAN configuration rules

Storage systems1          Linux SAN rules
All supported             • Supports multipathing high-availability configuration in multiple fabrics or in a single fabric with zoned paths.
                          • Zoning is required when Linux is used in a heterogeneous SAN with other operating systems.
P2000 G3 FC
MSA2000fc G2 (MSA2300fc)  • For HBA parameter settings, see "MSA 2040 SAN, MSA 1040 FC, P2000 G3 FC, MSA2000fc G2 and MSA2000fc storage system rules" (page 213).
MSA2000fc
Table 94 Linux storage system, HBA, and multipath software coexistence support1

[Table: Device-Mapper Multipath coexistence across P2000 G3/MSA2000fc G2 (MSA2300fc), P63xx/P65xx EVA and EVA4x00/6x00/8x00, P9500/XP7/XP24000/20000/12000/10000, and 3PAR storage systems; the combinations shown are supported on the same server and HBA (S).]

1 Legend: D = same server and different HBA; S = same server and HBA; — = not supported

Microsoft Windows SAN rules
Table 95 Microsoft Windows SAN configuration rules (continued)

Storage systems1  Windows SAN rules
P6300 EVA         • Supports boot from SAN. For more information, see "P6000/EVA SAN boot support" (page 229).
P6350 EVA         • For HP P6000 Continuous Access configuration information, see "HP P6000 Continuous Access SAN integration" (page 228).
P6500 EVA         • Zoning is required when Windows is used in a heterogeneous SAN with other operating systems.
P6550 EVA
SVSP 3.
Table 96 Microsoft Windows storage system, HBA, and multipath software coexistence support1

[Table: coexistence of MS MPIO DSM3 and HP MPIO FF4 across P2000 G3/MSA2000fc G2 (MSA2300fc)/MSA2000fc, P63xx/P65xx EVA and EVA4x00/6x00/8x002, and XP7/P9500/XP10000/12000/20000/24000 storage systems; most combinations are supported on the same server and HBA (S), with some HP MPIO FF combinations not supported (—).]
Oracle Solaris SAN rules This section describes the SAN rules for Oracle Solaris. For current storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access. Table 97 (page 204) describes the SAN configuration rules for Oracle Solaris. Table 98 (page 206) describes support for Oracle Solaris storage, HBA, and multipathing coexistence.
1 Unlisted but supported storage systems have no additional SAN configuration restrictions. For the latest support information, contact an HP storage representative.
Table 98 Oracle Solaris storage system, HBA, and multipath software coexistence support1

[Table: coexistence support across P2000 G3/MSA2000fc G2 (2300fc), P63xx/P65xx EVA and EVA4x00/6x00/8x002, P9500/XP7/XP24000/20000/XP12000/10000, and SVSP 3.x storage systems.]
VMware ESX SAN rules

This section describes the SAN rules for VMware ESX. For current storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access.

Table 99 (page 207) describes the SAN configuration rules for VMware ESX.

Table 99 VMware ESX SAN configuration rules

Storage systems1  ESX SAN rules
All supported     Zoning is required when ESX is used in a heterogeneous SAN with other operating systems.
Citrix Xen SAN rules

This section describes the SAN rules for Xen. For current storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access.

Table 100 (page 208) describes SAN rules for Xen.

Table 100 Xen SAN configuration rules

Storage systems1  Xen SAN rules
                  • Supports multipathing high-availability configuration in multiple fabrics or in a single fabric with zoned paths.
Common SAN coexistence To configure different HP storage system types or third-party storage systems for coexistence in a common SAN, without common access from the same server, define a separate zone for each storage system family.
Connection to a common server with different HBA vendor products requires separate HBA zones for each storage system:
• All Fibre Channel HBA zones must contain HBAs from the same vendor.
• A zone can contain different HBA models if they are all from the same HBA vendor.
• A Fibre Channel HBA can be a member in more than one zone.
• All HBA members in the same zone can reside in different servers, but must be the same operating system type.
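These zoning rules lend themselves to automated checking. The sketch below is a hypothetical validator, not an HP tool; the WWPNs and HBA records are invented for illustration, and real tooling would pull them from the fabric name server and host inventory:

```python
# Illustrative check of the zoning rules above: every HBA member of a zone
# must share one HBA vendor and one operating-system type. An HBA may appear
# in more than one zone, and members may reside in different servers.

def check_zone(zone, hba_db):
    """Return True if all HBAs in the zone share one vendor and one OS type."""
    vendors = {hba_db[wwpn]["vendor"] for wwpn in zone}
    os_types = {hba_db[wwpn]["os"] for wwpn in zone}
    return len(vendors) == 1 and len(os_types) == 1

# Hypothetical HBA inventory keyed by WWPN.
hba_db = {
    "50:01:43:80:02:00:00:01": {"vendor": "Emulex", "os": "Windows"},
    "50:01:43:80:02:00:00:02": {"vendor": "Emulex", "os": "Windows"},
    "50:01:43:80:02:00:00:03": {"vendor": "QLogic", "os": "Windows"},
}
zone_ok = ["50:01:43:80:02:00:00:01", "50:01:43:80:02:00:00:02"]
zone_bad = ["50:01:43:80:02:00:00:01", "50:01:43:80:02:00:00:03"]  # mixed vendors
```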
11 MSA storage system rules

This chapter describes specific rules for the following entry-level storage systems:
• HP MSA 2040 SAN
• HP MSA 1040 FC
• MSA P2000 FC, MSA P2000 FC/iSCSI
• Modular Smart Array 2000fc G2
• Modular Smart Array 2000fc

For the iSCSI rules for the MSA2000i, see "HP StorageWorks MSA family of iSCSI SAN arrays" (page 325).

HP MSA storage system configurations

Table 101 (page 211) describes the configurations for the MSA family.
Table 102 MSA2000fc controller configurations

Storage system                      Description
MSA2012fc single controller         Used in a direct connect or SAN connect configuration with a standard single controller
MSA2012fc dual controller           Used in a direct connect or SAN connect configuration with standard dual controllers
MSA2212fc enhanced dual controller  Used in a direct connect or SAN connect configuration with enhanced dual controllers

Heterogeneous SAN support

The MSA 2040/1040, P2000 G3, and MSA2000 families support
For the latest information on firmware versions and MSA storage system support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access. Configuration rules Table 104 (page 213) describes the MSA 2040 SAN, MSA 1040 FC, P2000 G3 FC, MSA2000fc and MSA2000fc G2 storage system SAN configuration rules.
Table 105 MSA 2040 SAN, MSA 1040 FC, P2000 G3 FC, FC/iSCSI, MSA2000fc G2, and MSA2000fc maximum configurations

[Table: columns are Storage systems, Operating systems, Drives, Hosts, Snapshots and clones, LUNs, and LUN size. The maximums shown per system are 64 hosts, 512 LUNs, LUN sizes up to 64 TB depending on vdisk configuration, and 64 standard snapshots (maximum 512 snapshots, clones, or remote snaps).]
P2000 data migration The P2000 G3 Fibre Channel storage system supports data migration using the HP StorageWorks MPX200 Multifunction Router data migration feature. This feature provides for block (LUN) level data movement between source and destination storage systems. MPX200 Multifunction Router with data migration The MPX200 Multifunction Router supports iSCSI, FCoE, data migration, and FCIP.
Table 106 P2000 data migration source-destination storage systems

Source storage systems:
• All HP MSA (Fibre Channel) and P6000/EVA models
• P9500/XP24000/20000, XP12000/10000
• SVSP
• 3PAR S-Class
• Third-party array models:
  ◦ Dell EqualLogic family (iSCSI), Compellent Series 30 and 40 Controllers
  ◦ EMC CLARiiON AX series, CX Series, Symmetrix DMX Series, Symmetrix VMAX SE, VNX5500
  ◦ Fujitsu ETERNUS DX400, DX440 S2, DX8400
  ◦ Hitachi Data Systems V series, AMS

P2000 destination storage systems:
• P2000 G3 FC
For the latest data migration storage system, operating system, and version support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access. Management software support The MSA 2040 SAN, MSA 1040 FC, P2000 G3 FC, FC/iSCSI, MSA2000fc G2, and MSA2000fc support target-based management interfaces, including Telnet (CLI), FTP, and a web-based interface. The web-based interface is supported with Microsoft Internet Explorer and Mozilla Firefox.
12 HP StoreVirtual storage system rules

This chapter describes specific rules for management groups with at least two of the following storage systems:
• HP StoreVirtual 4730 FC Storage
• HP StoreVirtual 4330 FC Storage

This chapter describes the following topics:
• "Fibre Channel on HP StoreVirtual 4000 Storage" (page 218)
• "Campus cluster support" (page 218)
• "Heterogeneous SAN support" (page 219)
• "Configuration rules" (page 219)
• "Configuration parameters" (page 220)
• "Data migration"
NOTE: 500 MB/s (4,000 Mb/s) of bandwidth per storage node pair needs to be allocated on the 10 GbE network between the two locations. Network latency among storage nodes cannot exceed 1 ms. The two Fibre Channel fabrics between the two sites can be stretched using any native and transparent fabric extension technology, such as long-range optics and DWDM. SAN extension using other intermediate protocols, like IP, is not supported in campus cluster configurations.
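A quick pre-deployment sanity check of these campus-cluster limits can be scripted. The helper below is an illustrative sketch only, using the 4,000 Mb/s-per-node-pair bandwidth and 1 ms latency figures stated above; it is not an HP utility:

```python
# Sketch of a campus-cluster link check: 500 MB/s (4,000 Mb/s) of inter-site
# bandwidth must be allocated per storage node pair, and latency among
# storage nodes must not exceed 1 ms.

BANDWIDTH_PER_PAIR_MBPS = 4000   # megabits per second per storage node pair
MAX_LATENCY_MS = 1.0

def campus_link_ok(node_pairs, link_mbps, latency_ms):
    """True if the inter-site link can carry all node pairs within latency."""
    required_mbps = node_pairs * BANDWIDTH_PER_PAIR_MBPS
    return link_mbps >= required_mbps and latency_ms <= MAX_LATENCY_MS

# A 10 GbE link (10,000 Mb/s) supports two node pairs (8,000 Mb/s required)
# at 0.4 ms, but not three pairs, and not two pairs at 1.5 ms.
```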
For information about configuring HP StoreVirtual 4000 Storage using the Centralized Management Console, see HP StoreVirtual 4000 Storage User Guide. Configuration parameters For configuration settings for Fibre Channel on HP StoreVirtual 4000 Storage, see HP StoreVirtual 4000 Storage User Guide, and configuration sets on SPOCK. Data migration Fibre Channel on HP StoreVirtual 4000 Storage does not currently support data migration using the MPX200.
13 P6000/EVA storage system rules

This chapter describes specific rules for the following storage systems:
• EVA4100
• EVA4400
• EVA6100
• EVA8100
• EVA6400/8400
• P6300/P6500 EVA
• P6350/P6550 EVA

IMPORTANT: HP P6000 storage was formerly called the HP Enterprise Virtual Array product family. General references to HP P6000 can also refer to earlier versions of HP EVA products.
Heterogeneous SAN support P6000/EVA HSV-based controller storage systems support shared access with any combination of operating systems listed in Table 109 (page 222).
Table 110 P6000/EVA storage system rules (continued)

Rule number  Description
7            EVA4400 (without the embedded switch module) with XCS 09x and EVA4100/6100/6400/8100/8400 are supported with 8 Gb/s, 4 Gb/s, or 2 Gb/s switch or HBA direct connectivity only (see rule 9).
             • EVA6400/8400 requires XCS 095x minimum.
             • EVA4100/6100/8100 requires XCS 6.2x minimum.
             • P6300/P6500 EVA requires XCS 10001000 minimum.
             • P6350/P6550 EVA requires XCS 11001000 minimum.
Table 110 P6000/EVA storage system rules (continued)

Rule number  Description
10           All P6000/EVA host ports must contain a cable or a loopback connector; otherwise, host port error events will persist. If the P6000/EVA host port is empty, perform the following steps:
             • From the OCP or WOCP, set the port to direct connect mode.
             • Insert a loopback connector when a P6000/EVA host port is not connected to a switch or an HBA (for direct connect).
11           Supports connection of single HBA servers.
LUNs #161 through #192 are presented to 4-node cluster = 0128 LUN presentations LUNs #193 through #200 are presented to single host = 0008 LUN presentations When all LUNs are presented to all hosts, the number of LUNs multiplied by the number of hosts must not exceed 8,192. Table 111 (page 225) lists the maximum number of EVA storage systems that can be configured on a single server. There is no limit on the maximum number of EVA storage systems in a SAN.
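The presentation arithmetic above can be verified with a short helper: each LUN-to-host pairing counts as one presentation, and the array-wide total must stay within 8,192. The helper is illustrative, not an HP utility:

```python
# Count LUN presentations against the 8,192-per-array maximum described above.
# Each group is (number_of_LUNs, number_of_hosts_they_are_presented_to).

MAX_PRESENTATIONS = 8192

def total_presentations(presentation_groups):
    """Sum LUN-to-host pairings across all presentation groups."""
    return sum(luns * hosts for luns, hosts in presentation_groups)

groups = [
    (32, 4),  # LUNs #161-#192 presented to a 4-node cluster = 128 presentations
    (8, 1),   # LUNs #193-#200 presented to a single host    =   8 presentations
]
count = total_presentations(groups)          # 136 presentations
within_limit = count <= MAX_PRESENTATIONS    # well under the 8,192 maximum
```

When every LUN is presented to every host, the same check reduces to LUNs x hosts <= 8,192.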
P6000/EVA data migration The P6000/EVA family of Fibre Channel storage systems supports data migration using the HP StorageWorks MPX200 Multifunction Router data migration feature. This feature provides for block (LUN) level data movement between source and destination storage systems. MPX200 Multifunction Router with data migration The MPX200 Multifunction Router supports iSCSI, FCoE, data migration, and FCIP.
Table 112 P6000/EVA data migration source-destination storage systems

Source storage systems:
• All HP MSA (Fibre Channel) and P6000/EVA models
• P9500/XP24000/20000, XP12000/10000
• SVSP
• 3PAR S-Class
• Third-party array models:
  ◦ Dell EqualLogic family (iSCSI), Compellent Series 30 and 40 Controllers
  ◦ EMC CLARiiON AX series, CX Series, Symmetrix DMX Series, Symmetrix VMAX SE, VNX5500
  ◦ Fujitsu ET

P6000/EVA destination storage systems:
• EVA4400/4400 with embedded switch
• EVA4100/6100/8100
• EVA6400/8400
For current data migration storage system support and up-to-date operating system version support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access. Data migration considerations MPX200 connectivity to P6000/EVA storage as a data migration destination array is obtained through a Fibre Channel switch configured in the same fabric as the MPX200 Fibre Channel ports.
Table 114 HP P6000 Continuous Access heterogeneous SAN configuration rules (continued)

Rule number  Description
7            The HP P6000 Continuous Access link supports mixed heterogeneous SAN, HP P6000 Continuous Access, and OpenVMS host-based shadowing traffic.
8            Two Storage Management Appliance Command View element managers are required: one active, and one in standby mode or powered off (passive mode).
For HP P6000 Continuous Access, if the operating system supports boot from SAN, replication of the boot disk is supported. SAN boot through the B-series MP Router is not supported.

Storage management server integration

A management server is required to manage a P6000/EVA storage system. The management server can be an SMA, GPS, management station (dedicated server), or HP Storage Server. The management server communicates with storage systems in-band through a Fibre Channel connection.
Cabling

This section describes cabling options for high-availability multipathing configurations for P6000/EVA storage systems.

Level 4 NSPOF configuration

Figure 77 (page 231) through Figure 80 (page 233) show cabling options when implementing a level 4, high-availability, NSPOF configuration. For a description of availability levels, see "Data availability" (page 38).
Figure 78 EVA4400 9x straight-cable configuration

Figure 79 (page 232) shows the cabling scheme for both non-HP P6000 Continuous Access and HP P6000 Continuous Access configurations for EVA8100 storage systems.

Figure 79 EVA8100 straight-cable configuration

Figure 80 (page 233) shows an EVA8100 configuration in which all controller host ports support two independent, dual-redundant SANs. In this configuration, SAN 1 represents a dual-redundant SAN with Fabric A and Fabric B.
Figure 80 EVA8100 two independent, dual-redundant SAN configuration

[Figure: SAN 1 comprises Fabric A and Fabric B; SAN 2 comprises Fabric C and Fabric D.]

Dual-channel HBA configurations

Use dual-channel HBAs when the number of server PCI slots is limited. Most installations are configured as shown in Figure 81 (page 233) or Figure 82 (page 233).
Figure 83 (page 234) shows a sample NSPOF solution with two dual-channel HBAs. This availability solution is equivalent to using two single-channel HBAs. For more information, see "Data availability" (page 38).

Figure 83 Two dual-channel HBAs (NSPOF)

[Figure: each dual-channel HBA presents two ports; port 1 reaches targets A, B, ... and port 2 reaches targets C, D, ...]
14 P9000/XP storage system rules

This chapter describes specific rules for the following storage systems:
• XP7
• P9500
• XP24000
• XP20000
• XP12000
• XP10000

This chapter describes the following topics:
• "P9000/XP storage systems" (page 235)
• "P9000/XP SAN boot support" (page 238)
• "LUN Configuration and Security Manager XP support" (page 239)
• "P9000/XP data migration" (page 239)

P9000/XP storage systems

Before implementation, contact an HP storage representative for information about su
Table 117 P9000/XP heterogeneous SAN support (continued)

Storage systems     Firmware version1  Switches2, 3  Operating systems3
XP24000, XP20000    60x                B-series      Citrix 5.6, HP-UX, IBM AIX, Microsoft Windows, OpenVMS, Red Hat Linux, Oracle Solaris, SUSE Linux, Tru64 UNIX, VMware ESX
XP12000, XP10000    50x                C-series
                                       H-series5

1 Contact an HP storage representative for the latest firmware version support.
2 XP7 and P9500 storage systems are not supported with 2 Gb/s switches.
Table 118 P9000/XP storage system rules (continued) Rule number 6 Description XP24000/20000 and XP12000/10000 storage systems support F_Port, FL_Port, and NL_Port connectivity.
Figure 84 P9000/XP storage systems with tape storage in a shared fabric

[Figure: HP-UX, Windows, Solaris, AIX, and VMware hosts connect through all supported switches to XP and P9500 storage systems and, through an FC bridge, to tape storage.]

P9000/XP SAN boot support

P9000/XP LUNs can be booted from the SAN using B-series, C-series, and H-series switches. SAN boot through the B-series MP Router is not supported.
3 XP12000/10000 boot on OpenVMS and Tru64 requires Alpha Server console 6.9 (or later). 4 Not all storage systems or operating systems listed are supported with H-series switches.
migration, except where noted. Table 121 (page 241) describes the operating system support for online data migration. For information about configuring the MPX200 for data migration, see the HP MPX200 Multifunction Router Data Migration Solution Guide.
Table 121 Online data migration operating system support

MPX200 online data migration support1:
• HP-UX 11iv3, 11iv2, Clusters (Service Guard)2
• IBM AIX 6.1, 5.

Online data migration destination storage system and firmware (minimum):
• P2000 G3 FC (TS251P002-04)
• P4000 (9.0)
• P6350/P6550 (11001000)
• EVA8000/6000/4000 (6.200)
• EVA8100/6100/4100 (6.220)
15 SVSP storage system rules This chapter describes the HP SVSP storage system rules.
• Bidirectional, asynchronous remote replication with automated initial normalization between source and destination (up to 150 ms one-way latency for asynchronous mirroring) NOTE: HP SVSP uses built-in iSCSI; therefore, no additional devices are required to connect HP SVSP to the intersite IP network. For network requirements for asynchronous replication, see Table 124 (page 246). • Synchronous mirroring across 100 km or (0.
• IP-based intersite links (for SVSP Continuous Access) • Minimum of one host • Minimum of two HBAs per host or one dual-channel HBA per host NOTE: Long-distance asynchronous remote mirroring requires an additional domain at the other site and sufficient IP-based bandwidth between sites. For network requirements for asynchronous replication, see Table 124 (page 246). For more information, see “Level 4: multiple fabrics and device paths (NSPOF)” (page 39).
Table 123 SVSP heterogeneous SAN storage rules (continued) Rule number Description the switch hop limits, including the host-to-local storage link, the local storage-to-remote storage link, and the local host-to-remote storage link.
SVSP data migration SVSP is supported as a data migration source storage system when using the HP StorageWorks MPX200 Multifunction Router data migration feature. This feature provides for block (LUN) level data movement between source and destination storage systems. For data migration from SVSP to P2000, see “P2000 data migration” (page 215) and for SVSP to P6000/EVA, see “P6000/EVA data migration” (page 226).
Table 124 SVSP inter-site network requirements for long-distance gateways (continued)
Specification | Description
1 Maximum latency: 150 ms IP network delay one-way, or 300 ms round-trip1
2 Average packet-loss ratio: must not exceed 0.5% averaged over a 5-minute window2
3 Latency jitter: must not exceed plus or minus 10% over a 5-minute window
1 Pre-existing restriction
2 A high packet-loss ratio indicates the need to retransmit data across the inter-site link.
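These thresholds lend themselves to a simple pre-qualification check before deploying long-distance gateways. The sketch below is a hypothetical helper (not an HP utility) that compares measured link metrics against the Table 124 limits; the function name and parameters are illustrative assumptions:

```python
# Hypothetical pre-qualification check for an SVSP inter-site link,
# based on the limits in Table 124. Not an HP-supplied tool.

MAX_ONE_WAY_LATENCY_MS = 150   # spec 1: 150 ms one-way (300 ms round-trip)
MAX_AVG_PACKET_LOSS = 0.005    # spec 2: 0.5% averaged over a 5-minute window
MAX_LATENCY_JITTER = 0.10      # spec 3: plus or minus 10% over a 5-minute window

def link_violations(one_way_latency_ms, avg_packet_loss, jitter_ratio):
    """Return the list of Table 124 specifications the link violates (empty if it qualifies)."""
    violations = []
    if one_way_latency_ms > MAX_ONE_WAY_LATENCY_MS:
        violations.append("maximum latency")
    if avg_packet_loss > MAX_AVG_PACKET_LOSS:
        violations.append("average packet-loss ratio")
    if abs(jitter_ratio) > MAX_LATENCY_JITTER:
        violations.append("latency jitter")
    return violations

print(link_violations(120, 0.002, 0.05))   # within all limits -> []
print(link_violations(180, 0.008, 0.12))   # exceeds all three limits
```

A link passing this check still needs sufficient bandwidth for the replication workload, which Table 124 addresses separately from latency and loss.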
◦ 49,152 with three DPM groups
◦ 65,536 with four DPM groups
• DPM BE paths: 4,096 maximum per DPM (regardless of pairs, all must see the same storage)
NOTE: For Large LUNs, every 2 TB counts as one LUN. For example, an 8 TB Large LUN counts as 4 LUNs against the maximum of 2,047 LUNs per EVA.
For additional information about SVSP maximums, see HP StorageWorks SAN Virtualization Services Platform Release Notes at http://www.hp.com/go/SVSP.
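The Large LUN accounting in the note above reduces to a small calculation: divide the LUN size by 2 TB and round up. A minimal sketch (the helper name is an illustrative assumption, not an HP tool):

```python
import math

def large_lun_count(size_tb):
    """LUNs a Large LUN consumes against the per-EVA maximum:
    every 2 TB (or fraction thereof) counts as one LUN."""
    return math.ceil(size_tb / 2)

# An 8 TB Large LUN counts as 4 LUNs against the 2,047-LUN-per-EVA maximum.
print(large_lun_count(8))   # -> 4
```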
16 3PAR StoreServ storage rules This chapter describes specific rules for the following storage systems:
• 3PAR F200/F400 Storage
• 3PAR T400/T800 Storage
• 3PAR StoreServ 10400/10800 Storage
• 3PAR StoreServ 7200/7400 Storage
• 3PAR StoreServ 7450 Storage
This chapter describes the following topics:
• “3PAR StoreServ storage” (page 249)
• “3PAR data migration” (page 251)
• “3PAR storage management” (page 254)
3PAR StoreServ storage Before implementation, contact an HP storage representative f
4 Apple Mac OS X is supported only with 10400/10800, 7200/7400, and 7450 storage systems, running a minimum 3PAR OS version of 3.1.3.
5 Apple Mac OS X is supported only with B-series and C-series switches.
Configuration rules Table 126 (page 250) describes HP 3PAR StoreServ Storage system SAN configuration rules.
NOTE: For information about 3PAR StoreServ FCoE target support, see the HP SPOCK 3PAR FCoE configuration sets on the SPOCK website at http://www.hp.com/storage/spock.
For configuration settings for the InServ ports, see the HP 3PAR implementation guide for each of the supported operating systems. Virtual Connect Direct-attach Fibre Channel for 3PAR Storage HP supports Virtual Connect Direct-attach Fibre Channel for 3PAR storage using the Virtual Connect FlexFabric 10 Gb/24-port Module. This provides connectivity between HP c-Class BladeSystems and 3PAR StoreServ Storage systems without a Fibre Channel switch or fabric.
Table 127 HP 3PAR Online Import Utility for EMC Storage host support matrix
• Host O/S: Windows 2008 R2; Windows 2012; RHEL 6 U3, U4, U5
• Server type: HP ProLiant Intel/AMD x86/x64; HP BladeSystem c-Class
• Fibre Channel HBA: HP; EMC
• Source array: EMC CX4-120, CX4-240, CX4-480, CX4-960; EMC VNX5100, VNX5300, VNX5500, VNX5700, VNX7500
• Destination array: 3PAR 7200, 7400, 7450, 10400, 10800
MPX200 Multifunction Router with data migration The MPX200 Multifunction Rou
Table 128 3PAR data migration source-destination storage systems
Source storage systems:
• All HP MSA (Fibre Channel) and EVA models
• P9500/XP24000/20000, XP12000/10000
• SVSP
• 3PAR S-Class
Third-party array models1:
• Dell EqualLogic family (iSCSI), Compellent Series 30 and 40 Controllers
• EMC CLARiiON AX series, CX Series, Symmetrix DMX Series, Symmetrix VMAX SE, VNX5500
• Fujitsu ETERNUS DX400, DX440 S2, DX8400
3PAR destination storage systems: 3PAR StoreServ 10400/10800; 3PAR StoreServ 7450; 3PAR St
For the latest data migration storage system, operating system, and version support, see the SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access. 3PAR data migration considerations MPX200 connectivity to 3PAR storage as a data migration destination array is obtained through a Fibre Channel switch configured in the same fabric as the MPX200 Fibre Channel ports.
17 Enterprise Backup Solution One of the most significant benefits of a SAN is the ability to share the SAN infrastructure for both disk and tape. With a SAN backup solution, you get all the benefits of the SAN, such as consolidated storage, centralized management, and increased performance. Additionally, implementing a SAN backup solution lays the foundation for advanced data protection features such as serverless backup and backup to disk. The HP solution is the HP Enterprise Backup Solution (EBS).