HP StorageWorks XP Disk Array Mainframe Host Attachment and Operations Guide
XP24000, XP20000
Part number: A5951–96151
First edition: September 2007
Legal and notice information © Copyright 2007 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
About this guide 9
  Intended audience 9
  Related documentation 9
  Document conventions and symbols
  Viewing Path Information 34
  Physical Host Connection Specifications 35
  Logical Host Connection Specifications 36
3 z/OS Operations 39
  Data Management Functions
Index
Figures
1 Fibre Channel Device Addressing 21
2 Fiber Connectors 26
3 Mainframe Logical Paths (Example 1) 28
4 Mainframe Logical Paths (Example 2) 29
Tables
1 Document conventions 9
2 Channel Adapter Specifications 20
3 Comparing ESCON and FICON Physical Specifications 25
4 Comparing ESCON and FICON Logical Specifications 26
5 Operating Environment Required for FICON
About this guide
This guide provides information about:
• Requirements and procedures for connecting an XP disk array to a host system
• Configuring the disk array for use with the mainframe operating system
Intended audience
This guide is intended for system programmers, system operators, and storage administrators with knowledge of:
• Host hardware
• The mainframe operating system
• XP disk arrays
Related documentation
The following documents provide related information:
Table 1 Document conventions
• Bold text: GUI elements that are clicked or selected, such as menu and list items, buttons, tabs, and check boxes
• Italic text: text emphasis
• Monospace text: file and directory names, system output, code, and commands with their arguments and argument values
• Monospace, italic text: code variables and command variables
• Monospace, bold text: emphasized monospace text
WARNING! Indicates that failure to follow directions could result in bodily harm or death.
• 1 KB (kilobyte) = 1,024 bytes
• 1 MB (megabyte) = 1,024² bytes
• 1 GB (gigabyte) = 1,024³ bytes
• 1 TB (terabyte) = 1,024⁴ bytes
• 1 block = 512 bytes
For example, under this convention 1 GB = 1,024 × 1,024 × 1,024 bytes = 1,073,741,824 bytes.
Graphical interface illustrations
The GUI illustrations in this guide were created using a Windows computer with the Internet Explorer browser. Actual windows may differ depending on the operating system and browser used. GUI contents also vary with licensed program products, storage system models, and firmware versions.
To make comments and suggestions about product documentation, please send a message to storagedocsFeedback@hp.com. All submissions become the property of HP.
1 Overview of Mainframe Operations
This chapter provides an overview of mainframe host attachment issues, functions, and operations.
Mainframe Compatibility and Functionality
The XP24000/XP20000 disk arrays support the 3990-6, 3990-6E, 2105, and 2107 control unit (CU) emulation types and can be configured with multiple concurrent logical volume image (LVI) formats, including 3390-3, 3390-3R, 3390-9, and larger. For additional information on mainframe environments, contact your HP representative.
devices (LDEVs). Each physical ExSA channel interface (port) supports up to 512 logical paths (32 host paths × 16 CUs), for a maximum of 32,768 logical paths per storage system. ExSA connections provide transfer rates of up to 17 MB/sec.
NOTE: Current addressing limitations for the ESCON interface are 1,024 unit addresses (UAs) per channel. With the FICON interface, addressability is increased to 65,536 UAs.
the event of a disaster or system failure, the secondary copy of the data can be invoked rapidly, allowing recovery with a very high level of data integrity. Universal Replicator for Mainframe can also be used for data duplication and migration tasks. Universal Replicator for Mainframe is an effective disaster recovery solution for large amounts of data that span multiple volumes.
Resource Management
XP External Storage Software
XP External Storage Software, a feature first introduced on the XP disk array, enables virtualization of external storage. Users can connect other storage systems to the XP disk array and access the data on the external storage system through virtual devices on the XP disk array. Functions such as TrueCopy for Mainframe and Cache Residency can be performed on the external data.
Volume Retention Manager
Volume Retention Manager (also called Data Retention Utility for z/OS, and formerly LDEV Guard) allows you to protect your data from I/O operations performed by mainframe hosts. Volume Retention Manager enables you to assign an access attribute to each logical volume to restrict read and/or write operations. Using Volume Retention Manager, you can prevent unauthorized access to your sensitive data.
NOTE: Although mainframe platforms do not use LUNs, XP Auto LUN can be used in mainframe environments because it migrates volumes or LDEVs. For details on XP Auto LUN, see the HP StorageWorks XP24000/XP20000 Auto LUN Software User's Guide, or contact your HP representative.
2 FICON and ESCON Host Attachment
This chapter describes how to attach the storage system to a mainframe host using a FICON or ESCON adapter.
Overview
The storage system supports all-mainframe, all-open-system, and multiplatform operations and offers the following types of host channel connections:
• FICON: The storage system supports up to 112 FICON ports capable of data transfer speeds of up to 400 MB/sec (4 Gbps).
card. When configured with shortwave Fibre Channel adapters, the storage system can be located up to 500 meters (1,640 feet) from the open-system host(s). When configured with longwave Fibre Channel adapters, the storage system can be located up to 10 kilometers from the open-system host(s).
• iSCSI: The storage system supports a maximum of 112 iSCSI interfaces. The iSCSI channel interface boards provide data transfer speeds of up to 100 MB/sec.
NOTE: * When the number of devices per CHL image is limited to a maximum of 1,024, up to 16 CU images can be assigned per CHL image (at 64 devices per CU). If one CU contains 256 devices, the maximum number of CUs per CHL image is limited to 4.
Figure 1 Fibre Channel Device Addressing
FICON and ESCON considerations for zSeries hosts
This section describes considerations to review before you configure your system with FICON or ESCON adapters for zSeries hosts.
For example, for ESCON you can configure four or eight paths per path group from a host to a storage unit; configure at least four paths in the path group to maximize performance. Most ESCON controllers initiate channel command execution in a way that only partially synchronizes the lower disk drive interface with the upper channel interface, which leaves only a very short time to reconnect. Because of this limited time, the reconnection can fail.
NOTE: ESCON host channels limit the number of devices per channel to 1024. To fully access 4096 devices on a storage unit, it is necessary to connect a minimum of four ESCON host channels to the storage unit. You can access the devices through a switch to a single storage unit ESCON port. This method exposes four control-unit images (1024 devices) to each host channel.
• Host channel to controller host channel
• Peer controller to peer controller, with appropriate equipment
NOTE: Appropriate retention hardware to support cable attachments that control bend-radius limits comes with each ESCON host attachment.
Logical paths and path groups for ESCON adapters on zSeries hosts
Each ESCON adapter card supports two ESCON ports or links. Each port supports 64 logical paths. At the maximum of 32 ESCON ports, the number of logical paths is 2,048.
• Full-duplex, bidirectional data transfers
• Multiple concurrent I/O processes
• Small and large data transfers transmitted at the same time
• High-bandwidth transfer (up to 4 Gbps)
• Greater throughput rates over longer distances than ESCON
• Reduced interlock between the disk controller and channel
• Pipelined CCW execution
FICON Physical Specifications
The following table compares the physical specifications of ESCON and FICON.
Figure 2 Fiber Connectors
FICON Logical Specifications
The following table compares the logical specifications of FICON and ESCON.
Table 4 Comparing ESCON and FICON Logical Specifications
• CCW handling: ESCON uses handshaking; FICON uses pipelining, which reduces protocol interlock.
Items and specifications:
• Operating system: OS/390 Release 2.6 and later; z/OS V1R1 and later; VM/ESA 2.4.0; VM/ESA 2.3.0 with Authorized Problem Analysis Report (APAR); VSE/ESA 2.3 with APAR; z/VM Version 3 Release 1
• Controller emulation type: 2105-F20 or later, or 2107
NOTE: For more information about the FICON operating environment, including supported hosts and operating environments, see the IBM Redbooks.
Parameter (8MFS / 8MFLR):
• Number of ports per storage system on DKA boards (DKA slot used): 8 / 16 / 24 / 32 / 40 / 48 / 56 / 64 / 72 / 80 / 88 / 96 / 104 / 112 (8MFS); 16 / 24 / 32 / 40 / 48 / 56 / 64 / 72 / 80 / 88 / 96 / 104 / 112 (8MFLR)
• Small Form-factor Pluggable (SFP) support: shortwave (8MFS); longwave (8MFLR)
• Maximum cable length: 500 m / 300 m / 150 m (8MFS); 10 km (8MFLR, see note)
NOTE: The maximum cable length values for the shortwave ports apply when 50/125 µm multi-mode fiber cable is used. If 62.5/125 µm multi-mode fiber cable is used, the maximum cable lengths are shorter.
Figure 4 Mainframe Logical Paths (Example 2)
The FICON channel adapters provide logical path bindings that map user-defined FICON logical paths. Specifically:
• Each CU port can access 255 CU images (all CU images of the logical storage system).
• To the CU port, each logical host path (LPn) connected to that port is defined by a channel image number and a channel port address. The LPn identifies the logical host path by channel image number and channel port address (the CU image number is excluded).
• The maximum number of logical paths per storage system is 131,072 (2,048 × 64).
Figure 6 FICON Channel Adapter Support for Logical Paths (Example 2)
Supported Topologies
Point-to-Point Topology
A channel path consisting of a single link that interconnects a FICON channel in FICON native (FC) mode to one or more FICON control unit images (logical control units) forms a point-to-point configuration.
can be attached to the Fibre Channel switch in any combination, depending on configuration requirements and available Fibre Channel switch resources. Sharing a control unit through a Fibre Channel switch allows communication between a number of channels and the control unit to occur either: • Over one switch-to-CU link, such as when a control unit has only one link to the Fibre Channel switch, or • Over multiple-link interfaces, such as when a control unit has more than one link to the Fibre Channel switch.
Figure 9 Example of a Cascaded FICON Topology
In a cascaded FICON topology, one or two switches reside at the topmost (root) level, between the CHL and DKC ports (see Figure 10). With this configuration, multiple channel images and multiple control unit images can share the resources of the Fibre Channel link and Fibre Channel switches, so that multiplexed I/O operations can be performed.
Figure 10 Example of Ports in a Cascaded FICON Topology Required High-Integrity Features FICON directors in a cascaded configuration require switches that support the following high-integrity features: • Fabric binding: This feature lets an administrator control the switch composition of a fabric by registering WWNs in a membership list and explicitly defining which switches are capable of forming a fabric. In this way, an operator can deny non-authorized switches access into a fabric.
Figure 11 Required High-Integrity Features for Cascaded Topologies Viewing Path Information Depending on the microcode version installed on the storage system, the Link values displayed on the SVP Logical Path Status screen may appear as 3-byte values for cascaded topologies: • The first byte corresponds to the switch address. • The second byte corresponds to the port address. • The third byte corresponds to the FC-AL port (this is a constant value in FICON).
Figure 13 Example of Cascaded and Non-cascaded Topologies Figure 14 Example of Confirming a Cascaded Topology Physical Host Connection Specifications The following table lists the physical specifications associated with host connections.
RAID: XP disk array. Hosts: z900, z990, z9, and G5/G6. Channel types: Native FICON, FICON Express, FICON Express2, and FICON Express4. Bandwidth: up to 4 Gbps (auto-negotiation). Connectors: LC-Duplex or SC-Duplex. Cable: single mode or multi mode.
Logical Host Connection Specifications
The following figures show examples of two logical host connections.
Figure 16 Logical Host Connections (Example 2 - FICON)
3 z/OS Operations This chapter discusses the operations available for z/OS hosts. Data Management Functions The storage system provides many features and functions that increase data availability and improve data management. The following table lists the data management functions for mainframe data. See the appropriate user documentation for more details.
Feature name, whether it is controlled by XP Remote Web Console, host OS, or licensed software, and the user document:
• Cache LUN XP: XP Remote Web Console Yes; host OS Yes; licensed software Yes; HP StorageWorks Cache LUN XP user guide
• Cache Manager: XP Remote Web Console Yes; host OS Yes; licensed software No; Hitachi Cache Manager User's Guide
• Compatible PAV: XP Remote Web Console Yes; host OS Yes; licensed software Yes; HP StorageWorks XP24000/XP20000 for Compatible Parallel Access Volumes Software User's Guide
• Volume Security: XP Remote Web Console Yes; host OS No; licensed software Yes; HP StorageWorks XP24000/XP20000 Volume Security User's Guide
• Volume Retention Manager: XP Remote Web Console Yes; host OS Yes; licensed software Yes; HP StorageWorks XP24000/XP20000 Volume Retention Manager user guide
Subsystem IDs (SSIDs)
Subsystem IDs (SSIDs) are used for reporting information from the CU (controller) to the operating system. SSIDs are assigned by the user and must be unique across all connected host operating environments. Each group of 64 or 256 volumes requires one SSID, so there are one or four SSIDs per CU image; for example, a CU image with 256 LVIs uses one SSID if SSIDs are assigned per 256 volumes, or four SSIDs if they are assigned per 64 volumes. The user-specified SSIDs are assigned during installation. The following table lists the SSID requirements.
IMPORTANT: ESCON interfaces support 1,024 unit addresses per channel and up to 4,096 devices for each set of 16 CU images, using CUADD=0 through CUADD=F in the CNTLUNIT statement. ESCON channel interfaces can be assigned to access individual groups of 16 control unit images.
The following example shows a sample IOCP definition for a storage system configured with:
• 2105 ID
• Four FICON channel paths; two of the channel paths are connected to a FICON switch.
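A minimal sketch of the shape such a definition can take is shown below. The CHPID numbers, switch number, link addresses, control unit number, and device numbers are illustrative assumptions only (they are not taken from this guide), and column-72 continuation coding is omitted for readability; substitute the values for your configuration.

*  Hypothetical values: CHPIDs 50 and 51 attach through FICON switch 01,
*  CHPIDs 54 and 55 are direct-attached. CU 8000 is CU image 0 (CUADD=0)
*  with 256 devices based at device number 8000.
         CHPID PATH=(50),TYPE=FC,SWITCH=01
         CHPID PATH=(51),TYPE=FC,SWITCH=01
         CHPID PATH=(54),TYPE=FC
         CHPID PATH=(55),TYPE=FC
         CNTLUNIT CUNUMBR=8000,PATH=(50,51,54,55),LINK=(1A,1B,**,**),UNIT=2105,CUADD=0,UNITADD=((00,256))
         IODEVICE ADDRESS=(8000,256),CUNUMBR=(8000),UNIT=3390,UNITADD=00,STADET=Y

In the LINK operand, the two asterisks conventionally indicate channel paths that do not pass through a switch; only the switch-attached paths carry real link (port) addresses.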
The following example shows a sample IOCP hardware definition for a storage system configured with:
• 3990 ID
• Two LPARs, PROD and TEST, sharing four ExSA (ESCON) channels connected over two ESCDs to the storage system
• Ports C0 and C1 of each switch attached to the storage system
• Four control unit images (0, 1, 2, 3) with 256 LVIs per control unit
• Two CU statements per control unit image
• To protect data integrity when multiple operating systems share these volumes, these devices are defined with the SHARED feature (FEATURE=SHARED).
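A comparable sketch for this ESCON configuration follows, again with illustrative CHPID numbers, switch numbers, link addresses, control unit numbers, and device numbers that are not taken from this guide (column-72 continuation coding is omitted, and only CU image 0 is shown).

*  Hypothetical values. Four shared ESCON (CNC) channels, two per ESCD;
*  CU image 0 is defined with two CNTLUNIT statements (one per ESCD),
*  both referenced by a single IODEVICE statement. CU images 1, 2, and 3
*  would repeat the pattern with CUADD=1, 2, and 3 and their own
*  CUNUMBR values and device address ranges.
         CHPID PATH=(20),TYPE=CNC,SWITCH=01,SHARED
         CHPID PATH=(21),TYPE=CNC,SWITCH=01,SHARED
         CHPID PATH=(22),TYPE=CNC,SWITCH=02,SHARED
         CHPID PATH=(23),TYPE=CNC,SWITCH=02,SHARED
         CNTLUNIT CUNUMBR=8000,PATH=(20,21),LINK=(C0,C1),UNIT=3990,CUADD=0,UNITADD=((00,256))
         CNTLUNIT CUNUMBR=8100,PATH=(22,23),LINK=(C0,C1),UNIT=3990,CUADD=0,UNITADD=((00,256))
*  FEATURE=SHARED is the OS-configuration shared-device indication
*  described above; in HCD it corresponds to the SHARED device parameter.
         IODEVICE ADDRESS=(8000,256),CUNUMBR=(8000,8100),UNIT=3390,UNITADD=00,FEATURE=SHARED

Because this sketch defines one CU image with two CNTLUNIT statements, it can be processed by IOCP, but, as the following note explains, such a deck cannot be migrated to HCD.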
NOTE: If you maintain separate IOCP definition files and create your SCDS or IOCDS manually by running the IZP IOCP or ICP IOCP program, you must define each CU image on a storage system using one CNTLUNIT statement in IOCP. While it is possible to define a CU image on a storage system using multiple CNTLUNIT statements in IOCP, the resulting input deck cannot be migrated to HCD because of an IBM restriction that allows only one CNTLUNIT definition per CU image.
Parameter and value:
• Number of devices: 64
• Device type: 3390
• Connected to CUs: specify the control unit number(s)
Table 11 HCD Definition for 256 LVIs
Parameter and value (control frame):
• Control unit number: specify the control unit number
• Control unit type: NOCHECK* (use UIM 3990 for more than 128 logical paths; use UIM 3990-6 for 128 or fewer logical paths)
(Example screen: OS/390 R2.8 ISPF Master Menu, with HC entered on the OPTION line to start HCD.)
2. (Example screen: HCD configuration menu, listing choices that include channel paths, 4. Control units, and 5. I/O devices.)
4. On the Control Unit List panel, if a 2105 type of control unit already exists, an "Add like" operation can be used by entering A next to the 2105 control unit and pressing Enter. Otherwise, press F11 to add a new control unit.
(Example screen: Select Processor / Control Unit panel, showing control unit number 8000, control unit type 2105, and, for each processor, the logical address (CUADD), attached channel path IDs, and link addresses.)
9. On the Control Unit List panel, add devices to the new control unit: enter S next to CU 8000, and then press Enter.
(Example screen: HCD device definition panel with these values: Device number 8000; Number of devices 128; Device type 3390B; Description "3390 Base addresses 8000-807F"; Volume serial number blank (for DASD); and the CUs to which the devices connect.)
14. On the Device / Processor Definition panel, verify that the proper values are displayed, and press Enter. (The example screen shows device number 8000 and number of devices 128.)
(Example screen: device parameter list showing WLMPAV Yes, device supports workload manager; SHARED Yes, device shared with other systems; SHAREDUP No, shared when system physically partitioned.)
17. When the Define Device to Operating System Configuration panel appears, press F3.
18.
enables high levels of concurrent data access across multiple channel paths. For further information on TPF and MPLF operations, see the IBM documentation.
Note on 2105 emulation: PTFs are available to implement exploitation mode for TPF version 4.1. For further information, see the IBM documentation.
devices. The following table lists ICKDSF command behavior specific to the storage system, as contrasted to RAMAC.
Table 12 ICKDSF Commands for XP Disk Array Contrasted to RAMAC
• INSPECT with KEEPIT: XP disk array returns CC = 12, F/M = 04 (EC = 66BB), invalid parameter(s) for device type; RAMAC returns CC = 4, parameter ignored for device type.
• INSPECT: XP disk array returns CC = 12, F/M = 04 (EC = 66BB), unable to establish primary and alternate track association for track CCHH=xxxx.
• INSPECT with DATA or NODATA: XP disk array returns CC = 12, F/M = 04 (EC = 66BB), error (not a data check), processing terminated; RAMAC returns CC = 0, DATA/NODATA parameter not allowed.
• CONTROL: XP disk array returns CC = 0; RAMAC returns CC = 0, ALT information not displayed.
• INIT: XP disk array returns CC = 0, ALT information not displayed; RAMAC returns CC = 0, ALT information not displayed.
• REFORMAT: XP disk array returns CC = 0; RAMAC returns CC = 0, ALT information not displayed.
• CPVOLUME, AIXVOL
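ICKDSF is normally run as a batch job. The following is a minimal sketch of an initialization job; the device number and volume serial are illustrative assumptions, not values from this guide, and the device must be offline when it is addressed with UNITADDRESS:

//ICKDSF   JOB  ...
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  INIT UNITADDRESS(8000) NOVERIFY VOLID(VOL800) PURGE
/*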
//LIST     JOB ...
//COUNT1   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=A
//SYSIN    DD  *
  LISTDATA COUNTS VOLUME(VOL000) UNIT(3390) SUBSYSTEM
  LISTDATA COUNTS VOLUME(VOL064) UNIT(3390) SUBSYSTEM
  LISTDATA COUNTS VOLUME(VOL128) UNIT(3390) SUBSYSTEM
  LISTDATA COUNTS VOLUME(VOL192) UNIT(3390) SUBSYSTEM
/*
• Subsystem counter reports. The cache statistics reflect the logical caching status of the volumes.
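LISTDATA also accepts a STATUS operand, which reports subsystem status rather than counters. A brief sketch, reusing the illustrative volume serial from the example above:

  LISTDATA STATUS VOLUME(VOL000) UNIT(3390)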
DEVSERV PATHS. The number of LVIs that can be specified by an operator on the DEVSERV PATHS command is from 1 through 99.
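The command is entered from the system console. A brief example follows; the starting device number is an illustrative value carried over from the HCD example above, not a requirement:

  DS P,8000,64

DS P is the standard abbreviation of DEVSERV PATHS; this form displays path and status information for 64 devices beginning at device number 8000.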
• • • • SGI IRIX HP Tru64 UNIX HP OpenVMS HP NonStop PC server platforms: • Windows 2000 • Windows 2003 • Novell NetWare Linux platforms: • Red Hat Linux • SuSE Linux • VMware Configuring the Fibre Channel Ports The LUN Manager software enables users to configure the Fibre Channel ports for the connected operating system and operational environment (for example, FC-AL or fabric).
Figure 17 Fibre Port-to-LUN Addressing Virtual LVI/LUN Devices The Virtual LVI/LUN software enables users to configure multiple custom volumes (LVIs or LUs) under a single LDEV. Open-system users define Virtual LVI/LUN devices by size in MB* (minimum device size = 35 MB). Mainframe users define Virtual LVI/LUN devices by number of cylinders.
EPO switch: The Emergency Power-Off switch for the storage system is located at the top right of the rear panel (see the rear view of the DKC in the following figure).
Figure 18 Control Panel
NOTE: This illustration does not show the service port under the EPO switch.
Table 13 Control Panel
• Item 1, SUBSYSTEM READY, LED (green): When lit, indicates that input/output operation on the channel interface is possible. Applies to both storage clusters.
• Item 2, SUBSYSTEM ALARM, LED (red): When lit, indicates that low DC voltage, high DC current, abnormally high temperature, or a failure has occurred. Applies to both storage clusters.
• Item 3, SUBSYSTEM MESSAGE, LED (amber): On indicates that a SIM (message) was generated from either of the clusters; applies to both storage clusters. Blinking indicates that an SVP failure has occurred.
LOCAL position: the storage system is powered on and off by the PS ON / PS OFF switch. Applies to both storage clusters.
• Item 15, LED TEST/CHK RESET, switch: LED TEST position turns on the LEDs on the control panel; CHK RESET position resets the PS ALARM and TH ALARM.
Emergency Power-Off (EPO)
The disk array EMERGENCY POWER OFF (EPO) switch is located at the rear of the DKC (see the following figure). Use this switch only in case of an emergency.
4 Linux Operations This chapter describes storage system operations in a Linux host environment. The storage system supports attachment to the following mainframe Linux operating systems: • Red Hat Linux for S/390 and zSeries • SuSE Linux Enterprise Server for IBM zSeries (contact HP for availability) For information on supported versions, Linux kernel levels, and other details, contact your HP representative.
Attaching Fibre Channel adapters to zSeries hosts running Linux
Linux solutions are available for 31-bit and 64-bit environments. The availability of this option depends on the zSeries model and the Linux distribution. Fibre Channel support is available for both direct and switched attachment.
Linux for S/390 (31-bit)
Linux for S/390 is a 31-bit version of Linux; it also runs on zSeries models in 31-bit mode. The 31-bit architecture limits addressable main storage to 2 GB.
• Host name of the server hosting the Linux system
• Device address and CHPID of the FCP port that is attached to the Linux machine
• Worldwide port name (WWPN) of the FCP port on the zSeries
• WWPN of the Fibre Channel port on the storage unit
• Fibre Channel port on the storage unit
This information can be obtained from the hardware management console (HMC), the IBM TotalStorage DS Storage Manager, and the SAN switch.
Setting up a Linux system to use Fibre Channel Protocol devices on zSeries hosts
Begin by collecting the following software configuration information to prepare a Linux system to access the storage unit through Fibre Channel:
• Host name of the server hosting the Linux system
• Device address (and CHPID) of the FCP port that is attached to the Linux machine
• Worldwide port name (WWPN) of the FCP port on the zSeries
• Fibre Channel port on the storage unit
• WWPN of the Fibre Channel port on the storage unit
2. To add more than one device to your SCSI configuration, write a small script that includes all of the required parameters. This step is optional.
3. Alternatively, you can add SCSI devices to an existing configuration with the add_map command. After using this command, you must manually map the devices in the SCSI stack.
Figure 20 Basic ESCON Configuration The channels are grouped into a channel-path group for multi-pathing capability to the storage unit ESCON adapters. ESCON to FICON migration example for a zSeries or S/390 host The following figure shows another example of a S/390 or zSeries host system with four ESCON channels. In this example, two FICON channels are added to an S/390 or zSeries host.
Figure 21 Four-Channel ESCON System with Two FICON Channels Added
This figure shows four ESCON adapters and two ESCON directors. The illustration also shows the channel-path group and the FICON directors through which the two FICON channels reach the FICON adapters installed in the storage unit. The two FICON directors are not required, but using two improves reliability by eliminating a single point of failure; a single point of failure would exist if both FICON channels were connected to one FICON director.
FICON configuration example for a zSeries or S/390 host The following figure provides a FICON configuration example for a zSeries or S/390 host that illustrates how to remove the ESCON paths. The S/390 or zSeries host has four ESCON channels connected to two ESCON directors. The S/390 or zSeries host system also has two FICON channels. Figure 22 Non-Disruptive ESCON Channel Removal You can remove the ESCON adapters non-disruptively from the storage unit while I/O continues on the FICON paths.
Migrating from a FICON bridge to a native FICON attachment
FICON bridge overview
The FICON bridge is a feature card of the ESCON Director 9032 Model 5. The FICON bridge supports an external FICON attachment and connects internally to a maximum of eight ESCON links. The traffic on these ESCON links is multiplexed onto the FICON link. The conversion between ESCON and FICON is performed on the FICON bridge.
FICON bridge configuration
The following figure shows an example of a FICON bridge.
Figure 24 FICON Mixed Channel Configuration In the example, one FICON bridge was removed from the configuration. The FICON channel that was connected to that bridge is reconnected to the new FICON director using the storage unit FICON adapter. The channel-path group was changed to include the new FICON path. The channel-path group is now a mixed ESCON and FICON path group. I/O operations continue to the storage unit across this mixed path group.
Figure 25 Native FICON Configuration Registered state-change notifications (RSCNs) on zSeries hosts McDATA and CNT switches ship without any configured zoning. This unzoned configuration enables the default zone on some McDATA switches. This configuration enables all ports in the switch with a Fibre Channel connection to communicate with each other and to receive registered state change notifications about each other. You can set the zones.
5 Troubleshooting This chapter provides error codes, troubleshooting guidelines and customer support contact information. Troubleshooting The storage system provides continuous data availability and is not expected to fail in any way that would prevent access to user data. The READY LED on the control panel must be ON when the storage system is operating online. The following table lists potential error conditions and provides recommended actions for resolving each condition.
Service Information Messages (SIMs)
The storage system generates service information messages (SIMs) to identify normal operations (for example, XP Continuous Access pair status changes) as well as service requirements and errors or failures. For assistance with SIMs, call your HP support representative. SIMs can be generated by the front-end and back-end directors and by the SVP. All SIMs generated by the storage system are stored on the SVP for use by HP personnel and logged in the SYS1.LOGREC dataset of the host operating system.
Figure 26 Typical SIM Showing Reference Code and SIM Type
Glossary
AL-PA: Arbitrated loop physical address.
array group: A group of 4 or 8 physical hard disk drives (HDDs) installed in an XP disk array and assigned a common RAID level. RAID1 array groups consist of 4 (2D+2D) or 8 HDDs (4D+4D). RAID5 array groups include a parity disk but also consist of 4 (3D+1P) or 8 HDDs (7D+1P). All RAID6 array groups are made up of 8 HDDs (6D+2P).
OPEN-E). The number of resulting LDEVs depends on the selected emulation mode. The term LDEV is often used synonymously with the term volume.
LUN: Logical unit number. A LUN results from mapping a SCSI logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN. For example, a LUN associated with two OPEN-3 LDEVs has a size of 4,693 MB.
VSC: Volume size customization. Synonymous with CVS.
WWN: World Wide Name. A unique identifier assigned to a Fibre Channel device.
XP Remote Web Console: HP StorageWorks XP Remote Web Console. A browser-based program installed on the SVP that allows you to configure and manage the disk array.
Index
A: activity bottlenecks, 17; algorithms, dynamic cache mgmt, 55; alias address range, 45; asynchronous, xrc remote copy, 15; audience, 9
C: cache, loading mode, 56; cache, statistics, 55; cache, storing data, 16; CNTLUNIT statement, 44; commands, cache, 57; commands, idcams setcache, 56; commands, listdata, 55; commands, listdata status, 56; commands, tso, 15; commands, write inhibit, 61; conventions, document, 9; conventions, storage capacity values, 10; conventions, text symbols, 10
D: data, sequential access pattern, 56; data integrity, feature=shared, 43; data transfer, max speed, 19; data transfer, using exsa channels, 15
L: longwave, location from host, 20
M: minimal init job, 53; mode, exploitation, 53
O: open-to-open operations, 15; operation, add like, 47; overwriting volumes, 17
P: parameters, cuadd, 41; parameters, wlmpav, 51; ports, storage system, 16
R: Red Hat Linux, 13; related documentation, 9; reports, erep sim, 57
S: shortwave, location from host, 20; statements, add cuu cuu eckd, 52; storage, nonvolatile, 56; storage capacity values, conventions, 10; Subscriber's Choice, HP, 11
T: time interval, mih max, 40; track transfer counters report, 56