Bull ESCALA EPC Series EPC Connecting Guide ORDER REFERENCE 86 A1 65JX 03
Bull ESCALA EPC Series EPC Connecting Guide Hardware October 1999 BULL ELECTRONICS ANGERS CEDOC 34 Rue du Nid de Pie – BP 428 49004 ANGERS CEDEX 01 FRANCE ORDER REFERENCE 86 A1 65JX 03
The following copyright notice protects this book under the Copyright laws of the United States of America and other countries which prohibit such actions as, but not limited to, copying, distributing, modifying, and making derivative works. Copyright Bull S.A. 1992, 1999 Printed in France Suggestions and criticisms concerning the form, content, and presentation of this book are invited. A form is provided at the end of this book for this purpose.
About This Book Typical Powercluster configurations are illustrated, together with the associated sub-systems. Cabling details for each configuration are tabulated, showing cross-references to the Marketing Identifiers (MI) and the Catalogue. Reference numbers associated with the configuration figure titles correspond to those in the Catalogue.
Document Overview
This manual is structured as follows:
Chapter 1 Introducing the Escala Powercluster Series: Introduces the Powercluster family of Escala racks.
Chapter 2 EPC400: Describes the Escala RT Series rack with an Escala RT drawer.
Chapter 3 EPC800: Describes the Escala RM Series rack with a CPU rack drawer.
Chapter 4 EPC1200: Describes the Escala RL470 Basic System, which consists of two racks (a computing rack with a CPU drawer and an expansion rack with an I/O drawer).
Terminology The term “machine” is used to indicate the proprietary hardware, in this case the Escala family of multi–processors. The term “Operating System” is used to indicate the proprietary operating system software, in this case AIX.
• Escala S Series System Service Guide, Reference: 86 A1 91JX
• Escala Mxxx Installation & Service Guide, Reference: 86 A1 25PN
• Escala Rxxx Installation & Service Guide, Reference: 86 A1 29PN
• Escala RL470 Installation Procedures for Drawers, Reference: 86 A1 29PX
• DLT4000/4500/DLT4700 Cartridge Tape Subsystem Product Manual, Reference: Quantum 81-108336-02 (Jan 96)
• Disk & Tape Devices Configuration Information, Reference: 86 A1 88GX
Table of Contents
About This Book (page iii)
Chapter 1. Introducing the Escala Powercluster Series
  Introducing Powercluster Servers (Cluster Nodes)
  Multiple Node Configurations
  Mixed Configurations
Chapter 6. Console Cabling Requirements
  Console Cabling Requirements – Overview
  System Console and Graphics Display
  List of MIs
  Hardware Components
Chapter 7. Fast Ethernet Interconnect Requirements
  Fast Ethernet Interconnect Requirements – Overview
  Hardware Components
  Examples of Use
  Cabling Diagrams
Chapter 10. Disk Subsystems Cabling Requirements
  Disk Subsystems Cabling Requirements – Overview
  SSA Disk Subsystem
  MI List
  General Information
Chapter 1. Introducing the Escala Powercluster Series Introducing the Powercluster family of Escala racks. Introducing Powercluster Servers (Cluster Nodes) The Powercluster offer is made up of Escala rack-mountable servers. Three uni-node server models are available: • EPC400, an Escala RT Series rack with an Escala RT drawer, see page 2-1. • EPC800, an Escala RM Series rack with a CPU rack drawer, see page 3-1.
Chapter 2. EPC400 Series
Describing the Escala RT Series rack with an Escala RT drawer.
EPC400 Series – Profile
These models, contained in a 19” RACK (36U), are RT Nodes, including:
Configuration: EPC400 (CPXG210–0000), EPC430 (CPXG225–0000), EPC440 (CPXG226–0000)
Power Supply: Redundant / Redundant / Hot Swapping
Slots for CPU board: 4 / 4 / 2 (up to 8GB)
Ultra SCSI Bus for Medias: 1 / 1 / 1
Ultra SCSI Bus for Disks: 1 / 1 (ultra–2/LVD) / 1
Floppy Disk: 1.
List of Drawers for EPC400 Series Rack
The drawers, with their rack-mount kits, that can be mounted into the EPC400 rack are as follows.
Legend
The following conventions are used:
–  Not applicable.
Yes  Fitted at manufacture.
Customer  Fitted at the customer’s site by Customer Services.
No  Equipment is not fitted in this rack.
Yes | No  A mounting kit is offered as an option. If ordered, the corresponding drawer is mounted into the rack at manufacture and transported. Otherwise the equipment is not put into the rack.
Chapter 3. EPC800
Describing the Escala RM Series rack with a CPU rack drawer.
EPC800 – Profile
M.I. (CPXG211–0000)
This model, contained in a 19” RACK (36U), is one RM Node, including:
• 1 dual CPU module PowerPC 604e @ 200MHz – 2MB L2 cache per CPU
• 3 slots for additional dual CPU modules
• 4 memory slots
• Expandable memory up to 4GB
• 1 floppy disk 1.44MB 3”1/2
• 1 LSA board: SCSI-2 F/W SE port + 1 Ethernet port
• 3 async. serial ports
List of Drawers for EPC800 Rack
The drawers, with their rack-mount kits, that can be mounted into the EPC800 rack are as follows.
Legend
The following conventions are used:
–  Not applicable.
Yes  Fitted at manufacture.
Customer  Fitted at the customer’s site by Customer Services.
No  Equipment is not fitted in this rack.
Yes | No  A mounting kit is offered as an option. If ordered, the corresponding drawer is mounted into the rack at manufacture and transported. Otherwise the equipment is not put into the rack.
Chapter 4. EPC1200/1200A and 2400 Describing the Escala EPC1200/1200A and 2400 Systems which consist of two racks (a computing rack with a CPU drawer and an expansion rack with an I/O drawer).
Standard Adapters/Cables
One CBLG105–1800 serial cable is automatically generated for every drawer. Two CBL1912 adapter cables (9-pin, 25-pin) are systematically provided and mounted with any CPU drawer.
An 8–port asynchronous board is generated by default for every CPU drawer: M.I. (DCCG130–0000) 8–Port Async RS232/422 PCI Adapter with fan-out box. This multi–port board is mandatory, as the S1 and S2 native serial ports are reserved for other uses.
List of Drawers for EPC1200 Rack
The drawers, with their rack-mount kits, that can be mounted into the EPC1200 rack are as follows.
Legend
The following conventions are used:
–  Not applicable.
Yes  Fitted at manufacture.
Customer  Fitted at the customer’s site by Customer Services.
No  Equipment is not fitted in this rack.
Yes | No  A mounting kit is offered as an option. If ordered, the corresponding drawer is mounted into the rack at manufacture and transported. Otherwise the equipment is not put into the rack.
Chapter 5. Subsystems Introduces the different types of subsystems. Subsystems – Summary There are several types of subsystems: • User consoles, on page 5-2. • Serial Networks, on page 5-4. • Interconnect, on page 5-5. • HA Library, on page 5-6. • DAS SCSI, see page 5-7. • DAS FC-AL, see page 5-8.
User Consoles
There are 4 terminal types, see page 6-1:
• System Console, an ASCII terminal (BQ306)
• Graphics Display (on all models except the EPC800), which replaces the ASCII system console and adds graphical capabilities
• Cluster Console, a self–bootable X Terminal
• PowerConsole, an Escala S Series.
For administration purposes, there are two private networks:
• A serial network, for configurations with more than 2 nodes.
Number of nodes Administration Hub Console Concentrator Console Type 1 0 0 2 0 2 Dedicated Admin.
Serial Networks
There are two types of serial networks. The first one is used by HACMP to monitor the nodes: the nodes periodically exchange keep-alive messages (heartbeats), in particular through this network. The second one is used to wire the nodes to a console concentrator, if any. It enables a single terminal connected to the console concentrator to be the system console of every node.
A node provides 2 or 3 native serial ports: the S1 (or COM1) port is used to connect a system console.
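Once the HACMP serial network is cabled and a tty has been defined on the corresponding port of each node, a commonly used check of the link is the stty test. The following is only a sketch; the device name (tty1 here) is an assumption and depends on how the port was defined on each node:

   stty < /dev/tty1    # run this on both nodes at roughly the same time;
                       # if the RS232 link is good, both nodes display their tty settings,
                       # otherwise the command hangs until interrupted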
Interconnect
There are two interconnect types:
• Fast Ethernet interconnect, on page 7-2
• FDDI interconnect, on page 9-2.
For a 2–node configuration of the same node type, there is a Fast Ethernet Full Kit (2 Fast Ethernet adapters plus a crossed Ethernet cable), as well as an FDDI Full Kit (2 FDDI adapters and two FDDI cables). A Fast Ethernet Full Kit and an FDDI Full Kit are defined for each node type.
HA Library For details, see page 10-69.
DAS SCSI For details, see page 10-23.
DAS FC-AL Not on EPC800. For details, see page 10-36.
Chapter 6. Console Cabling Requirements Describes cabling requirements for control consoles. Console Cabling Requirements – Overview More details in: • System Console and Graphics Display, on page 6-2. • Cluster Administration Hub, on page 6-11. • Console Concentrator, on page 6-14. • Cluster Console, on page 6-26. • Cluster PowerConsole, on page 6-35.
System Console and Graphics Display Details in: • List of MIs, on page 6-2. • Hardware Components, on page 6-3. • Examples of Use, on page 6-5. • General Cabling Diagrams, on page 6-5. • Cabling Diagrams, on page 6-8. • Cabling Legend, on page 6-8. • Configuration Procedure for a 2-Node Powercluster, on page 6-10.
Hardware Components
System Console (France) CSKU101–1000 (AZERTY)
Identificator | Description | Length | Quantity
DTUK016–01F0 | BQ306 Screen and logic – Europe Power cord | | 1
KBU3033 | BQ306 AZERTY French Keyboard | | 1
CBLG104–2000 | Cable, local RS232 (25F/25M) | 15m | 1
CBLG106–2000 | Cable, remote RS232 (25M/25F) | 15m | 1
MB323 | Interposer (25M/25M) – BE | | 1
System Console (Europe) CSKU101–2000 (QWERTY)
Identificator | Description | Length | Quantity
DTUK016–01F0 | BQ306 Screen and logic – Europe Power cord | | 1
KB
System Console (Germany) CSKU101–000G (QWERTY)
Identificator | Description | Length | Quantity
DTUK016–01F0 | BQ306 Screen and logic – Europe Power cord | | 1
KBU3034 | BQ306 QWERTY German Keyboard | | 1
CBLG104–2000 | Cable, local RS232 (25F/25M) | 15m | 1
CBLG106–2000 | Cable, remote RS232 (25M/25F) | 15m | 1
MB323 | Interposer (25M/25M) – BE | | 1
Examples of Use System Console The System Console (ASCII terminal) is offered in the following cluster configurations: • Uni-node Escala EPC: the system console is attached through serial port S1 of the node. • Two–node Escala EPC: the System Console can be used alone. In this case the System Console is connected to a node’s S1 port, as shown on Figure 10. There can be two System Consoles, one per node, each one connected to a node’s S1 port.
Figure 6. PWCCF01: 2–node Escala EPC – (one System Console).
Note: The 8-async-ports board is not mandatory. If it is not present, use the S2 or S3 port to link the two nodes.
Figure 7. PWCCF08: 2–node Escala EPC – (two System Consoles).
Figure 8. PWCCF08: 2–node EPC – (1 System Console, 1 Graphic Display). Note: The Ethernet cable is not mandatory.
Cabling Legend
Item M.I. | Designation | Length | FRU
CBL1912 | Cable, Adapter RS232 (9M/25M) | 0.3m | 76958073-002
CBLG104-2000 | Cable, local RS232 (25F/25M) | 15m | 90232001-001
CBLG105-1800 | Cable, local RS232 (25F/25F) | 7.5m | 90233002-001
Cabling of the System Console to the Console Concentrator Figure 11. Cabling of the System Console to the Console Concentrator The graphics display is connected to the node of the ordered ESCALA EPC model (EPC400/430/440 or EPC1200 / EPC1200A, EPC2400). There is no graphics for an ESCALA EPC800 model. Cabling of the System Console with a serial link (2-node EPC) Figure 12. PWCCF01: Cabling Schema for a 2–node Powercluster Notes: 1.
Configuration Procedure for a 2-Node EPC with 1 System Console
The procedure is performed from the ASCII terminal connected to the S1 plug of node #1, and allows you to switch from the node #1 system console to the node #2 system console, when a multi–port async board is present and the two nodes are linked with a serial cable as depicted in the previous figure.
Configuration
1. Log in as root user.
2.
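The rest of the procedure amounts to defining a tty on the async port cabled to node #2's S1 plug, then opening it from node #1. A minimal sketch, assuming the cable arrives on port 2 of the fan-out box and the device is created as tty2 (both names are assumptions, the actual port and device depend on the configuration):

   smit tty              # Add a TTY, type rs232, on the 8-port async adapter (creates tty2 here)
   cu -l /dev/tty2       # open node #2's system console from node #1
                         # type ~. to quit cu and return to the node #1 console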
Cluster Administration Hub Details in: • Hardware Components, on page 6-11. • Examples of Use, on page 6-12. • Cabling Legend, on page 6-12. • Management Module, on page 6-12. • Cabling Diagrams, on page 6-12.
Examples of Use The Cluster Administration Hub is used to set up a dedicated administration network (10Base-T Ethernet network). The Cluster Administration Hub is used for Escala EPC configurations with a Cluster Console, or a Cluster PowerConsole. The administration network utilizes the LSA adapter of an EPC800 node, an ethernet board on an EPC1200/EPC1200A node, the integrated ethernet card on an EPC400 node or on the Powerconsole. A Cluster Administration Hub has 12 ports.
Figure 13. Cluster Administration Hub Ethernet Connections To ClusterConsole or PowerConsole Figure 14.
Console Concentrator Details in: • Hardware Components, on page 6-14. • Usage cases DCKU115–2000, on page 6-15. • Usage cases DCKU119–2000, on page 6-17. • Cabling Diagrams, on page 6-16. • Cabling Legend, on page 6-16. • Cabling Instructions, on page 6-16. • Console Concentrator Configuration, on page 6-19. Hardware Components Console Concentrator DCKU115–2000 Identificator Description 3C5411–ME Base Unit – CS/2600 (10 ports, disk–based) 1 3C5440E Protocol Packs TCP/OSI/TN3270 Version 6.2.
Usage cases DCKU115–2000
The Console Concentrator is used with:
• a PowerConsole, whatever the number of nodes in the Escala EPC configuration. See Figure .
• a Cluster Console, if there are more than two nodes. See Figure 17.
If there is a cluster Hub (the case of a dedicated administration network), the Console Concentrator is connected to it. Otherwise, the Console Concentrator is connected to the Customer’s Ethernet Network.
Cabling Diagrams DCKU115–2000
Figure 15. Console Concentrator Cabling DCKU115–2000
Cabling Legend
Item M.I. | Designation | Length | FRU
CBL1912 | Cable, Adapter RS232 (9M/25M) | 0.3m | 76958073-002
CBLG104-2000 | Cable, local RS232 (25F/25M) | 15m | 90232001-001
CBLG105-1800 | Cable, local RS232 (25F/25F) | 7.5m | 90233002-001
CBLG106-2000 | Cable, remote RS232 (25M/25F) | 15m | 90234001-001
CBL2101 | V24/V28 conn. cable (25M/25F) | 3.
Usage cases DCKU119–2000
The Console Concentrator is used with:
• a PowerConsole, whatever the number of nodes in the Escala EPC configuration. See Figure .
• a Cluster Console, if there are more than two nodes. See Figure 17.
If there is a cluster Hub (the case of a dedicated administration network), the Console Concentrator is connected to it. Otherwise, the Console Concentrator is connected to the Customer’s Ethernet Network.
Cabling Diagrams DCKU119–2000
Figure 16. Console Concentrator Cabling DCKU119–2000
Cabling Legend
Item M.I. | Designation | Length | FRU
CBL1912 | Cable, Adapter RS232 (9M/25M) | 0.3m | 76958073-002
CBLG104-2000 | Cable, local RS232 (25F/25M) | 15m | 90232001-001
CBLG105-1800 | Cable, local RS232 (25F/25F) | 7.5m | 90233002-001
Console Concentrator Configuration The configuration of the console concentrator is undertaken by Customer Services. This configuration procedure is provided as a reference only. Initial Conditions The configuration of the console concentrator (CS/2600) is done through the ASCII console (BQ306). The ASCII console must be connected to the J0 port of the CS/2600 server, to setup the console baud rate to 9600.
Example 1 of Label Format
Admin Network | Example of Value | Example of Label
Network Mask | 255.0.0.0 | N/A
Powerconsole IP@ | 1.0.0.20 | PWC
Console Concentrator IP@ | 1.0.0.10 | CS/2600
IP@ of Node #1 | 1.0.0.1 | Node1_admin
CS/2600 Port #1 (J1) | 1.0.0.11 | Node1_cons
IP@ of Node #2 | 1.0.0.2 | Node2_admin
CS/2600 Port #2 (J2) | 1.0.0.12 | Node2_cons
IP@ of Node #3 | 1.0.0.3 | Node3_admin
CS/2600 Port #3 (J3) | 1.0.0.13 | Node3_cons
IP@ of Node #4 | 1.0.0.4 | Node4_admin
CS/2600 Port #4 (J4) | 1.0.0.14 | Node4_cons
IP@ of Node #5 | 1.
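Applying this labelling scheme, the /etc/hosts file on the PowerConsole and on the nodes would contain entries such as the following (a sketch built from the example values above; the actual addresses and labels are site-dependent):

   1.0.0.10  CS-2600      # Console Concentrator
   1.0.0.20  PWC          # PowerConsole
   1.0.0.1   Node1_admin  # Node #1 on the administration network
   1.0.0.11  Node1_cons   # Node #1 system console, through CS/2600 port J1
   1.0.0.2   Node2_admin
   1.0.0.12  Node2_cons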
Console Concentrator Configuration Procedure Before you start, note that: • The Console Concentrator (CS/2600) is configured through the ASCII console (BQ306). The ASCII console has to be connected to the J0 port of the CS/2600 server, to set the console baud rate to 9600. However, the Console Concentrator (CS/2600) can also be configured through the PowerConsole provided that it is connected to the J0 port instead of the ASCII console.
3. Only if a PowerConsole is used: establish the connection between the workstation and the CS/2600 on serial port J0, using the cu command on the PowerConsole side (see step 1).
4. Switch the CS/2600 to monitor mode:
– make a hardware reset (button on the left side of the CS/2600) as described in the CS/2600 Installation Guide.
5. Wait a few seconds, then press two or three times at regular intervals of about 1 second; the following prompt must be displayed:
3Com Corporation CS/2600 Series Monitor >
6.
– and key in a few times to obtain the following prompt:
Welcome to the 3Com Communication
[1] CS>
15. Add the necessary privileges for network management with the command (after the CS>):
[1] CS> set pri = nm
– The CS/2600 server asks for a password; key in an empty password (initially no password is set).
– The CS/2600 server displays the prompt:
[2] cs#
16. For any additional configuration, such as setting up the date and the time, the system name, the password, etc.
[10] cs# sh !2 dp – etc. 22.Check the CS/2600 network connection: – Make for example a ping to the PowerConsole station, using the command ping (after cs#): [10] cs# ping @IP_PWC – The ping command must respond with the message: – pinging ... @IP_PWC is alive 23.Check that the different ports are in ”LISTEN” state.
PowerConsole Configuration Procedure Update the file /etc/hosts with the different addresses configured on the CS/2600 server: @IP0 (CS/2600 server address), @IP1 (J1/Node1 S1 address), @IP2 (J1/Node2 S1 address), etc. For example: 120.184.33.10 CS–2600 # @IP0 120.184.33.11 Node1_cons # @IP1 120.184.33.12 Node2_cons # @IP2 Examples of Use 1.
Cluster Console Details in: • Hardware Components, on page 6-26. • Examples of Use, on page 6-26. • Cabling Diagrams, on page 6-27. • Cabling Legend, on page 6-27. • Cabling Diagrams for a 2–node Configuration, on page 6-29. • Cabling Diagrams For Configuration With More Than 2 Nodes, on page 6-31. • Cabling Instructions, on page 6-33.
If there is no Cluster Administration Hub, that is to say no dedicated administration network, the Console Concentrator and the Cluster Console will be connected to the customer’s LAN network (an Ethernet network) in the customer’s premises. In the case that the customer’s network is COAXIAL THICK or COAXIAL THIN then the Customer is in charge of connecting the Console Concentrator and the Cluster Console to his network with his own cables (As usual for all the Escala platforms).
Figure 18. Cluster Console with Console Concentrator Figure 19.
Cabling Diagrams for a 2–node Configuration
There are two CBLG105–1800 cables. The first one is generated automatically and systematically in any Escala EPC order. The second one is included in the Cluster Console component. For connecting the nodes to the cluster hub, please use the native integrated Ethernet board.
Temporary Replacement of a Cluster Console with a System Console
The Cluster Console acts both as the system console and as the administration graphical console.
Figure 21. Alternative Cabling of Cluster Console and System Console – Common administration graphical interface Figure 22.
Cabling Diagrams For Configuration With More Than 2 Nodes With a dedicated–administration network, use the integrated Ethernet board for connecting the nodes to the cluster administration hub. Figure 23.
With no dedicated–administration network, the Console Concentrator and the Cluster Console (X Terminal) must be connected to the customer’s Ethernet–based public network (a single Ethernet LAN @10Mbps). Figure 24.
Cabling Instructions
Documentation References
Installing Your Explora Family System
17” Professional Color Monitor – User’s Guide
Workstations BQX 4.0.
Installation
Warning: Do not plug the power cords on the X Terminal box and on the monitor front side before being asked to do so:
1. Install the memory extension and the PCMCIA board which has been previously write-enabled. See section §5 of the Installing Your Explora Family System documentation.
2.
16.Once the prompt> is displayed, if the Boot is not automatic, then: type >BL and press ENTER 17.Two or three windows appear after the starting has completed: a window of Setup and Configuration (upper left), a telnet window, a system console window corresponding to the serial line RS232 (S1 plug of a EPC node) provided that the X terminal is directly wired to a node’s S1 port. 18.Inside the Setup and Configuration window: select Window Manager and click on the icon to run NCD Window Manager. 19.
Cluster PowerConsole Cluster PowerConsole is provided by an AIX workstation from the following: – Escala S Series Details in: • Hardware Components (Escala S Series), on page 6-36. • Examples of Use, on page 6-38. • Cabling Legend, on page 6-40. • Cabling Diagrams, on page 6-40. • Example of Cable Usage (for a 2–node Powercluster), on page 6-44. • Cabling Instructions, on page 6-44. • Remote Maintenance Connections, on page 6-44. • Configuration Rules (Escala S Series Extensions), on page 6-45.
Hardware Components (Escala S Series) Cluster PowerConsole Extensions are listed on page 6-37.
CBL1912 RS232 CABLE 9M/25M Pins 0.
CBLG179–1900 | Cable, RJ45 Ethernet for HUB connection | 10m | 1
VCW3630 | Cable, Ethernet ”Thin” (15M, 15F) to Transceiver | 5m | 1
Examples of Use
The PowerConsole with the Cluster Assistant GUI is a cluster administration graphics workstation which is used to set up, install, manage, and service the EPC nodes and the EPC cluster. The PowerConsole hardware is an S100 workstation running AIX 4.3.2. An Escala S100 running AIX 4.3.1 was the previous AIX station used as PowerConsole hardware.
Figure 25. PowerConsole With a Dedicated Administration Network Figure 26. PowerConsole Without Dedicated Administration Network Within an Escala EPC, a node pertains to an HACMP cluster or it is a standalone node (without HACMP). There can be zero, one or more HACMP clusters, as there can be zero, one or more standalone nodes. For implementing IP address takeover, the nodes of an HACMP cluster need to be connected to the same LAN (subnet).
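As an illustration of this subnet constraint, a 2-node HACMP cluster implementing IP address takeover typically declares, for each node, boot and service labels that share the same subnet. The addresses and labels below are purely illustrative assumptions, not values from the Powercluster offer:

   130.10.20.1   node1_boot   # boot address of node 1
   130.10.20.3   node1_svc    # service address of node 1, taken over by node 2 on failure
   130.10.20.2   node2_boot   # boot address of node 2
   130.10.20.4   node2_svc    # service address of node 2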
Cabling Legend Item M.I. Designation Length FRU CBL1912 Cable, Adapter RS232 (9M/25M) 0.3m 76958073-002 CBLG104-2000 Cable, local RS232 (25F/25M) 15m 90232001-001 CBLG105-1800 Cable, local RS232 (25F/25F) 7.
Figure 28. PowerConsole to Console Concentrator and Administration Hub or Figure 29.
Cabling Pattern (without Modems) Cabling to be used if there is a dedicated–administration network. Customer’s LAN (Ethernet 10) Optional Figure 30.
Cabling to be used when there is no dedicated–administration network. The Console Concentrator and the PowerConsole will be connected to the customer’s LAN network (an ethernet network). Customer’s LAN (Ethernet 10) Figure 31.
Example of Cable Usage (for a 2–node Powercluster) Type Function Cabling From – To CBLG 106–2000 Link CS2600/J1 to Node 1 CS2600/J1–>CBL1912 Description QTY RS232 Direct 25M/25F 3 Link CS2600/J2 to Node 2 CS2600/J2–>CBL1912 Link CS2600/J0 to ASCII console CBL 1912 CS2600/J0–>Interposer Link CS2600/J1 to Node 1 CBLG106–>S1 Node 1 M/M RS232 Direct 25M/9F 2 console Interposer 25M/25M Direct 1 Link CS2600/J2 to Node 2 CBLG106–>S1 Node 2 Interposer Link CS2600/J0 Modem / MB 323 concentrator to ASC
Configuration Rules for PowerConsole 2 Extensions
Additional internal disk drives and media drives must be placed in this PowerConsole according to the following rules. Five bays are available in the Escala S100:
– three of them are already used by the floppy drive, one CD–ROM 20X and one 4.5GB system disk.
– two bays are free (one 1’’ and one 1.6’’). They can be used for two disk drives, or for one media drive and one disk drive.
The disk drives are 1’’ high, 7200 rpm, with a capacity of 4.3GB or 9.1GB.
Chapter 7. Fast Ethernet Interconnect Requirements Describing particular cabling for Fast Ethernet applications.
Hardware Components
Fast Ethernet Interconnect Full Kit (2 EPC400-N) DCKG009–0000
The DCKG009–0000 component is only used to link two EPC400 nodes with a single Ethernet link, without a switch.
Fast Ethernet Interconnect Base Kit (EPC1200-N) DCKG012–0000 Identificator Description Quantity DCCG137–0000 PCI Ethernet 10&100 Mb/s Adapter (2986) 1 CBLG179–1900 10m Ethernet Cable – RJ45 / RJ45 – category 5 1 Ethernet Single Switch Kit (Models of 3 to 8 Nodes) DCKU117–0000 Identificator Description Quantity 3C16981–ME SuperStack II Switch 3300 10/100 12–Port 1 GCORSECA01 Internal Power Cord (to PDB) – [90228002–001] 1 GPOWSFBUK1 UK Power Cord – [90399222–001] 1 GPOWSFBUS1 US Power C
Advanced switch usage When two fast ethernet interconnects are ordered between the same group of nodes, two cross–over Ethernet RJ45/RJ45 cables (CBLG161–1900), if provided, can be used to establish a resilient link pair between the two switches, and to set up in that way a redundant interconnect. Refer to SuperStack II Fast Ethernet Switch User Guide to configure them.
• Full Duplex must be used only for connections through a SWITCH or for POINT-to-POINT connections. In particular, you can use Full Duplex for a 2-node interconnect using the crossed RJ45/RJ45 cable (MI CBLG161). When using Full Duplex, collision detection is disabled.
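On the AIX side, the duplex and speed setting of a 10/100 adapter is normally carried by its media_speed attribute. The following is a hedged sketch only: the adapter name (ent1) is an example, and the exact attribute names and values vary with the adapter model, so check them with lsattr first:

   lsattr -El ent1                                   # list the adapter attributes and their allowed values
   chdev -l ent1 -a media_speed=100_Full_Duplex -P   # -P records the change in the ODM; it is applied at the next reboot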
Figure 36.
Figure 37.
Figure 38.
Cabling Instructions Between 2 Nodes (node #1 and node #2) Connect one end of the cross–over cable (CBLG161–1900) to the RJ45 port on the Ethernet 10/100 adapter on node #1, and the other end to the RJ45 port on the Ethernet 10/100 adapter on node #2. With a Hub First of all, a SuperStack II Hub 10 Management Module (3C16630A) has to be fitted to each Hub 10 12 Port TP unit (3C16670A) to provide SNMP management. Refer to the vendor publication Superstack II Hub 10 Management User Guide.
General Configuration Procedure
The following steps describe the network configuration phase of an interconnect.
Note: This procedure is the same whatever the interconnect type (Ethernet switch, or FDDI hub).
• Configure IP addresses.
• Ping and rlogin between nodes.
In order to configure the other adapters on a node, please use the SMIT Further Configuration menus. Otherwise the HOSTNAME would be changed.
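An equivalent way of doing this from the command line, which also leaves the HOSTNAME untouched, is to set the interface attributes directly (a sketch only; the interface name en1 and the addresses are assumptions). The SMIT fast path smit chinet leads to the same Further Configuration screens.

   chdev -l en1 -a netaddr=1.0.1.1 -a netmask=255.255.255.0 -a state=up   # interconnect address of this node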
On node #2, ping every node and check reachability with every node:
# ping node1_X
# rsh node1_X uname -a      (which returns AIX node1_X ...)
# ping node3_X
# rsh node3_X uname -a      (which returns AIX node3_X ...)
and so on, and proceed the same with all the other nodes.
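The same check can be scripted so that it is run identically on every node. This is only a sketch: the node names are the interconnect labels used above and must be adapted to the actual configuration.

   for n in node1_X node3_X node4_X; do       # list every other node's interconnect label
     ping -c 1 $n >/dev/null && rsh $n uname -a || echo "$n unreachable"
   done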
Chapter 8. Gigabit Ethernet Interconnect Requirements Describing particular cabling for Gigabit Ethernet applications.
Hardware Components
Gigabit Ethernet Interconnect Full Kit (2 EPC400-N) DCKG029–0000
The DCKG029–0000 component is only used to link two EPC400 nodes with a single Ethernet link, without a switch.
Local management can be performed via an RS–232 line (DB–9 port), as well as out–of–band management via an RJ45 port. For the latter, the Gigabit Ethernet switch 9300 can be connected either to the Cluster Administration Hub, if any (take a CBL179 cable provided with the Cluster Hub), or to the customer’s 10Base–T Ethernet LAN.
Switch 9300 Physical Characteristics
Figure 40. Switch 9300 – Front View
Figure 41.
Cabling Diagrams Figure 42.
Figure 43 depicts an interconnect where each node has a single attachment. For nodes having dual gigabit ethernet adapters for HACMP purpose, there are two SC–SC links between a node and the switch. Figure 43.
Quick Installation Guide
Audience: The following provides quick procedures for installing the SuperStack 9300. It is intended only for trained technical personnel who have experience installing communications equipment. For information on each setup task, see the related sections in this guide, or the complete details in the indicated documents.
Determine Site Requirements: Install the SuperStack II Switch 9300 system in an area that meets the requirements in Figure 44.
Figure 44.
Warning: Hazardous energy exists within the SuperStack II Switch 9300 system. Always be careful to avoid electric shock or equipment damage. Many installation and troubleshooting procedures should be performed only by trained technical personnel. Install optional power supply The system operates using a single power supply assembly and is shipped with one power supply installed. You can add an uninterruptible power supply (UPS) to the system. The additional power supply is orderable and shipped separately.
Figure 46.
Administer and Operate the system
See the Administration Guide for information on solving any problems. See also Appendix D: Technical Support in the Getting Started Guide for your system. For information on how to administer and operate the SuperStack II Switch 9300, see the Administration Guide on the Documentation CD and the Software Installation and Release Notes.
Chapter 9. FDDI Interconnect Requirements Describes particular cabling for FDDI applications.
Hardware Components
FDDI Interconnect Full Kit (2 EPC400-N) DCKG013–0000
The DCKG013–0000 component is only used to link two EPC400 nodes with a double FDDI link, without a hub.
FDDI Interconnect Base Kit (EPC1200-N) DCKG016–0000 Identificator Description Quantity DCCG124–0000 PCI FDDI Fibre Dual Ring Adapter (2742) 1 CBLG171–1800 FDDI Fibre SC–MIC cable (6m) 2 FDDI Hub Kit (models of 3 to 6 nodes) DCKU109–0000 Identificator Description Quantity CBLG160–1800 FDDI Fibre MIC-MIC Cable (6m) 2 3C781 LinkBuilder FDDI Management Module 2 3C782 LinkBuilder FDDI Fibre-Optic Module (4 ports, MIC) 4 3C780–ME LinkBuilder FDDI Base Unit 2 GCORSECA01 Internal Power Cord
Cabling Diagrams INTCF05 FDDI Interconnect for 2 Nodes Figure 48. INTCF05: FDDI Interconnect for 2 Nodes. Case: EPC800 Nodes. Figure 49. INTCF05: FDDI Interconnect for 2 Nodes. Case: EPC1200, EPC1200A and/or EPC400 Nodes.
Figure 50. INTCF05: FDDI Interconnect for 2 Nodes. Mixed Case: (MCA * PCI nodes) with an EPC800 node. Figure 51.
Figure 52.
Cabling Legend Item M.I.
Cabling Instructions Dual homing configuration provides two attachments to FDDI network. One of them functions as a backup link if the primary link fails. This type of attachment is especially useful for connecting to mission–critical devices. The hub chassis provides slots for one management module (required) and three media modules. The LinkBuilder FDDI Management Module must be inserted in slot 0. This module provides management and configuration functions through a console interface.
General Configuration Procedure
The general configuration procedure of an interconnect is standard.
Note: This procedure is the same whatever the interconnect type (Ethernet hub single or double, Ethernet switch single or double, FDDI hub, FDDI switch, or FCS). See General Configuration Procedure, on page 7-11.
The network configuration phase differs, and is given below.
Network Configuration
1.
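The numbered steps are essentially the same as for Fast Ethernet, applied to the FDDI interface. A minimal sketch, assuming the PCI FDDI adapter shows up as interface fi0 and using example addresses:

   chdev -l fi0 -a netaddr=1.0.2.1 -a netmask=255.255.255.0 -a state=up   # interconnect address on the FDDI ring
   netstat -i                                                             # check that fi0 is up with the expected address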
Chapter 10. Disk Subsystems Cabling Requirements
Describing particular cabling for Disk Drive applications.
Disk Subsystems Cabling Requirements – Overview
More details in:
• SSA Disk Subsystem, on page 10-2.
• Disk Array Subsystems (DAS), on page 10-23.
• JDA Subsystems, on page 10-54.
• EMC2 Symmetrix Disk Subsystems, on page 10-64.
• HA Library, on page 10-69.
SSA Disk Subsystem You will find: • MI List, on page 10-2. • General Information, on page 10-2. • Cabling Diagrams, on page 10-4. • Cabling Instructions, on page 10-16. • Optic Fibre Extender, on page 10-17. MI List Identificator Description SSAG007–0000 SSA DISK SUBSYSTEM RACK w/ four 4.5 GB Disk Drives SSAG009–0000 SSA DISK SUBSYSTEM RACK w/ four 9.
Put the disks in increasing order of their bar–code number, starting at the front (left-hand slot) and continuing at the rear (left-hand slot). The loop includes an adapter on each node.
Additional diagrams illustrate the possible use of the Optic Fibre Extender for implementing a Powercluster with a 500m distance between peer nodes and peer SSA disk subsystems, as a disaster recovery solution.
2. For PCI nodes (EPC400 and EPC1200) and for mixed configurations, sharing of a SSA loop is limited to 2 nodes with PCI adapters (6215) and MCA adapters (6219). Cabling Diagrams SSACF01: Cabling For 1 to 4 Nodes, With 1 SSA Cabinet and 1 to 4 Segments Figure 54. SSACF01: Base mounting diagram (1 to 4 nodes, 1 SSA cabinet, 1 to 4 segments).
Figure 55. SSACF01: Loop diagram: 1 to 4 nodes, 1 SSA cabinet, 1 to 4 segments.
Figure 56. SSACF01: Cabling example for 4 nodes, 1 SSA cabinet and 16 disks. Parts List Cabling example for 4 nodes, 1 SSA cabinet and 16 disks. 10-6 Item M.I.
SSACF02: Cabling For 1 to 6 Nodes, With 2 SSA Cabinets Figure 57. SSACF02: Base mounting diagram (1 to 6 nodes, two SSA cabinets, 1 to 8 segments). Figure 58. SSACF02: Loop diagram: 1 to 6 nodes, 2 SSA cabinets, 5 to 8 segments.
Figure 59. SSACF02: Cabling example for 6 nodes, 2 SSA cabinets and 32 disks. At least 8 disk drives are mandatory. Parts List Cabling example for 6 nodes, 2 SSA cabinets and 32 disks. 10-8 Item M.I.
SSACF03: Cabling For 5 to 8 Nodes with 1 SSA Cabinet Figure 60. SSACF03: Base mounting diagram (5 to 8 nodes, 1 SSA cabinet, 1 to 4 segments). Figure 61. SSACF03: Loop diagram: 5 to 8 nodes, 1 SSA cabinet, 1 to 4 segments.
Figure 62. SSACF03: Cabling example for 8 nodes, 1 SSA cabinet and 16 disks. Parts List Cabling example for 8 nodes, 1 SSA cabinet and 16 disks. 10-10 Item M.I.
As soon as there is more than one node connected to a single port on the SSA cabinet, the internal bypass must be suppressed. The by–pass switch is operated manually. Do not forget to unplug the cabinet before intervening.
For an 8–node configuration there is no by–pass at all. For a 7–node configuration there is one by–pass (between port 8 and port 9). For a 6–node configuration there are two by–passes: one between port 8 and port 9, the other between port 1 and port 16.
SSACF04: Cabling For 7 to 8 Nodes With 2 SSA Cabinets Figure 63. SSACF04: Base mounting diagram (7 to 8 nodes, 2 SSA cabinets, up to 8 segments). Figure 64. SSACF04: Loop diagram: (7 to 8 nodes, 5 to 8 segments).
Figure 65. SSACF04: Cabling example for 8 nodes, 2 SSA cabinets and 32 disks. Parts List Cabling example for 8 nodes, 2 SSA cabinets and 32 disks. Item M.I.
SSACF05: Cabling 1 to 8 Nodes With 3 SSA Cabinets Figure 66. SSACF05: Base mounting diagram (1 to 8 nodes, 3 SSA cabinets, up to 12 segments). Figure 67. SSACF05: Loop diagram: (1 to 8 nodes, 9 to 12 segments).
Figure 68. SSACF05: Cabling example for 8 nodes, 3 SSA cabinets and 48 disks. At least 12 disk drives are required. Parts List Cabling example for 8 nodes, 3 SSA cabinets and 48 disks. Item M.I.
Cabling Instructions
The cabling instruction lines are generated by the ordering document.
• The nodes are named N1, N2, .. N8.
• U1, U2, U3 designate the SSA units.
• P1, P4, P5, P8, P9, P12, P13, P16 and so on designate the ports on an SSA cabinet.
• An instruction line specifies (NiAj) the adapter of the node into which one cable end has to be plugged, and (UkPl) the port of the SSA unit into which the other cable end must be plugged. For instance, an instruction pairing N1A2 with U1P4 means: plug one end of the cable into adapter A2 of node N1, and the other end into port P4 of SSA unit U1.
Optic Fibre Extender
Usage Cases
CAUTION: The solutions suggested here are not offered as standard.
With the introduction of the Optic Fibre Extender, an SSA loop can be extended, making it possible to construct a disaster recovery architecture in which the Powercluster configuration is spread over two buildings within a campus. The maximum length of a fibre link between two optic fibre extenders is 600 meters.
Figures 69 and 70 illustrate disaster recovery solutions which differ in terms of number of nodes and shared SSA cabinets. They are extensions of configurations SSACF01 and SSACF02. In these extended configurations two physical loops are implemented. Figure 69 shows an implementation with one SSA cabinet per loop, Figure 70 with two cabinets per loop. In the first case, there is an extended optic fibre link between each node and the distant cabinet.
Figure 70. Optic Fibre Extender: Global diagram (1 pair of 2 nodes, 2 cabinets). Cabling Diagram With 1 or 2 Nodes, 1 SSA Cabinet on Each Side Figures 71 and 72 show configurations with two loops and one adapter per node. For higher availability it is better to have two adapters, one per loop. Figure 71. SSACF01 Configuration with two loops (1 or 2 nodes, 1 SSA cabinet on each side).
Figure 72. Cabling schema with FIbre Optical Extenders (1 or 2 nodes, 1 SSA cabinet on each side).
Cabling Diagram With 1, 2 or 3 Nodes, 2 SSA Cabinets on Each Side Figures 73 and 74 show configurations with two loops and one adapter per node. For higher availability it is better to have two adapters, one per loop. Figure 73. SSACF02 Configuration with two loops (1, 2 or 3 nodes, 2 SSA cabinets on each side).
Figure 74. Cabling diagram with FIbre Optical Extenders (1, 2 or 3 nodes, 2 SSA cabinets on each side).
Disk Array Subsystems (DAS) You will find: • MI List on page 10-23 • Usage Cases for SCSI Technology on page 10-26 • Cabling Diagrams for SCSI Technology on page 10-27 • Cabling for Configuration & Management on page 10-34 • Examples of Use for Fibre Channel on page 10-36 • Cabling Diagrams for Fibre Channel on page 10-44 MI List IDENTIFICATOR DESCRIPTION DASG016–0100 DAS 1300 RAID Subsystem – 10 Drive Rack Chassis DASG026–0000 DAS 2900 RAID Subsystem – 20 Drive Rack Chassis DASG028–0000 DAS 3200 RA
IDENTIFICATOR DESCRIPTION CKTG0105–0000 Add’nal Link Control Card (DAE 5000) CKTG0106–0000 Add’nal Link Control Card (DAS 57xx) PSSG023–0000 Base Battery Backup Rack (DAS 57xx) PSSG024–0000 Add’nal Battery Backup Rack (DAS 57xx) PSSG025–0000 Battery Backup Desk (DAS 57xx) PSSG026–0000 Dual Battery Backup Rack (DAS 57xx) PSSG027–0000 Dual Battery Backup Desk (DAS 57xx) PSSG028–0000 Add’nal PDU – Single Phase (for EPC1200/1200A) PSSG029–0000 Add’nal PDU Desk (DAS 57xx/DAE 5000) PSSG032–000
IDENTIFICATOR DESCRIPTION MSUG100–0D00 17.8GB HI Speed SCSI-2 Disk for DAS MSUG101–0D00 17.8GB HI Speed SCSI-2 Disk for DAS (OVER 10*8.8GB) MSUG102–0D00 17.8GB HI Speed SCSI-2 Disk for DAS (OVER 20*8.
IDENTIFICATOR DESCRIPTION DCCG140–0000 PCI Enhanced Fibre Channel Adapter DCCG147–0000 PCI 64–bit Copper Fibre Channel Adapter DCCG148–0000 PCI 64–bit Optical Fibre Channel Adapter Copper and Fiber cables, MIA, Hub and Extender Links for Fiber Channel Attachments IDENTIFICATOR DESCRIPTION DCOQ001-0000 FC MIA 1/M5/DCS LNCQ001-0000 FC-AL Hub 1GB 9-Ports RCKQ003-0000 Rack Kit/1 LNCQ001 RCKQ004–0000 Rack Kit/2 LNCQ001 FCCQ001-1800 Cord 2FO/M5/DSC 5M FCCQ001-2100 Cord 2FO/M5/DSC 15M FCDF001
Cabling Diagrams for SCSI Technology Parts List Item M.I. Designation Length FRU CKTG070–0000 Y SCSI cable (68MD/68MD) 1m 909920001–001 CKTG049–0000 16 Bit Y-cable – IBM52G4234 CBLG137–1200 SCSI-2 F/W adapter to DAS – 3 3m DGC005–041274–00 CBLG137–1800 SCSI-2 F/W adapter to DAS – 6 6m DGC005–041275–00 CBLG097–1000 Wide SP cable DAS to DAS 0.5m DGC005–040705 CBLG111–1000 DE F/W Node to Node cable 0.6m IBM52G4291 CBLG112–1400 DE F/W Node to Node cable 2.
DASCF02: Cabling for: Single SP / Single SCSI with 1 node – Daisy chained DAS Figure 76. DASCF02: Single SP / Single SCSI with 1 node – Daisy chained DAS. DASCF03: Cabling for: Dual SP / Dual SCSI with 1 node – 1 DAS Figure 77. DASCF03: Dual SP / Dual SCSI with 1 node – 1 DAS.
DASCF04: Cabling for: Dual SP / Dual SCSI with 1 node – Daisy chained DAS Figure 78. DASCF04: Dual SP / Dual SCSI with 1 node – Daisy chained DAS. DASCF05: Cabling for: Single SP / Single SCSI with up to 4 nodes – one DAS (1) Figure 79. DASCF05: Single SP / Single SCSI with up to 4 nodes – one DAS (1). See also Figure 80.
DASCF06: Example of Single SP / Single SCSI with up to 4 nodes – one DAS (2) Figure 80. DASCF06: Example of Single SP / Single SCSI with up to 4 nodes – one DAS (2). See also Figure 79. DASCF07: Cabling for: Single SP / Single SCSI with up to 4 nodes – Daisy chained DAS (1) Figure 81. DASCF07: Single SP / Single SCSI with up to 4 nodes – Daisy chained DAS (1).
DASCF08: Cabling for: Single SP / Single SCSI with up to 4 nodes – Daisy chained DAS (2) Figure 82. DASCF08: Single SP / Single SCSI with up to 4 nodes – Daisy chained DAS (2). DASCF9: Cabling for: Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (1) Figure 83. DASCF09: Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (1).
DASCF10: Cabling for Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (2) Figure 84. DASCF10: Dual SP / Dual SCSI with up to 4 nodes – 1 DAS (2). DASCF11: Cabling Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (1) Figure 85. DASCF11: Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (1).
DASCF12: Cabling Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (2) Figure 86. DASCF12: Dual SP / Dual SCSI with up to 4 nodes – Daisy chained DAS (2).
Cabling for Configuration & Management
EPC800, EPC1200, EPC1200A, EPC2400, EPC430 and EPC440 Nodes
The following cabling configuration requires a serial multi-port asynchronous card. Connect the RS232 cable to a free port on the multi-port asynchronous board of a node that shares the DAS. For a single DAS with one SP, connect the DAS to the first node, as shown.
Figure 87. Cabling for Configuration & Management, 1 DAS, 1 SP.
EPC400 Node
A multi–port asynchronous board is not appropriate. A native serial port is suitable for DAS management through a serial line.
DAS Management Through SCSI Links
In any case, DAS management can be performed through the SCSI links from the nodes to which the DAS subsystem is attached, by using the Navisphere application from a graphical terminal. The remote maintenance option consumes the S2 serial port on a node when an external modem is connected to that node.
Examples of Use for Fibre Channel
The following only applies to PCI nodes (EPC400/430/440, EPC1200, EPC1200A and EPC2400) with the Clariion DAS fibre models. This includes the DAS 3500, the DAS 57xx, and the DAS5300 (DPE) with its associated DAE. There are four types of Clariion storage systems available in a rackmount version.
• DAS5700: 10 to 120 disk RAID subsystems. A DAS5700 includes one DPE and additional DAEs.
Figure 91.Disk Array Enclosure – DAE DAS management software is the Navisphere application. The communication bridge between the Navisphere application and the DPE array is the Navisphere agent. The Navisphere agent resides on every DPE array’s node and communicates directly with the storage system firmware. It requires a graphical interface for setting up configuration parameters. In an EPC configuration with a PowerConsole, the Navisphere application is integrated in the ClusterAssistant launch board.
The following table describes the intended uses of the different configurations. Diagram number Nb of Nb of Nb of loops nodes adapters per node HACMP on each node ATF on each node Nb of Nb of DAS SPs per DAS Nb of Notes hubs SLOOP00 1 1 1 or 2 No No 1 or 2 0 0 4. SLOOP01 1 1 1 No No 1 1 or 2 linked 0 or 1 3. SLOOP02 1 2 1 No No 1 1 0 – SLOOP03 1 N>1 1 No No D>1 1 or 2 linked 1 1. SLOOP04 2 2 2 Yes No 2 0 2 4.
2. DAS 5300 model with a RAID subsystem, including: • Either a single–SP DPE or a dual–SP including a DAE disk drawer • One or two chained DAE disk drawers inside an EPC1200/A/2400/400/430 I/O rack, or in a rack containing an EPC440 drawer) • One power supply per SP and as many additional power supplies as chained DAE drawers 3.
Rack 400: CKTG109-0000 – Rackmount option (DAE 5000) Rack 1200: CKTG110-0000 – Rackmount option (DAE 5000) Disk Drives: MSUG110-0F00 – 8.8GB Fibre DAE Disk (10 000rpm) MSUG111-0F00 – 17.8GB Fibre DAE Disk (10 000rpm) Attachment: 1 x DCCG141-0000 – PCI Fibre Channel Adapter 1 x DCCG147-0000 – PCI Fibre Channel Adapter 1 x DCCG148-0000 – PCI Fibre Channel Adapter 1 x FCCQ002-2000 – Cord 2CU/DB9 10M Fibre Channel Hub The FC–AL hub of Escala offer is the Gadzoox’s FCL1063TW hub.
The micro-modem referenced ME62AF (said mini-driver) in Blackbox catalogues is an example of what you can purchase to extend RS232 lines. The physical characteristics are: • Protocol asynchronous • Speed 9.6 kbps • Transmission Line 2 twisted pair (Wire gauge: 24-AWG, i.e. 0.5mm) • Operation Full duplex, 4-wire • Connectors DTE/DCE DB-25 female • Size 1.3cm x 5.3cm x 10.9cm • Weight 0.1kg The maximum length of a fibre link between two MIA is 500 meters.
• Size 1.3cm x 5.3cm x 10.9cm • Weight <0.1kg Installation of micro-modem Even if the micro-modem are not delivered with EPC product, the following indicates the simple steps to install a micro-modem model ”Mini Driver ME762A-F”. 1. Connect the 4-wire telephone line to the unit’s 5-screw terminal block. 2. Set the DCE/DTE switch to the DCE position, since you are connecting the micro-modem to node (a DTE). 3. Cabling: XMT + on RCV +, XMT – on RCV – a.
Figure 97.DAS Fibre Channel – Configuration for Micro-modem Multi-Function Line Drivers For setting an extended serail line between a Console Concentrator and the S1 port of a distant node, you can use a pair of micro-modems. A micro-modem ME762A-M or ME657A-M on the Console Concentrator side and a micro-modem ME762A-F or ME657A-F on the node side.
Cabling Diagrams for Fibre Channel Parts List Item M.I.
SLOOP00: Single Loop, 1 Node, 1 or 2 DAE Figure 99.SLOOP00: Single Loop, 1 Node, 1 or 2 DAE. SLOOP01: Single Loop, 1 Node, 1 DAS with 1 SP) Figure 100.SLOOP01: Single Loop, 1 Node, 1 DAS with 1 SP).
SLOOP02: Single Loop, 2 Nodes, 1 DAS (1 SP) Figure 101.SLOOP02: Single Loop, 2 Nodes, 1 DAS (1 SP). SLOOP03: Single Loop, 1 Hub, N Nodes, D DAS with 1 SP 2 < n + D < 10 Figure 102.SLOOP03: Single Loop, 1 Hub, N Nodes, D DAS (1 SP).
SLOOP04: Two Loops, 2 Nodes, 2 DAEs (1 LCC) The following applies to EPC400/430/440 and EPC1200A/2400 HA packages. It is to be used with HACMP/ES 4.3 cluster software. Figure 103.SLOOP04: Two Loops, 2 Nodes, 2 DAEs (1 LCC).
DLOOP01: Dual Loop, 1 Node with 2 Adapters, 1 DAS with 2 SPs Figure 104.DLOOP01: Dual Loop, 1 Node with 2 Adapters, 1 DAS with 2 SPs. DLOOP04: Two Loops, 2 Nodes, 1 DAS with 2 SPs Figure 105.DLOOP04: Two Loops, 2 Nodes, 1 DAS with 2 SPs.
DLOOP02: Dual Loop, 2 Nodes, 1 DAS with 2 SPs Figure 106.DLOOP02: Dual Loop, 2 Nodes, 1 DAS with 2 SPs.
DLOOP03: Dual Loop, Two Hubs, N Nodes, D DAS with 2 SPs Figure 107.DLOOP03: Dual Loop, Two Hubs, N Nodes, D DAS with 2 SPs.
XLOOP01: 1 Node, Single or Dual Loop, 1 Deported DAS Figure 108.XLOOP01: 1 Node, Single or Dual Loop, 1 Deported DAS. XLOOP02: 2 Nodes, Dual Loop, 2 Hubs, 2 DAS (one Deported) Figure 109.XLOOP02: 2 Nodes, Dual Loop, 2 Hubs, 2 DAS (one Deported).
XLOOP02: 2 Nodes, Dual Loop, 4 Hubs, 2 DAS Figure 110.XLOOP02: 2 Nodes, Dual Loop, 4 Hubs, 2 DAS.
DSWITCH01: Dual Switch, N Nodes, D DAS with 2 SPs Figure 111.DSWITCH01: Dual Switch, N Nodes, D DAS with 2 SPs.
JDA Subsystems AMDAS JDA disk subsystems (End Of Life) are only available on EPC800 nodes. You will find: • MI List, on page 10-54 • Examples of Use, on page 10-54 • Cabling Diagrams, on page 10-55 • Configuration Procedure, on page 10-60 • Using AMDAS JBOD disks as system disk extension, on page 10-62 MI List M.I. Designation Length FRU DRWF006–0000 Just a Bunch of Disks Array Drawer MSUF070–0J00 4.2GB Hi Speed Disk Drive (JDA) MSUF073–0J00 9.
When it is common to two nodes, the disk cabinet can be used as a system disk extension or as a shared disk subsystem. In the former case the disks are not shared: each node possesses its own SCSI bus. In the latter case the configuration makes it possible to survive a node failure. There are two SCSI adapters per node, where each adapter is connected to a distinct SP. The first adapter of a node gives access to the disks on plates (e.g.
Figure 113. JDACF02: 1 node + 1 controller, 1 SCSI bus, 2 plates. JDACF03 2 nodes + 2 controllers (1 per node), 2 SCSI buses, 2 plates Figure 114. JDACF03: 2 nodes + 2 controllers (1 per node), 2 SCSI buses, 2 plates. JDACF04 2 nodes + 2 controllers (1 per node), 2 SCSI buses, 4 plates Figure 115. JDACF04: 2 nodes + 2 controllers (1 per node), 2 SCSI buses, 4 plates.
JDACF05 1 node + 2 controllers, 2 SCSI buses, 2 plates Figure 116. JDACF05: 1 node + 2 controllers, 2 SCSI buses, 2 plates. JDACF06 1 node + 2 controllers, 2 SCSI buses, 4 plates Figure 117. JDACF06: 1 node + 2 controllers, 2 SCSI buses, 4 plates.
JDACF07 2 nodes + 1 controller per node – HA mode , 1 shared SCSI bus, 1 plate Figure 118. JDACF07: 2 nodes + 1 controller per node – HA mode , 1 shared SCSI bus, 1 plate. JDACF08 2 nodes + 1 controller per node – HA mode , 1 shared SCSI bus, 2 plates Figure 119. JDACF08: 2 nodes + 1 controller per node – HA mode , 1 shared SCSI bus, 2 plates.
JDACF09 2 nodes + 2 controllers per node – HA mode, 2 shared SCSI buses, 2 plates Figure 120. JDACF09: 2 nodes + 2 controllers per node – HA mode, 2 shared SCSI buses, 2 plates. JDACF10 2 nodes + 2 controllers per node – HA mode, 2 shared SCSI buses, 4 plates Figure 121. JDACF10: 2 nodes + 2 controllers per node – HA mode, 2 shared SCSI buses, 4 plates.
Configuration Procedures
The following gives the installation procedure of the AMDAS JDA in an EPC800 configuration. This procedure is not intended to replace the AMDAS documentation set, from which these procedures are extracted.
Configurations for JDACF03 & JDACF04 1. Follow the steps 1 to 3 of procedure above for each node to connect to an AMDAS. 2. Link the other branch of the Y cable of a node and the first SCSI bus (J03) on AMDAS with a CBLG 157 cable. Link the other branch of the Y cable of the other node and the first SCSI bus (J07) on AMDAS with a CBLG 157 cable. 3. Connect the two terminators to the connectors J04 and J08 on the AMDAS. 4.
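After the cabling steps, a quick way to check that both nodes actually see the AMDAS disks is sketched below (the hdisk numbering is system-dependent):

   cfgmgr            # (re)run device configuration after cabling
   lsdev -Cc disk    # the AMDAS disks should appear as additional hdiskN devices on both nodes
   lspv              # on shared buses, the PVIDs of the shared disks must match on the two nodes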
Using AMDAS JBOD Disks as a System Disk Extension
Building a System Disk
1. Stop HACMP gracefully: smit clstop
2. Stop all the applications.
3. Make a system backup of the currently running hdisk0.
4. Reboot the node on the AIX installation CD–ROM in service mode.
5. If the maintenance screen appears, type: 6 {System Boot}, then 0 {Boot from List}.
6. Answer the questions: choice of system console and language used for installation.
7.
4. Build a copy of each of the other logical volumes of hdisk0 on hdisk1:
mklvcopy hd1 2 hdisk1 # Filesystem /home
mklvcopy hd2 2 hdisk1 # Filesystem /usr
mklvcopy hd3 2 hdisk1 # Filesystem /tmp
mklvcopy hd4 2 hdisk1 # Filesystem / (root)
mklvcopy hd6 2 hdisk1 # paging space
mklvcopy hd8 2 hdisk1 # jfslog
mklvcopy hd9 2 hdisk1 # Filesystem /var
5. Also build a copy of all user filesystems.
6.
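The end of this procedure corresponds to the customary completion of a mirrored rootvg on AIX. The following is a sketch of those usual final steps, assuming hdisk1 is the copy target as above (it is not quoted from this guide):

   mklvcopy hd5 2 hdisk1              # boot logical volume, if not already copied above
   syncvg -v rootvg                   # synchronize the newly created copies
   bosboot -ad /dev/hdisk1            # write a boot image onto the second disk
   bootlist -m normal hdisk0 hdisk1   # allow the node to boot from either disk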
EMC2 Symmetrics Disk Subsystem MI List M.I. Designation DRWF006–0000 Just a Bunch of Disk Array Drawer CDAF333–1800 CDA3330–18 up to 32DRV–8SLT CDAF343–9000 CDA3430–9 up to 96DRV–12SLT CDAF370–2300 CDAF3700–23 up to 128DRV–20SLT MSUF303–1802 DRV3030–182 18X2GB 3,5” MSUF303–2302 DRV3030–232 23X2GB 5,25” CMMF001–0000 512MB Cache Mem. Init. Order CMMF002–0000 768MB Cache Mem. Init. Order CMMF003–0000 1024MB Cache Mem. Init. Order CMMF004–0000 1280MB Cache Mem. Init.
M.I. Designation Length FRU CMOF006–0000 1024MB Cache UPG.for CDA4000 MSOF303–9002 DRV3030–92 9GBX2 3,5” MSOF303–2302 DRV3030–232 23GBX2 5,25” CBLF017–1800 6M SCSI /non AIX(SUN,HP,DEC..
Examples of Use Point-to-point Connection One port of a Symmetrix box is connected to an Escala server through a single adapter (either Ultra Wide Differential SCSI or Fibre Channel). As there is no redundancy of any component on the link, a single failure (cable, adapter, Channel Director) may cause the loss of all data. Figure 122.
Figure 123. Multiple connection of an EMC2 Symmetrix subsystem Base Configuration with HACMP The usual HA configuration with Symmetrix subsystems is to duplicate the point to point connection and to configure the Symmetrix in order to make the data volumes available to both servers through the two separate host ports. Figure 124.
Configuration with HACMP and PowerPath (multiple paths)
Figure 125. Configuration of an EMC2 Symmetrix subsystem with PowerPath
PowerPath is a software driver which allows multiple paths between a node and a Symmetrix subsystem, to provide path redundancy and to improve performance and availability.
HA Library MI List IDENTIFICATOR DESCRIPTION MTSG014–0000 MTSG017–0000 CKTG080–0000 EXTERNAL ADD’L MEDIA (DLT4000 & DLT7000) 20/40GB EXTERNAL DLT DRIVE 35/70GB EXTERNAL DLT DRIVE START & CLEAN UP KIT for DLT CTLF026–V000 CTSF007–V000 LXB 4000 LibXpr LXB RackMount w/ 1 DLT4000 DLT4000 for LibXpr LXB CTLF028–V000 CTSF008–V000 LXB 7000 LibXpr LXB RackMount w/ 1 DLT7000 DLT7000 for LibXpr LXB MSCG023–0000 MSCG020–0000 MSCG030–0000 CKTG049–0000 CKTG050–0000 CKTG070–0000 CKTF003–0000 CBLG157–1700 CBLG102
HA DLT7000 (DE – 68mD) Y cable / adapter [CKTG070–0000 CKTG049–0000 [CKTG070–0000 6m Cable / adapter CBLG157–1700 CBLG157–1700 CBLG157–1700 DLT not shared Cable for DLT7000 CBLG157–1700 Cable for DLT4000 CBLG158–1700 CBLG102–1700 CBLG152–1900 CBLG157–1700 CBLG158–1700 Case of the Shared Library 1. In addition to the Y–cable there is a terminator feed thru included in CKTF003 that allows to plug the 68mD cable (CBLG157–1700) into the DLT4000 (50mD). 2.
Case of a Shared Library The following depicts a configuration example of an EPC400 with 2 nodes sharing an LXB for high availability only. Figure 127. Overall Diagram – EPC400 with 2 nodes sharing an LXB. Cabling Legend Item M.I.
Cabling Examples for Non-Shared Libraries
No Y cables are used. An external terminator is used to terminate a SCSI chain. One external terminator is included in the library as standard. A second external terminator (90054001-001) should also be provided in a library with two drives. For performance reasons, it is not recommended to chain the drives in an LXB7000 library.
Cabling for: 1 Node – 1 SCSI Adapter – 1 Attached Library – 1 or 2 Drives
Figure 128.
Cabling Examples for Shared Libraries
Cabling for: 2 Nodes – 1 Adapter per Node – 1 Drive
Figure 130. LIBCF01: 2 Nodes – 1 Adapter per Node – 1 Drive
Cabling for: 2 Nodes – 1 Adapter per Node – 2 Drives
Figure 131.
Cabling for: 2 Nodes – 2 Adapters per Node – 2 Drives
Figure 132.
Chapter 11. Tape Subsystems Cabling Requirements Summarizing tape drive applications. Tape Subsystems – Overview Two tape subsystems are available for shelf mounting with the Escala Powercluster series: • DLT 4000 (MI MTSG014) • VDAT Mammoth (MI MTSG015). The DLT 4000 drive can be connected to EPC400 only. The VDAT Mammoth can be connected to EPC400 and EPC800 only.
Chapter 12. Remote Maintenance Describes remote maintenance solutions. Remote Maintenance – Overview Details in: • Modems in Powercluster Configurations. • Parts List, on page 12-2. • Modem on PowerConsole, on page 12-3. • Modem on a Node’s S2 Plug, on page 12-5. • Using Two Modems, on page 12-7. Modems in Powercluster Configurations RSF (Remote Services Facilities) performs system error monitoring and handles communications for remote maintenance operations.
For configuration RMCF02, the internal modem of the S100 is prepared and configured at manufacture. In other configurations, the integrated modem of any EPC400 is also prepared at manufacture (configuration of the modem and RSF dial-in). The external modem is provided, installed and configured on the client site by the Customer Service.
Modem on PowerConsole Cabling Diagram with Console Concentrator Diagram with Escala S100 Figure 133 shows an example which is relevant for any Powercluster configuration with an Escala S100 based PowerConsole, though this figure shows a configuration with a dedicated–administration network. In that case the modem is prepared and configured (RSF callscarf module on S100, and RSF cluster module on every node). Figure 133.
Diagram with Escala S100 and one modem per node
Figure 134 shows an example which is relevant for any Powercluster configuration with an Escala S100 based PowerConsole 2, though this figure shows a configuration with a dedicated–administration network. In that case you may have one modem on the PowerConsole and/or one modem per node. This solution is safer because, if the PowerConsole is out of service, you will still be able to use the RSF facilities.
Figure 134.
Modem on a Node’s S2 Plug
Basic Cabling for a Uni-node Configuration
• On an EPC800 node, the modem is external.
• On an EPC400 node, the modem is integrated (ISA board) inside the drawer.
• On an EPC1200 or EPC1200A system, the modem is external.
For the EPC800, the modem support is mounted in the rack. An external modem is connected to the native serial port S2 on an EPC800 or EPC1200 node. The integrated modem of an EPC400 node is configured together with the RSF software.
Figure 136. RMCF03: Remote Maintenance: Modem on a Node’s S2 Plug with Console Concentrator
Example of Use
This solution is recommended:
• when there is a local ClusterConsole (as depicted in the figure)
• or when the PowerConsole is not wired to the Console Concentrator.
In a multiple-node EPC400 configuration there should be one node with an integrated modem.
Using Two Modems
Two modems are provided with every 2-node configuration which does not include a console concentrator. When extending a uni-node configuration with an additional node, an external modem is added. An original uni-node EPC RT model is provided with a modem integrated in the CPU drawer. In any EPC400 configuration, there should be at least one node with an integrated modem.
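A rough way to read the rules in this section is as a modem count per configuration. The sketch below is an interpretation only (function and parameter names are assumptions): without a console concentrator each node needs a modem, and integrated EPC400 modems count toward that total.

# Illustrative interpretation of the modem provisioning rules above.
# Function and parameter names are assumptions for this example.

def external_modems_needed(nodes, integrated_modems, has_concentrator):
    """Estimate how many external modems a configuration requires."""
    if has_concentrator:
        # With a console concentrator, a single modem path (for instance
        # on the PowerConsole) is assumed to be sufficient.
        return 0 if integrated_modems >= 1 else 1
    # Without a concentrator, one modem per node; integrated ISA modems
    # in EPC400 CPU drawers already cover their own node.
    return max(0, nodes - integrated_modems)

# Uni-node EPC RT (integrated modem) extended with a second node:
print(external_modems_needed(nodes=2, integrated_modems=1,
                             has_concentrator=False))   # 1 external modem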
Appendix A. Marketing Identifier Cross-References
Provides a way to trace the use, in this document, of the Marketing Identifiers (M.I.) associated with EPC cabling. The list maps each M.I. to the pages on which it is used.
CBLG102-1700, 10-51 CBLG102-1700, 10-40 CBLG104-2000, 7-3, 7-7 CBLG104-2000, 7-15, 7-26, 7-38, 12-2 CBLG105-1800, 7-7 CBLG105-1800, 7-15, 7-23, 7-26, 7-38, 12-2 CBLG106-2000, 7-14, 7-15, 12-2 CBLG106-2000, 7-3, 7-7, 7-23, 7-24, 7-26, 7-33, 7-34, 7-35, 7-36, 7-38, 7-43, 8-5, 9-7, 9-8 CBLG111-1000, 10-23, 10-24 CBLG112-1400, 10-23, 10-24 CBLG137-1200, 10-23, 10-24 CBLG137-1800, 10-23, 10-24 CBLG152-1900, 10-51 CBLG157-1300, 10-40 CBLG157-1700, 10-40, 10-51 CBLG157-1900, 10-40 CBLG157-2100, 10-40 CBLG158-1700,
CKTG080-0000, 10-51 CKTG094-0000, 7-2 CMMF001-0000, 10-50 CMMF002-0000, 10-50 CMMF003-0000, 10-50 CMMF004-0000, 10-50 CMMF005-0000, 10-50 CMMF006-0000, 10-50 CMMF007-0000, 10-50 CMMF008-0000, 10-50 CMMF009-0000, 10-50 CMMF010-0000, 10-50 CMMF011-0000, 10-50 CMMF012-0000, 10-50 CMMG024-0000, 10-23 CMMG025-0000, 10-23 CMMG037-0000, 10-23 CMMG047-0000, 10-23 CMMG059-0000, 7-35, 7-36 CMMG065-0000, 7-35, 7-36 CMMG112-0000, 7-33, 7-34 CMOF004-0000, 10-50 CMOF005-0000, 10-50 CMOF006-0000, 10-50 CMOG043-0000, 10-23
CSKU103-2100, Cluster PowerConsole (Europe), 7-36 CSKU103-P100, Cluster PowerConsole (UK), 7-35 CSKU103-U100, Cluster PowerConsole (US), 7-36 CSKU105-1000, Cluster PowerConsole (Escala S Series based) (France), 7-33 CSKU105-2000, Cluster PowerConsole (Escala S Series based) (Europe), 7-33 CSKU105-P000, Cluster PowerConsole (Escala S Series based) (UK), 7-33 CSKU105-U000, Cluster PowerConsole (Escala S Series based) (US), 7-34 CSKU115-2100, Console Concentrator (Europe), 7-14 CSKU116-2000, 7-10 CSKU116-P000,
DCDF007-0000, 10-50 DCKG010-0000, 8-2 DCKG011-0000, 8-2 DCKG012-0000, 8-2 DCKG013-0000, 9-2 DCKG014-0000, 9-2 DCKG015-0x00, 9-2 DCKG016-0000, 9-2 DCKU101-0100, 8-2 DCKU102-0100, 8-2 DCKU107-0000, 9-2 DCKU108-0100, 9-2 DCKU109-0000, 9-3 DCKU110-0000, 9-3 DCKU117-0000, 8-3 DCOQ001-0000, 10-33 DCUG001-000D, Cluster PowerConsole Extensions (Escala S Series), 7-34 DCUG001-000E, Cluster PowerConsole Extensions (Escala S Series), 7-34 DCUG001-000F, Cluster PowerConsole Extensions (Escala S Series), 7-34 DCUG001-00
G GTFG039-0000, 7-2 GTFG039-0100, 7-2 GTFG042-0000, 7-33, 7-34 GTFG043-0000, 7-35, 7-36 GTFG044-0000, 7-35, 7-36 GTFG045-0100, 7-2 I INTCF01, 8-4 INTCF05, 9-4 INTCF06, 9-6 INTCF09, 8-4 INTCF10, 8-6 K KBU3031, 7-3 KBU3032, 7-3 KBU3033, 7-3 KBU3400, 7-23 KBU3405, 7-23 KBUG003-000F, 7-2 KBUG003-000B, 7-2 KBUG003-000E, 7-2, 7-33, 7-36 KBUG003-000F, 7-33, 7-35 KBUG003-000G, 7-2 KBUG003-000H, 7-2, 7-33, 7-34, 7-35, 7-36 KBUG003-000K, 7-2 KBUG003-000N, 7-2 KBUG003-000P, 7-2 KBUG003-000S, 7-2 KBUG003-000T, 7-2 KB
MSCG023-0000, 10-23, 10-51 MSCG024-0000, 10-2 MSCG029-0000, 10-2 MSCG030-0000, 10-23, 10-51 MSCG032-0000, 10-23 MSCG036-0000, 10-2 MSCG038-0000, 10-2 MSCG039-0000, 10-2 MSCU101-0000, 10-2 MSKF005-0000, 10-40 MSKG006-0000, 10-23 MSOF303-9002, 10-50 MSOF303-2302, 10-50 MSPG003-0000, 10-23 MSPG003-0100, 10-23 MSPG005-0000, 10-23 MSPG006-0000, 10-23 MSPG007-0000, 10-23 MSUF070-0J00, 10-40 MSUF073-0J00, 10-40 MSUF303-9002, 10-50 MSUF303-2302, 10-50 MSUG013-0000, Cluster PowerConsole Extensions (Escala S Series),
MTUG029-0P00, Cluster PowerConsole Extensions (Escala S Series), 7-34 MTUG032-0P00, Cluster PowerConsole Extensions (Escala S Series), 7-34 P PDUG008-0000, 7-2 PDUG008-0000, 7-33, 7-34, 7-35, 7-36 PSSF007-0000, 10-40 PSSG002-0100, 10-23 PSSG004-0000, 10-23 PSSG005-0000, 10-23 PSSG006-0000, 10-23 PWCCF02, 7-26 PWCCF03, 7-28 PWCCF04, 7-41 PWCCF05, 7-29 PWCCF06, 7-42 S SISF004-0000, 10-50 SISF005-3300, 10-50 SISF005-3400, 10-50 SISF005-3700, 10-50 SISF006-0000, 10-50 SISF007-0000, 10-50 SSAG004-0000, 10-2 SS
V VCW3630, 7-7, 7-15, 7-26, 7-33, 7-34, 7-35, 7-36, 7-37, 7-38, 12-2 X XSMK003-0000, 7-24 XSMK004-0000, 7-23 XSTK412-04HE, 7-23 XSTK415-04HE, 7-23
Appendix B. Technical Support Bulletins
Where to find Technical Support Bulletins: linking M.I.s to spare parts, where M.I.s are used, and the history of Part Numbers.
Technical Support Bulletins – Overview
Support Bulletins are available on-line, via the Web, and provide up-to-date sources of data, including:
• correspondence between M.I.s and Spare Parts
• correspondence between M.I.s and Cables
• history of changes to Part Numbers
• the complete spare parts catalogue (provided as a downloadable compressed file).
Appendix C. PCI/ISA/MCA Adapter List
Lists the adapters (controllers) and their identification labels.
Adapter Card Identification
Adapter cards are identified by a label visible on the external side of the metallic plate guide. For further details about controller descriptions, configuration, upgrading and removal procedures, refer to Controllers in the Upgrading the System manual. A list of the controller cards supported by your system is provided below.
ISA Bus
Label   Description
B5-2    ISDN Controller
B5-A    Internal Modem ISA FRANCE
B5-B    Internal Modem ISA UK
B5-C    Internal Modem ISA BELGIUM
B5-D    Internal Modem ISA NETHERLAND
B5-E    Internal Modem ISA ITALY
MCA Bus
Label   Description
4-D     SSA 4 Port Adapter
4-G     Enhanced SSA 4 Port Adapter
4-M     SSA Multi-Initiator/RAID EL Adapter
C-2
Appendix D. Cable and Connector Identification Codes
Details in:
• Cable Identification Markings
• Connector Identification Codes
Cable Identification Markings
Each end of any cable connecting two items has a FROM–TO label conforming to a specific format and to object identification rules. Figure 139 shows the format of a FROM–TO label and an example of a labeled cable between a DAS and a CPU.
Figure 139. Cable Identification Codes on Labels.
Object Identification for FROM–TO Labels
CPU       CPU Drawer
PCI       PCI Expansion Drawer (EPC400)
CEC       Computing Rack (EPC1200)
I/O       I/O (EPC1200)
CONS      System Console
PWCONS    Power Console
SSA       SSA Disk Sub-system
DAS       DAS Disk Sub-system
JBOD      AMDAS/JBOD
LXB       Tape Drive Sub-system
CS2600    CS2600 Concentrator
CSCONS    CS2600 Concentrator Administration Console
HUB       Ethernet or FDDI Hub
FC–AL     Fibre Channel Hub
DISK      Media Drawer
VDAT      MAMMOTH VDAT
DLT       DLT4000/7000
QIC       QIC MLR1 Reader
Each object in a cabinet is identified with its keyword followed by an occurrence number (for example, DAS1 or CPU1).
Connector Identification Codes
DAS 3500 Disk Sub-system
SPA/1         Fibre channel connector of Service processor A
SPB/1         Fibre channel connector of Service processor B
SPA/RS232     RS232 of Service processor A
SPB/RS232     RS232 of Service processor B
JBOD Disk Sub-system
J21, J22, J31    Asynchronous Console
J01 to J08       SCSI Bus
SSA Disk Sub-system
A1, A2, B1, B2   Output connector of the SSA Adapter
Jx               SSA Disk sub-system connector (01
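To make the labelling scheme concrete, the short sketch below assembles a FROM–TO marking from an object code, an occurrence number and a connector code. The exact label layout of Figure 139 is not reproduced here; the format string, function names and the example connectors are assumptions used only for illustration.

# Illustrative sketch of building FROM-TO cable markings from the object
# and connector codes listed above. The label layout is an assumption;
# refer to Figure 139 for the real format.

OBJECT_CODES = {
    "CPU", "PCI", "CEC", "I/O", "CONS", "PWCONS", "SSA", "DAS", "JBOD",
    "LXB", "CS2600", "CSCONS", "HUB", "FC-AL", "DISK", "VDAT", "DLT", "QIC",
}

def label_end(obj, occurrence, connector):
    """One end of a label, e.g. ('DAS', 1, 'SPA/1') -> 'DAS1 SPA/1'."""
    if obj not in OBJECT_CODES:
        raise ValueError(f"unknown object code: {obj}")
    return f"{obj}{occurrence} {connector}"

def from_to_label(src, dst):
    """Combine both ends into a single FROM-TO marking."""
    return f"FROM: {label_end(*src)}  TO: {label_end(*dst)}"

# Cable between a DAS service processor and a CPU drawer serial port:
print(from_to_label(("DAS", 1, "SPA/1"), ("CPU", 1, "S2")))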
Glossary
This glossary contains abbreviations, key words and phrases that can be found in this document.
ATF     Application-Transparent Failover.
CPU     Central Processing Unit.
DAS     Disk Array Subsystem.
EPC     Escala Power Cluster.
FC–AL   Fibre Channel Arbitrated Loop.
FDDI    Fibre Distributed Data Interface.
MDI     Media Dependent Interface.
MI      Marketing Identifier.
MIA     Media Interface Adapter.
PCI     Peripheral Component Interconnect (Bus).
PDB     Power Distribution Board.
PDU     Power Distribution Unit.
Index Numbers 8-Port Asynch. M.I. DCCG067-0000, 3-1 M.I.
Console concentrator, 12-1 Controller, List.
EMC2 Symmetrics Disk Subsystem, 10-64 HA Library, 10-69 JDA Disk Subsystem, 10-54 SSA Disk Subsystem, 10-2 System Console & Graphics Display, 6-2 Micro modem, 10-17 Modem Node’s S2 Plug, 12-5 PowerConsole, 12-3 Use with PowerConsole, 6-44 Using Two Modems, 12-7 MSCG012-0000, 10-65 MSCG020-0000, 10-65 MSCG023-0000, 10-65 MSCG030-0000, 10-65 MSUG110-0F00, 10-40 MSUG111-0F00, 10-40 N Name Directories, Updating, 7-12 NetBackup, 6-38 Network Interfaces, Configuring, 7-11 Network Parameters, Settings for Testing
Vos remarques sur ce document / Technical publication remark form
Titre / Title: Bull ESCALA EPC Series EPC Connecting Guide
Nº Référence / Reference Nº: 86 A1 65JX 03
Daté / Dated: October 1999
ERREURS DETECTEES / ERRORS IN PUBLICATION
AMELIORATIONS SUGGEREES / SUGGESTIONS FOR IMPROVEMENT TO PUBLICATION
Your comments and suggestions will be examined carefully. If you wish a written reply, please include your complete mailing address below.
Technical Publications Ordering Form / Bon de Commande de Documents Techniques
To order additional publications, please fill in a copy of this form and send it via mail to:
Pour commander des documents techniques, remplissez une copie de ce formulaire et envoyez-la à :
BULL ELECTRONICS ANGERS
CEDOC
ATTN / MME DUMOULIN
34 Rue du Nid de Pie – BP 428
49004 ANGERS CEDEX 01
FRANCE
Managers / Gestionnaires :
Mrs. / Mme : C. DUMOULIN
Mr. / M : L.
BULL ELECTRONICS ANGERS
CEDOC
34 Rue du Nid de Pie – BP 428
49004 ANGERS CEDEX 01
FRANCE
ORDER REFERENCE
86 A1 65JX 03
Use the cut marks to get the labels.