FW V06.XX/HAFM SW V08.02.00 HP StorageWorks SAN High Availability Planning Guide (AA-RS2DD-TE, July 2004)
Table Of Contents
- SAN HA Planning Guide
- Contents
- About this Guide
- Introduction to HP Fibre Channel Products
- Product Management
- Planning Considerations for Fibre Channel Topologies
- Fibre Channel Topologies
- Planning for Point-to-Point Connectivity
- Characteristics of Arbitrated Loop Operation
- Planning for Private Arbitrated Loop Connectivity
- Planning for Fabric-Attached Loop Connectivity
- Planning for Multi-Switch Fabric Support
- Fabric Topologies
- Planning a Fibre Channel Fabric Topology
- Fabric Topology Design Considerations
- FICON Cascading
- Physical Planning Considerations
- Port Connectivity and Fiber-Optic Cabling
- HAFM Appliance, LAN, and Remote Access Support
- Inband Management Access (Optional)
- Security Provisions
- Optional Features
- Configuration Planning Tasks
- Task 1: Prepare a Site Plan
- Task 2: Plan Fibre Channel Cable Routing
- Task 3: Consider Interoperability with Fabric Elements and End Devices
- Task 4: Plan Console Management Support
- Task 5: Plan Ethernet Access
- Task 6: Plan Network Addresses
- Task 7: Plan SNMP Support (Optional)
- Task 8: Plan E-Mail Notification (Optional)
- Task 9: Establish Product and HAFM Appliance Security Measures
- Task 10: Plan Phone Connections
- Task 11: Diagram the Planned Configuration
- Task 12: Assign Port Names and Nicknames
- Task 13: Complete the Planning Worksheet
- Task 14: Plan AC Power
- Task 15: Plan a Multi-Switch Fabric (Optional)
- Task 16: Plan Zone Sets for Multiple Products (Optional)
- Index

Planning Considerations for Fibre Channel Topologies
■ Nonresilient single fabric — Directors and switches are connected to form a
single fabric that contains at least one single point of failure (fabric element or
ISL). Such a failure causes the fabric to fail and segment into two or more
smaller fabrics. A cascaded fabric topology (Figure 36) illustrates this design.
■ Resilient single fabric — Directors and switches are connected to form a
single fabric, but no single point of failure can cause the fabric to fail and
segment into two or more smaller fabrics. A ring fabric topology (Figure 37)
illustrates this design.
■ Nonresilient dual fabric — Half the directors and switches are connected to
form one fabric, and the remaining half are connected to form an identical but
separate fabric. Servers and storage devices are connected to both fabrics.
Each fabric contains at least one single point of failure (fabric element or
ISL). All applications remain available, even if an entire fabric fails.
■ Resilient dual fabric — Half the directors and switches are connected to
form one fabric, and the remaining half are connected to form an identical but
separate fabric. Servers and storage devices are connected to both fabrics. No
single point of failure can cause either fabric to fail and segment. All
applications remain available, even if one entire fabric fails and elements in
the surviving fabric also fail.
A dual-fabric resilient topology is generally the best design to meet
high-availability requirements. Another benefit of the design is the ability to
proactively take one fabric offline for maintenance without disrupting SAN
operations.
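The resilience distinction above can be checked mechanically: a fabric is resilient only if no single fabric element or ISL can, by failing, segment it. The following minimal Python sketch (the switch names, link lists, and helper functions are illustrative assumptions, not part of this guide) applies that test to a cascaded chain fabric and to a ring fabric, matching the nonresilient and resilient single-fabric examples described above.

```python
from collections import defaultdict, deque

def is_connected(nodes, links):
    """Breadth-first search: True if every node is reachable from the first."""
    if not nodes:
        return True
    adjacency = defaultdict(set)
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for peer in adjacency[queue.popleft()]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen == set(nodes)

def single_points_of_failure(nodes, links):
    """Return fabric elements and ISLs whose loss segments the fabric."""
    spofs = []
    for node in nodes:                      # simulate losing one director/switch
        remaining = [n for n in nodes if n != node]
        kept = [(a, b) for a, b in links if node not in (a, b)]
        if not is_connected(remaining, kept):
            spofs.append(node)
    for link in links:                      # simulate losing one ISL
        kept = [l for l in links if l != link]
        if not is_connected(nodes, kept):
            spofs.append(link)
    return spofs

switches = ["SW1", "SW2", "SW3", "SW4"]
cascaded = [("SW1", "SW2"), ("SW2", "SW3"), ("SW3", "SW4")]   # chain: nonresilient
ring     = cascaded + [("SW4", "SW1")]                        # closed ring: resilient

print(single_points_of_failure(switches, cascaded))  # middle switches and every ISL
print(single_points_of_failure(switches, ring))      # [] -- no single point of failure
```

Note that the sketch only tests fabric segmentation, as defined above; in practice the loss of an edge switch still isolates the devices attached to it, which is why dual attachment and dual fabrics are recommended for high availability.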
Redundant Fabrics
If high availability is important enough to require dual-connected servers and
storage, a dual-fabric solution is generally preferable to a dual-connected single
fabric. Dual fabrics maintain simplicity and reduce fabric routing tables, name
server tables, fabric updates, and Class F management traffic by 50%. In
addition, smaller fabrics are easier to analyze for performance, to fault-isolate,
and to maintain.
Figure 45 illustrates simple redundant fabrics. Fabric “A” and fabric “B” are
symmetrical, each containing one core director and four edge switches. All
servers and storage devices are connected to both fabrics.
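As a rough illustration of why this design halves per-fabric state and tolerates the loss of an entire fabric, the sketch below models the Figure 45 arrangement: two symmetric fabrics, each with one core director and four edge switches, and every server and storage device attached to both fabrics through separate ports. The device counts and names are hypothetical, chosen only to make the arithmetic visible.

```python
# Hypothetical dual-fabric model in the style of Figure 45: the eight
# servers and four storage arrays are assumptions for illustration.
servers = [f"server{i}" for i in range(1, 9)]
storage = [f"array{i}" for i in range(1, 5)]
devices = servers + storage

fabrics = {
    "A": {"switches": ["coreA", "edgeA1", "edgeA2", "edgeA3", "edgeA4"],
          "logins": {f"{dev}_portA" for dev in devices}},
    "B": {"switches": ["coreB", "edgeB1", "edgeB2", "edgeB3", "edgeB4"],
          "logins": {f"{dev}_portB" for dev in devices}},
}

# Per-fabric state: each name server sees only the ports logged into that
# fabric, and each routing table spans only that fabric's five domains.
for name, fabric in fabrics.items():
    print(f"Fabric {name}: {len(fabric['logins'])} name server entries, "
          f"{len(fabric['switches'])} domains")
# A single dual-connected fabric built from the same hardware would carry
# all 24 port logins and all 10 domains in every switch -- twice the state
# held per fabric here.

# Failure test: take fabric A offline and confirm every server still
# reaches every storage device through its fabric B port.
surviving = fabrics["B"]["logins"]
paths_ok = all(f"{srv}_portB" in surviving and f"{arr}_portB" in surviving
               for srv in servers for arr in storage)
print("All server/storage paths survive loss of fabric A:", paths_ok)
```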