Fabric OS Encryption Administrator's Guide Supporting RSA Data Protection Manager (DPM) Environments (Supporting Fabric OS v7.2.0)
Table Of Contents
- Contents
- About This Document
- Encryption Overview
- In this chapter
- Host and LUN considerations
- Terminology
- The Brocade Encryption Switch
- The FS8-18 blade
- FIPS mode
- Performance licensing
- Recommendation for connectivity
- Usage limitations
- Brocade encryption solution overview
- Data encryption key life cycle management
- Master key management
- Support for virtual fabrics
- Cisco Fabric Connectivity support
- Configuring Encryption Using the Management Application
- In this chapter
- Encryption Center features
- Encryption user privileges
- Smart card usage
- Using authentication cards with a card reader
- Registering authentication cards from a card reader
- Registering authentication cards from the database
- Deregistering an authentication card
- Setting a quorum for authentication cards
- Using system cards
- Enabling or disabling the system card requirement
- Registering system cards from a card reader
- Deregistering system cards
- Using smart cards
- Tracking smart cards
- Editing smart cards
- Network connections
- Blade processor links
- Encryption node initialization and certificate generation
- Steps for connecting to a DPM appliance
- Exporting the KAC certificate signing request (CSR)
- Submitting the CSR to a certificate authority
- KAC certificate registration expiry
- Importing the signed KAC certificate
- Uploading the CA certificate onto the DPM appliance (and first-time configurations)
- Uploading the KAC certificate onto the DPM appliance (manual identity enrollment)
- DPM key vault high availability deployment
- Loading the CA certificate onto the encryption group leader
- Encryption preparation
- Creating an encryption group
- Adding a switch to an encryption group
- Replacing an encryption engine in an encryption group
- High availability clusters
- Configuring encryption storage targets
- Configuring hosts for encryption targets
- Adding target disk LUNs for encryption
- Adding target tape LUNs for encryption
- Moving targets
- Tape LUN write early and read ahead
- Tape LUN statistics
- Encryption engine rebalancing
- Master keys
- Security settings
- Zeroizing an encryption engine
- Using the Encryption Targets dialog box
- Redirection zones
- Disk device decommissioning
- Rekeying all disk LUNs manually
- Thin provisioned LUNs
- Viewing time left for auto rekey
- Viewing and editing switch encryption properties
- Viewing and editing encryption group properties
- Encryption-related acronyms in log messages
- Configuring Encryption Using the CLI
- In this chapter
- Overview
- Command validation checks
- Command RBAC permissions and AD types
- Cryptocfg Help command output
- Management LAN configuration
- Configuring cluster links
- Setting encryption node initialization
- Steps for connecting to a DPM appliance
- Initializing the Fabric OS encryption engines
- Exporting the KAC certificate signing request (CSR)
- Submitting the CSR to a CA
- Importing the signed KAC certificate
- Uploading the CA certificate onto the DPM appliance (and first-time configurations)
- Uploading the KAC certificate onto the DPM appliance (manual identity enrollment)
- Creating a Brocade encryption group
- Client registration for manual enrollment
- DPM key vault high availability deployment
- Setting heartbeat signaling values
- Adding a member node to an encryption group
- Generating and backing up the master key
- High availability clusters
- Re-exporting a master key
- Enabling the encryption engine
- Zoning considerations
- CryptoTarget container configuration
- Crypto LUN configuration
- Impact of tape LUN configuration changes
- Decommissioning LUNs
- Decommissioning replicated LUNs
- Force-enabling a decommissioned disk LUN for encryption
- Force-enabling a disabled disk LUN for encryption
- SRDF LUNs
- Using SRDF, TimeFinder and RecoverPoint with encryption
- Configuring LUNs for SRDF/TF or RP deployments
- SRDF/TF/RP manual rekeying procedures
- Tape pool configuration
- Configuring a multi-path Crypto LUN
- First-time encryption
- Thin provisioned LUNs
- Data rekeying
- Deployment Scenarios
- In this chapter
- Single encryption switch, two paths from host to target
- Single fabric deployment - HA cluster
- Single fabric deployment - DEK cluster
- Dual fabric deployment - HA and DEK cluster
- Multiple paths, one DEK cluster, and two HA clusters
- Multiple paths, DEK cluster, no HA cluster
- Deployment in Fibre Channel routed fabrics
- Deployment as part of an edge fabric
- Deployment with FCIP extension switches
- Data mirroring deployment
- VMware ESX server deployments
- Best Practices and Special Topics
- In this chapter
- Firmware upgrade and downgrade considerations
- Configuration upload and download considerations
- Configuration upload at an encryption group leader node
- Configuration upload at an encryption group member node
- Information not included in an upload
- Steps before configuration download
- Configuration download at the encryption group leader
- Configuration download at an encryption group member
- Steps after configuration download
- HP-UX considerations
- AIX considerations
- Enabling a disabled LUN
- Decommissioning in an EG containing mixed modes
- Decommissioning a multi-path LUN
- Disk metadata
- Tape metadata
- Tape data compression
- Tape pools
- Tape block zero handling
- Tape key expiry
- Configuring CryptoTarget containers and LUNs
- Redirection zones
- Deployment with Admin Domains (AD)
- Do not use DHCP for IP interfaces
- Ensure uniform licensing in HA clusters
- Tape library media changer considerations
- Turn off host-based encryption
- Avoid double encryption
- PID failover
- Turn off compression on extension switches
- Rekeying best practices and policies
- KAC certificate registration expiry
- Changing IP addresses in encryption groups
- Disabling the encryption engine
- Recommendations for Initiator Fan-Ins
- Best practices for host clusters in an encryption environment
- HA Cluster deployment considerations and best practices
- Key vault best practices
- Tape device LUN mapping
- Maintenance and Troubleshooting
- In this chapter
- Encryption group and HA cluster maintenance
- Displaying encryption group configuration or status information
- Removing a member node from an encryption group
- Deleting an encryption group
- Removing an HA cluster member
- Displaying the HA cluster configuration
- Replacing an HA cluster member
- Deleting an HA cluster member
- Performing a manual failback of an encryption engine
- Encryption group merge and split use cases
- A member node failed and is replaced
- A member node reboots and comes back up
- A member node lost connection to the group leader
- A member node lost connection to all other nodes in the encryption group
- Several member nodes split off from an encryption group
- Adjusting heartbeat signaling values
- EG split possibilities requiring manual recovery
- Configuration impact of encryption group split or node isolation
- Encryption group database manual operations
- Key vault diagnostics
- Measuring encryption performance
- General encryption troubleshooting
- Troubleshooting examples using the CLI
- Management application encryption wizard troubleshooting
- LUN policy troubleshooting
- Loss of encryption group leader after power outage
- MPIO and internal LUN states
- FS8-18 blade removal and replacement
- Brocade Encryption Switch removal and replacement
- Deregistering a DPM key vault
- Reclaiming the WWN base of a failed Brocade Encryption Switch
- Removing stale rekey information for a LUN
- Downgrading firmware from Fabric OS 7.1.0
- Fabric OS and DPM Compatibility Matrix
- Splitting an encryption group into two encryption groups
- Moving an encryption blade from one EG to another in the same fabric
- Moving an encryption switch from one EG to another in the same fabric
- State and Status Information
- Index
Encryption group merge and split use cases
Recovery
If the auto failback policy is set, no intervention is required. After the node comes back up, all
devices and associated configurations and services that failed over to N1 earlier fail back to N3,
and the node resumes its normal function.
If the auto failback policy is not set, invoke a manual failback if required. Refer to the section
“Performing a manual failback of an encryption engine” on page 254 for instructions.
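If a manual failback is needed, it is initiated with the cryptocfg command on the switch. The sequence below is only a sketch: the placeholder WWNs and slot numbers are illustrative, and the argument order of the failback operation is an assumption that can vary across Fabric OS releases, so confirm it with the command help before running it.

   cryptocfg --help -hacluster
      Lists the HA cluster options, including failback, that the installed release supports.
   cryptocfg --show -hacluster -all
      Confirms that both encryption engines (on N1 and N3) are online in the HA cluster.
   cryptocfg --failback -EE <WWN of engine currently hosting the devices> <slot> <WWN of recovered engine> <slot>
      Fails the devices that failed over to N1 back to the recovered encryption engine on N3.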
A member node lost connection to the group leader
Assume N1, N2, and N3 form an encryption group, and N2 is the group leader node. N3 and N1 are
part of an HA cluster. Assume that N3 lost connection to the group leader node N2 but still
maintains communication with the other nodes in the encryption group.
Impact
Failover to N1 does not occur, because the isolated node and its encryption engines continue to
provide encryption services normally. However, the disconnection of N3 from the group leader breaks
the HA cluster and the failover capability between N3 and N1.
You cannot configure any CryptoTargets, LUN policies, tape pools, or security parameters that
would require communication with the isolated member node. In addition, you cannot start any
rekey operations (auto or manual).
Refer to the section “Configuration impact of encryption group split or node isolation” on page 264
for more information on which configuration changes are allowed.
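This condition can be confirmed from the CLI on a node that still has connectivity to the rest of the group. The commands below are a sketch of a typical check; the exact output format and status strings differ between Fabric OS releases.

   cryptocfg --show -groupcfg
      Displays the overall encryption group status; the group is reported as degraded while a member is unreachable.
   cryptocfg --show -groupmember -all
      Lists each member node and its state; the node that lost contact with the group leader is shown as not reachable.
   cryptocfg --show -hacluster -all
      Shows the HA cluster status; with N3 disconnected from the group leader, failover between N3 and N1 is no longer available.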
Recovery
Restore connectivity between the isolated node and the group leader. No further intervention is
required.
A member node lost connection to all other nodes in the encryption group
Assume N1, N2 and N3 form an encryption group and N2 is the group leader node. N3 and N1 are
part of an HA cluster. Assume that N3 lost connection with all other nodes in the group. Node N3
finds itself isolated from the encryption group and, following the group leader succession protocol,
elects itself as group leader. This action splits the encryption group into two encryption group
islands. EG1 includes the original encryption group minus the member node N3 that lost
connection to the encryption group. EG2 consists of a single node N3, which functions as the group
leader. Both EG1 and EG2 are in a degraded state.
Impact
• The two encryption group islands continue to function independently of each other with respect
to host I/O encryption traffic.
• Each encryption group registers the missing members as “offline” (see the status sketch below).
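A quick way to observe the split from the CLI is to run the group status command on a node in each island; each island converges around its own group leader and reports the members on the other side as offline. This is a sketch assuming the standard status option; verify the syntax with the cryptocfg help on your release.

   cryptocfg --show -groupcfg
      Run on a node in EG1: N2 is still the group leader, and N3 is listed as offline.
   cryptocfg --show -groupcfg
      Run on N3 (EG2): N3 reports itself as group leader and lists N1 and N2 as offline.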