DATA CENTER TECHNICAL BRIEF
Operational Simplicity: Automating and Simplifying SAN Provisioning
What Do Customers Do Now?
Defining, adding, or changing a zone database for device connectivity is currently a very manual process. In
cases where HBAs fail or servers come off lease, the zone database must be updated to accurately reflect
the changes in the fabric. Deploying a server in an FC SAN requires multiple administrative teams, such as the
server, SAN, and storage teams, to coordinate and perform configuration tasks, such as creating zones on the
fabric and Logical Unit Number (LUN) mapping and masking on the storage device, before deploying a server. To
configure WWN zoning and LUN masking, administrators need to know the physical pWWN of the server. This
means that administrative teams cannot start their configuration tasks until the physical server arrives and its
physical pWWN is known. Additionally, due to the sequential and interdependent nature of tasks across the
administrative teams, it may take days or weeks before a server is deployed in an FC SAN.
Zoning becomes more complicated in an Access Gateway environment where the port that is connected to the
switch is virtualized, allowing up to 255 virtualized connections from a single switch port. DFP reduces the zone
management overhead as devices move between Access Gateway ports.
In order to simplify and accelerate server deployment and improve operational efficiency, DFP enables SAN
administrators to pre-provision services such as zoning, Quality of Service (QoS), Device Connection Control (DCC),
or any service that requires port-level authentication, before servers arrive in the fabric.
How Does Dynamic Fabric Provisioning Work?
A list of Fabric-Assigned World Wide Names (FA-WWNs) for switch ports is automatically generated and can be
provisioned to current and future servers in the fabric. During the initial fabric login, the switch assigns the
virtual WWN to the Brocade HBA.
There are a few simple steps for setting up the DFP:
1. Select a switch port.
2. Configure an auto-assigned or user-assigned FA-WWN.
3. Create a zone with the FA-WWN and target device.
4. Enable zoning.
5. Connect a Brocade FC HBA to the switch port.
6. The HBA automatically acquires the FA-WWN, and the server is ready for operation.
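On a FOS switch, the steps above reduce to a short CLI sequence. The following transcript is an illustrative sketch only: the port number, zone and configuration names, and the target pWWN are hypothetical, and the exact command options should be verified against the Fabric OS Command Reference for your release.

```
switch:admin> fapwwn --assign -port 2/1                         # auto-assign an FA-WWN to port 2/1
switch:admin> fapwwn --show -port 2/1                           # display the FA-WWN generated by the switch
switch:admin> zonecreate "srv1_zone", "<fa-wwn>; <target pwwn>" # zone the FA-WWN with the storage target
switch:admin> cfgadd "prod_cfg", "srv1_zone"                    # add the zone to the active configuration
switch:admin> cfgenable "prod_cfg"                              # enable zoning
```

Once the Brocade FC HBA is later cabled to port 2/1 and logs in, it acquires the pre-provisioned FA-WWN, so the zone created above takes effect without further changes.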
In the future, as devices are replaced or moved, no additional changes are needed for zone management.
USE CASE: PRE-DEPLOYMENT OF PRIVATE CLOUD INFRASTRUCTURE
Infrastructure Validation
In a pre-deployment scenario where all the Gen 5 Fibre Channel switches are interconnected (see Figure 4),
all the ports can be placed in the ClearLink diagnostic mode. They are automatically run through the link
performance, latency, and distance measurement tests as well as the optical health checks prior to connecting
servers and storage to the SAN, without having to use expensive SAN testers to individually test each port or
link. If running Brocade FOS v7.1, the UltraScale ICLs of the Brocade DCX 8510 Backbones can also be tested
for link distance and latency. If Brocade 1860 Fabric Adapters are also deployed, the 16 Gbps links to the servers
are likewise automatically tested for link performance, latency, and distance, along with optical health checks.
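In practice, a port is placed in diagnostic mode and tested from the FOS CLI. The transcript below is a sketch under assumptions: the port number is hypothetical, and the D_Port command options shown should be confirmed against the Fabric OS Command Reference for the FOS version in use.

```
switch:admin> portcfgdport --enable 1/5    # place port 1/5 in ClearLink (D_Port) diagnostic mode
switch:admin> portdporttest --start 1/5    # run loopback, link performance, latency, and distance tests
switch:admin> portdporttest --show 1/5     # review test results and optical health status
switch:admin> portcfgdport --disable 1/5   # return the port to normal operation
```

Because the tests run between the switch port and its link partner, both ends of each link can be validated this way before any production traffic flows.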