4.4 VxRail node connections
Workload domains consist of combinations of ESXi hosts and network equipment that can be set up with
varying levels of hardware redundancy. Workload domains are connected to a network core that distributes
data between them.
Figure 16 shows a physical view of Rack 1. On each VxRail node, the network daughter card (NDC) links carry
traditional VxRail network traffic such as ESXi management, vMotion, vSAN, and VxRail management. The
2x 25 GbE PCIe adapter, shown here in slot 2, is dedicated to NSX-T overlay and NSX-T uplink traffic.
Resiliency is achieved by providing redundant leaf switches at the top of rack (ToR).
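As an illustration only, a node-facing leaf port carrying these networks might be configured in Dell SmartFabric OS10 along the following lines; the interface number, description, VLAN IDs, and MTU are assumptions for this sketch rather than values from this guide, and lines beginning with "!" are annotations rather than commands:

    ! Example 25 GbE port toward a VxRail node NDC (port number and description assumed)
    interface ethernet1/1/1
     description sfo01w02vxrail01-NDC-port1
     no shutdown
     mtu 9216
     switchport mode trunk
     ! assumed VLAN IDs for ESXi management, vMotion, vSAN, and VxRail management
     switchport trunk allowed vlan 1611-1614

The matching port on the second leaf switch would typically be configured the same way so that either switch can carry the node's traffic if the other is lost.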
Each VxRail node has an iDRAC connected to an S3048-ON out-of-band (OOB) management switch. This
connection is used for the initial node configuration. The S5248F-ON leaf switches are connected to each
other using two QSFP28-DD 200 GbE direct attach cables (DACs) that form a VLT interconnect (VLTi) with a
total throughput of 400 GbE. Upstream connections to the spine switches are not shown but are configured
using two QSFP28 100 GbE uplinks.
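As a sketch only, the VLTi between the two leaf switches could be defined in OS10 as shown below; the interface range, backup destination address, and VLT MAC address are assumptions for this example rather than values from this guide, and lines beginning with "!" are annotations rather than commands:

    ! QSFP28-DD ports that carry the VLTi are removed from Layer 2 switching
    interface range ethernet1/1/49-1/1/52
     no shutdown
     no switchport

    ! VLT domain on sfo01-leaf01a; sfo01-leaf01b uses a mirrored configuration
    vlt-domain 1
     discovery-interface ethernet1/1/49-1/1/52
     ! backup destination is the peer's OOB management IP (example address)
     backup destination 100.67.0.11
     vlt-mac 00:00:01:02:03:04

With the VLTi in place, the pair of S5248F-ON switches appears as a single logical switch to the attached VxRail nodes while each switch remains independently managed.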
Figure 16  Dell EMC VxRail multirack Rack 1 physical connectivity (labels shown: S5248F-ON leaf switches sfo01-leaf01a and sfo01-leaf01b, S3048-ON switch for iDRAC OOB management, and VxRail E nodes sfo01w02vxrail01 and sfo01w02vxrail03)