Deployment Guide
VxFlex Network Deployment Guide using Dell EMC Networking 25GbE switches and OS9
5 VMware virtual network design
This section provides tables that outline the virtual network design used in this deployment. Specific
steps to create the distributed switches and VMkernels, and to set NIC teaming policies, are not covered in
this document. See the vSphere Networking Guide for vSphere 6.5, ESXi 6.5, and vCenter Server 6.5.
5.1 ESXi management
The default VMkernel, vmk0, is used for ESXi management and is migrated from the default standard switch
to the VDS created in this section. See: How to migrate service console / VMkernel port from standard
switches to VMware vSphere Distributed Switch.
5.2 Load balancing
This deployment uses two different load balancing algorithms. The VxFlex data networks (VxFlex Data 1
and VxFlex Data 2) use Route Based on Originating Virtual Port. Each of these port groups is assigned a
single interface as active while the other interface is unused. This creates a traditional storage topology in
which each host has two separate networks, both logically and physically.
The remaining port groups use Route Based on Physical NIC Load. Both uplinks are set as active, and I/O is
automatically balanced across both interfaces. The vSphere Distributed Switch (VDS) tests the associated
uplinks every 30 seconds; if the load on an uplink exceeds 75 percent of its capacity, the port ID of the
virtual machine with the highest I/O is moved to a different uplink.
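The rebalancing behavior described above can be sketched as a simple heuristic. This is not VMware's actual implementation; the uplink names, VM names, and load figures below are illustrative assumptions only.

```python
# Simplified sketch of the Route Based on Physical NIC Load rebalance
# heuristic: every check interval, if an uplink exceeds the 75 percent
# threshold, move the VM port with the highest I/O to another uplink.
# Uplink and VM names here are assumptions, not from the deployment.

THRESHOLD = 0.75  # rebalance when an uplink exceeds 75% utilization


def rebalance(uplinks):
    """uplinks maps uplink name -> {vm_name: utilization fraction}.
    If an uplink's total load exceeds THRESHOLD, move the VM with the
    highest I/O to the least-loaded other uplink. Returns the move
    made as (vm, source, target), or None if no uplink is overloaded."""
    loads = {u: sum(vms.values()) for u, vms in uplinks.items()}
    for uplink, load in loads.items():
        if load > THRESHOLD and uplinks[uplink]:
            # VM port with the highest I/O on the overloaded uplink
            vm = max(uplinks[uplink], key=uplinks[uplink].get)
            # least-loaded alternative uplink
            target = min((u for u in uplinks if u != uplink), key=loads.get)
            uplinks[target][vm] = uplinks[uplink].pop(vm)
            return (vm, uplink, target)
    return None


uplinks = {
    "Uplink1": {"vm-a": 0.50, "vm-b": 0.40},  # 90% utilized - overloaded
    "Uplink2": {"vm-c": 0.10},                # 10% utilized
}
move = rebalance(uplinks)
# vm-a (highest I/O on the overloaded uplink) moves to Uplink2
```

In the example, Uplink1 is at 90 percent, so the heaviest VM port (vm-a) is reassigned to Uplink2, bringing both uplinks back under the threshold.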
5.3 Configuration details
The following tables contain the pre- and post-installation configuration details for the VDS used for the
VxFlex cluster.
Virtual switch details
  VDS switch name: atx01-w01-vds01
  Function:
    • ESXI_MGMT_IP
    • ESXI_VMOTION_IP
    • SVM_MGMT_IP
    • Node_DATA1_IP & SVM_DATA1_IP
    • Node_DATA2_IP & SVM_DATA2_IP
  Physical NIC port count: 2
  MTU: 9000
Port group configuration settings
  Failover Detection: Link status only
  Notify switches: Enabled
  Failback: Yes
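The design rules in this section can be modeled as data and checked programmatically. The sketch below is illustrative only: the port group names are assumptions, and the structure is not a VMware API; only the teaming policies and failover settings come from the tables above.

```python
# Illustrative model of the port group teaming design described in this
# section. Port group names are assumed for the sketch; the policies,
# active/unused uplink split, and failover settings follow the tables.

PORT_GROUPS = {
    # VxFlex data networks: one active uplink, the other unused
    "VxFlex-Data1": {"policy": "originating_virtual_port",
                     "active": ["Uplink1"], "unused": ["Uplink2"]},
    "VxFlex-Data2": {"policy": "originating_virtual_port",
                     "active": ["Uplink2"], "unused": ["Uplink1"]},
    # Remaining port groups: load-based teaming, both uplinks active
    "ESXi-Mgmt":    {"policy": "physical_nic_load",
                     "active": ["Uplink1", "Uplink2"], "unused": []},
    "vMotion":      {"policy": "physical_nic_load",
                     "active": ["Uplink1", "Uplink2"], "unused": []},
}

# Failover settings shared by every port group (from the table above)
FAILOVER = {"failover_detection": "link_status_only",
            "notify_switches": True,
            "failback": True}


def validate(groups):
    """Check each port group against the design rules in this section."""
    for name, pg in groups.items():
        if pg["policy"] == "originating_virtual_port":
            # Data networks pin traffic to a single physical interface
            assert len(pg["active"]) == 1 and len(pg["unused"]) == 1, name
        else:
            # Load-based port groups keep both uplinks active
            assert len(pg["active"]) == 2 and not pg["unused"], name
    return True
```

Note that VxFlex Data 1 and VxFlex Data 2 deliberately use opposite active uplinks, which is what gives each host two physically separate storage paths.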